Sep 13 01:55:37.003844 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:30:50 -00 2025
Sep 13 01:55:37.003877 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 01:55:37.003895 kernel: BIOS-provided physical RAM map:
Sep 13 01:55:37.003909 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 13 01:55:37.003919 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 13 01:55:37.003928 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 13 01:55:37.003939 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Sep 13 01:55:37.003949 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Sep 13 01:55:37.003965 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 13 01:55:37.003975 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 13 01:55:37.003985 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 13 01:55:37.003995 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 13 01:55:37.004009 kernel: NX (Execute Disable) protection: active
Sep 13 01:55:37.004019 kernel: APIC: Static calls initialized
Sep 13 01:55:37.004031 kernel: SMBIOS 2.8 present.
Sep 13 01:55:37.004042 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Sep 13 01:55:37.004053 kernel: Hypervisor detected: KVM
Sep 13 01:55:37.004068 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 01:55:37.004080 kernel: kvm-clock: using sched offset of 4444374212 cycles
Sep 13 01:55:37.004091 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 01:55:37.004102 kernel: tsc: Detected 2799.998 MHz processor
Sep 13 01:55:37.004113 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 01:55:37.004124 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 01:55:37.004135 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Sep 13 01:55:37.004146 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 13 01:55:37.004156 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 01:55:37.004172 kernel: Using GB pages for direct mapping
Sep 13 01:55:37.004183 kernel: ACPI: Early table checksum verification disabled
Sep 13 01:55:37.004213 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Sep 13 01:55:37.004224 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 01:55:37.004244 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 01:55:37.004256 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 01:55:37.004267 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Sep 13 01:55:37.004278 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 01:55:37.004289 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 01:55:37.004306 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 01:55:37.004317 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 01:55:37.004328 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Sep 13 01:55:37.004338 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Sep 13 01:55:37.004349 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Sep 13 01:55:37.004366 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Sep 13 01:55:37.004378 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Sep 13 01:55:37.004394 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Sep 13 01:55:37.004405 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Sep 13 01:55:37.004417 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 13 01:55:37.004428 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 13 01:55:37.004439 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Sep 13 01:55:37.004451 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Sep 13 01:55:37.004462 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Sep 13 01:55:37.004477 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Sep 13 01:55:37.004489 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Sep 13 01:55:37.004500 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Sep 13 01:55:37.004511 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Sep 13 01:55:37.004522 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Sep 13 01:55:37.004533 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Sep 13 01:55:37.004544 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Sep 13 01:55:37.004555 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Sep 13 01:55:37.004565 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Sep 13 01:55:37.004577 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Sep 13 01:55:37.004592 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Sep 13 01:55:37.004603 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 13 01:55:37.004615 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 13 01:55:37.004630 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Sep 13 01:55:37.004642 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Sep 13 01:55:37.004653 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Sep 13 01:55:37.004665 kernel: Zone ranges:
Sep 13 01:55:37.004676 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 01:55:37.004688 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Sep 13 01:55:37.004703 kernel: Normal empty
Sep 13 01:55:37.004715 kernel: Movable zone start for each node
Sep 13 01:55:37.004726 kernel: Early memory node ranges
Sep 13 01:55:37.004737 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 13 01:55:37.004748 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Sep 13 01:55:37.004760 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Sep 13 01:55:37.004771 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 01:55:37.004782 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 13 01:55:37.004794 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Sep 13 01:55:37.004814 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 13 01:55:37.004830 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 01:55:37.004842 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 01:55:37.004853 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 01:55:37.004864 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 01:55:37.004875 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 01:55:37.004887 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 01:55:37.004898 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 01:55:37.004909 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 01:55:37.004921 kernel: TSC deadline timer available
Sep 13 01:55:37.004937 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Sep 13 01:55:37.004953 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 13 01:55:37.004964 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 13 01:55:37.004976 kernel: Booting paravirtualized kernel on KVM
Sep 13 01:55:37.004987 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 01:55:37.004999 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Sep 13 01:55:37.005010 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u262144
Sep 13 01:55:37.005022 kernel: pcpu-alloc: s197160 r8192 d32216 u262144 alloc=1*2097152
Sep 13 01:55:37.005035 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Sep 13 01:55:37.005051 kernel: kvm-guest: PV spinlocks enabled
Sep 13 01:55:37.005062 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 13 01:55:37.005075 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 01:55:37.005087 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 01:55:37.005098 kernel: random: crng init done
Sep 13 01:55:37.005109 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 01:55:37.005121 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 13 01:55:37.005132 kernel: Fallback order for Node 0: 0
Sep 13 01:55:37.005148 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 515804
Sep 13 01:55:37.005160 kernel: Policy zone: DMA32
Sep 13 01:55:37.005171 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 01:55:37.005182 kernel: software IO TLB: area num 16.
Sep 13 01:55:37.005210 kernel: Memory: 1901536K/2096616K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 194820K reserved, 0K cma-reserved)
Sep 13 01:55:37.005222 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Sep 13 01:55:37.005242 kernel: Kernel/User page tables isolation: enabled
Sep 13 01:55:37.005255 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 13 01:55:37.005267 kernel: ftrace: allocated 149 pages with 4 groups
Sep 13 01:55:37.005284 kernel: Dynamic Preempt: voluntary
Sep 13 01:55:37.005296 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 01:55:37.005308 kernel: rcu: RCU event tracing is enabled.
Sep 13 01:55:37.005320 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Sep 13 01:55:37.005331 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 01:55:37.005354 kernel: Rude variant of Tasks RCU enabled.
Sep 13 01:55:37.005371 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 01:55:37.005383 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 01:55:37.005395 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Sep 13 01:55:37.005407 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Sep 13 01:55:37.005418 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 01:55:37.005430 kernel: Console: colour VGA+ 80x25
Sep 13 01:55:37.005446 kernel: printk: console [tty0] enabled
Sep 13 01:55:37.005459 kernel: printk: console [ttyS0] enabled
Sep 13 01:55:37.005471 kernel: ACPI: Core revision 20230628
Sep 13 01:55:37.005482 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 01:55:37.005494 kernel: x2apic enabled
Sep 13 01:55:37.005511 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 01:55:37.005523 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Sep 13 01:55:37.005535 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Sep 13 01:55:37.005547 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 13 01:55:37.005559 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 13 01:55:37.005571 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 13 01:55:37.005583 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 01:55:37.005594 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 01:55:37.005606 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 01:55:37.005624 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep 13 01:55:37.005640 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 01:55:37.005652 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 13 01:55:37.005664 kernel: MDS: Mitigation: Clear CPU buffers
Sep 13 01:55:37.005676 kernel: MMIO Stale Data: Unknown: No mitigations
Sep 13 01:55:37.005688 kernel: SRBDS: Unknown: Dependent on hypervisor status
Sep 13 01:55:37.005700 kernel: active return thunk: its_return_thunk
Sep 13 01:55:37.005711 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 01:55:37.005723 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 01:55:37.005735 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 01:55:37.005763 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 01:55:37.005779 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 01:55:37.005791 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 13 01:55:37.005802 kernel: Freeing SMP alternatives memory: 32K
Sep 13 01:55:37.005814 kernel: pid_max: default: 32768 minimum: 301
Sep 13 01:55:37.005825 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 13 01:55:37.005837 kernel: landlock: Up and running.
Sep 13 01:55:37.005848 kernel: SELinux: Initializing.
Sep 13 01:55:37.005872 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 01:55:37.005884 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 01:55:37.005895 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Sep 13 01:55:37.005907 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 13 01:55:37.005924 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 13 01:55:37.005936 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 13 01:55:37.005949 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Sep 13 01:55:37.005961 kernel: signal: max sigframe size: 1776
Sep 13 01:55:37.005973 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 01:55:37.005985 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 01:55:37.005997 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 13 01:55:37.006009 kernel: smp: Bringing up secondary CPUs ...
Sep 13 01:55:37.006021 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 01:55:37.006044 kernel: .... node #0, CPUs: #1
Sep 13 01:55:37.006057 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Sep 13 01:55:37.006069 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 01:55:37.006080 kernel: smpboot: Max logical packages: 16
Sep 13 01:55:37.006092 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Sep 13 01:55:37.006107 kernel: devtmpfs: initialized
Sep 13 01:55:37.006118 kernel: x86/mm: Memory block size: 128MB
Sep 13 01:55:37.006130 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 01:55:37.006143 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Sep 13 01:55:37.006155 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 01:55:37.006171 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 01:55:37.006184 kernel: audit: initializing netlink subsys (disabled)
Sep 13 01:55:37.006196 kernel: audit: type=2000 audit(1757728535.033:1): state=initialized audit_enabled=0 res=1
Sep 13 01:55:37.006207 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 01:55:37.006241 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 01:55:37.006254 kernel: cpuidle: using governor menu
Sep 13 01:55:37.006266 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 01:55:37.006278 kernel: dca service started, version 1.12.1
Sep 13 01:55:37.006290 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 13 01:55:37.006308 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 13 01:55:37.006320 kernel: PCI: Using configuration type 1 for base access
Sep 13 01:55:37.006332 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 01:55:37.006344 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 13 01:55:37.006357 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 13 01:55:37.006369 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 01:55:37.006380 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 01:55:37.006392 kernel: ACPI: Added _OSI(Module Device)
Sep 13 01:55:37.006409 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 01:55:37.006421 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 01:55:37.006433 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 01:55:37.006445 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 13 01:55:37.006457 kernel: ACPI: Interpreter enabled
Sep 13 01:55:37.006468 kernel: ACPI: PM: (supports S0 S5)
Sep 13 01:55:37.006480 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 01:55:37.006492 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 01:55:37.006504 kernel: PCI: Using E820 reservations for host bridge windows
Sep 13 01:55:37.006521 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 13 01:55:37.006533 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 01:55:37.006801 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 01:55:37.006982 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 13 01:55:37.007155 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 13 01:55:37.007173 kernel: PCI host bridge to bus 0000:00
Sep 13 01:55:37.007864 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 01:55:37.008025 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 01:55:37.008176 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 01:55:37.008366 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Sep 13 01:55:37.008517 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 13 01:55:37.008669 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Sep 13 01:55:37.008820 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 01:55:37.009008 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 13 01:55:37.009252 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Sep 13 01:55:37.009424 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Sep 13 01:55:37.009602 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Sep 13 01:55:37.009774 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Sep 13 01:55:37.009941 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 01:55:37.010125 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Sep 13 01:55:37.010332 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Sep 13 01:55:37.010511 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Sep 13 01:55:37.010677 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Sep 13 01:55:37.010872 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Sep 13 01:55:37.011039 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Sep 13 01:55:37.011229 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Sep 13 01:55:37.011415 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Sep 13 01:55:37.011604 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Sep 13 01:55:37.011771 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Sep 13 01:55:37.011947 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Sep 13 01:55:37.012112 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Sep 13 01:55:37.012315 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Sep 13 01:55:37.012483 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Sep 13 01:55:37.012685 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Sep 13 01:55:37.012853 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Sep 13 01:55:37.013033 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Sep 13 01:55:37.013286 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Sep 13 01:55:37.013451 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Sep 13 01:55:37.013611 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Sep 13 01:55:37.013778 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Sep 13 01:55:37.013953 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Sep 13 01:55:37.014115 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Sep 13 01:55:37.014303 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Sep 13 01:55:37.014475 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Sep 13 01:55:37.014650 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 13 01:55:37.014819 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 13 01:55:37.016414 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 13 01:55:37.016599 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Sep 13 01:55:37.016762 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Sep 13 01:55:37.016941 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 13 01:55:37.017124 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 13 01:55:37.017328 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Sep 13 01:55:37.017504 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Sep 13 01:55:37.017676 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Sep 13 01:55:37.017835 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Sep 13 01:55:37.017992 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 13 01:55:37.018167 kernel: pci_bus 0000:02: extended config space not accessible
Sep 13 01:55:37.020443 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Sep 13 01:55:37.020648 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Sep 13 01:55:37.020831 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Sep 13 01:55:37.021000 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Sep 13 01:55:37.021345 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Sep 13 01:55:37.021526 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Sep 13 01:55:37.021696 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Sep 13 01:55:37.021862 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Sep 13 01:55:37.022038 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 13 01:55:37.022277 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Sep 13 01:55:37.022452 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Sep 13 01:55:37.022617 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Sep 13 01:55:37.022780 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Sep 13 01:55:37.022942 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 13 01:55:37.023106 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Sep 13 01:55:37.025353 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Sep 13 01:55:37.025536 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 13 01:55:37.025708 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Sep 13 01:55:37.025871 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Sep 13 01:55:37.026042 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 13 01:55:37.026268 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Sep 13 01:55:37.026437 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Sep 13 01:55:37.026606 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 13 01:55:37.026778 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Sep 13 01:55:37.026959 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Sep 13 01:55:37.027143 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 13 01:55:37.029386 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Sep 13 01:55:37.029564 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Sep 13 01:55:37.029730 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 13 01:55:37.029749 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 01:55:37.029762 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 01:55:37.029774 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 01:55:37.029786 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 01:55:37.029807 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 13 01:55:37.029819 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 13 01:55:37.029831 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 13 01:55:37.029844 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 13 01:55:37.029856 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 13 01:55:37.029868 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 13 01:55:37.029880 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 13 01:55:37.029892 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 13 01:55:37.029904 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 13 01:55:37.029921 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 13 01:55:37.029934 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 13 01:55:37.029946 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 13 01:55:37.029958 kernel: iommu: Default domain type: Translated
Sep 13 01:55:37.029970 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 01:55:37.029983 kernel: PCI: Using ACPI for IRQ routing
Sep 13 01:55:37.029995 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 01:55:37.030007 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 13 01:55:37.030018 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Sep 13 01:55:37.030198 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 13 01:55:37.030385 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 13 01:55:37.030546 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 01:55:37.030564 kernel: vgaarb: loaded
Sep 13 01:55:37.030577 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 01:55:37.030589 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 01:55:37.030602 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 01:55:37.030614 kernel: pnp: PnP ACPI init
Sep 13 01:55:37.030801 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 13 01:55:37.030821 kernel: pnp: PnP ACPI: found 5 devices
Sep 13 01:55:37.030834 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 01:55:37.030846 kernel: NET: Registered PF_INET protocol family
Sep 13 01:55:37.030859 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 01:55:37.030871 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 13 01:55:37.030883 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 01:55:37.030895 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 01:55:37.030914 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 13 01:55:37.030927 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 13 01:55:37.030939 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 01:55:37.030951 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 01:55:37.030972 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 01:55:37.030984 kernel: NET: Registered PF_XDP protocol family
Sep 13 01:55:37.031144 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Sep 13 01:55:37.033359 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Sep 13 01:55:37.033548 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Sep 13 01:55:37.033717 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Sep 13 01:55:37.033911 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Sep 13 01:55:37.034073 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Sep 13 01:55:37.035846 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Sep 13 01:55:37.036026 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Sep 13 01:55:37.036222 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Sep 13 01:55:37.036401 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Sep 13 01:55:37.036564 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Sep 13 01:55:37.036728 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Sep 13 01:55:37.036888 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Sep 13 01:55:37.037045 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Sep 13 01:55:37.037219 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Sep 13 01:55:37.037397 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Sep 13 01:55:37.037598 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Sep 13 01:55:37.037774 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Sep 13 01:55:37.037937 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Sep 13 01:55:37.038101 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Sep 13 01:55:37.040314 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Sep 13 01:55:37.040480 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 13 01:55:37.040640 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Sep 13 01:55:37.040800 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Sep 13 01:55:37.040969 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Sep 13 01:55:37.041131 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 13 01:55:37.041330 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Sep 13 01:55:37.041498 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Sep 13 01:55:37.041661 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Sep 13 01:55:37.041827 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 13 01:55:37.041999 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Sep 13 01:55:37.042158 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Sep 13 01:55:37.044358 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Sep 13 01:55:37.044524 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 13 01:55:37.044690 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Sep 13 01:55:37.044852 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Sep 13 01:55:37.045019 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Sep 13 01:55:37.045182 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 13 01:55:37.045371 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Sep 13 01:55:37.045543 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Sep 13 01:55:37.045708 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Sep 13 01:55:37.045874 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 13 01:55:37.046040 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Sep 13 01:55:37.048241 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Sep 13 01:55:37.048417 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Sep 13 01:55:37.048578 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 13 01:55:37.048742 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Sep 13 01:55:37.048900 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Sep 13 01:55:37.049060 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Sep 13 01:55:37.051254 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 13 01:55:37.051417 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 01:55:37.051564 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 01:55:37.051709 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 01:55:37.051865 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Sep 13 01:55:37.052011 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 13 01:55:37.052156 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Sep 13 01:55:37.052357 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Sep 13 01:55:37.052513 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Sep 13 01:55:37.052667 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 13 01:55:37.052837 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Sep 13 01:55:37.053011 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Sep 13 01:55:37.053166 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Sep 13 01:55:37.053346 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 13 01:55:37.053513 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Sep 13 01:55:37.053668 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Sep 13 01:55:37.053821 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 13 01:55:37.053999 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Sep 13 01:55:37.054165 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Sep 13 01:55:37.056356 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 13 01:55:37.056531 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Sep 13 01:55:37.056685 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Sep 13 01:55:37.056835 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 13 01:55:37.056997 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Sep 13 01:55:37.057157 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Sep 13 01:55:37.057335 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 13 01:55:37.057500 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Sep 13 01:55:37.057652 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Sep 13 01:55:37.057803 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 13 01:55:37.057968 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Sep 13 01:55:37.058126 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Sep 13 01:55:37.060351 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 13 01:55:37.060373 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 13 01:55:37.060387 kernel: PCI: CLS 0 bytes, default 64
Sep 13 01:55:37.060400 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 13 01:55:37.060413 kernel: software IO TLB: mapped [mem 
0x0000000079800000-0x000000007d800000] (64MB) Sep 13 01:55:37.060426 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 13 01:55:37.060439 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Sep 13 01:55:37.060452 kernel: Initialise system trusted keyrings Sep 13 01:55:37.060471 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 13 01:55:37.060485 kernel: Key type asymmetric registered Sep 13 01:55:37.060497 kernel: Asymmetric key parser 'x509' registered Sep 13 01:55:37.060510 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 13 01:55:37.060523 kernel: io scheduler mq-deadline registered Sep 13 01:55:37.060536 kernel: io scheduler kyber registered Sep 13 01:55:37.060553 kernel: io scheduler bfq registered Sep 13 01:55:37.060718 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Sep 13 01:55:37.060883 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Sep 13 01:55:37.061050 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 01:55:37.061228 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Sep 13 01:55:37.061402 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Sep 13 01:55:37.061563 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 01:55:37.061729 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Sep 13 01:55:37.061892 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Sep 13 01:55:37.062160 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 01:55:37.064405 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Sep 13 01:55:37.064570 kernel: pcieport 0000:00:02.3: AER: enabled 
with IRQ 27 Sep 13 01:55:37.064730 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 01:55:37.064895 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Sep 13 01:55:37.065081 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Sep 13 01:55:37.067455 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 01:55:37.067625 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Sep 13 01:55:37.067788 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Sep 13 01:55:37.067949 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 01:55:37.068112 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Sep 13 01:55:37.068326 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Sep 13 01:55:37.068497 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 01:55:37.068659 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Sep 13 01:55:37.068818 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Sep 13 01:55:37.068977 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 01:55:37.068997 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 13 01:55:37.069011 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 13 01:55:37.069024 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 13 01:55:37.069044 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 13 01:55:37.069058 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 13 01:55:37.069071 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 
0x60,0x64 irq 1,12 Sep 13 01:55:37.069083 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 13 01:55:37.069096 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 13 01:55:37.069308 kernel: rtc_cmos 00:03: RTC can wake from S4 Sep 13 01:55:37.069329 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 13 01:55:37.069478 kernel: rtc_cmos 00:03: registered as rtc0 Sep 13 01:55:37.069639 kernel: rtc_cmos 00:03: setting system clock to 2025-09-13T01:55:36 UTC (1757728536) Sep 13 01:55:37.069792 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Sep 13 01:55:37.069811 kernel: intel_pstate: CPU model not supported Sep 13 01:55:37.069824 kernel: NET: Registered PF_INET6 protocol family Sep 13 01:55:37.069837 kernel: Segment Routing with IPv6 Sep 13 01:55:37.069850 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 01:55:37.069863 kernel: NET: Registered PF_PACKET protocol family Sep 13 01:55:37.069876 kernel: Key type dns_resolver registered Sep 13 01:55:37.069895 kernel: IPI shorthand broadcast: enabled Sep 13 01:55:37.069908 kernel: sched_clock: Marking stable (1124072406, 218848956)->(1562711853, -219790491) Sep 13 01:55:37.069921 kernel: registered taskstats version 1 Sep 13 01:55:37.069934 kernel: Loading compiled-in X.509 certificates Sep 13 01:55:37.069947 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6' Sep 13 01:55:37.069971 kernel: Key type .fscrypt registered Sep 13 01:55:37.069984 kernel: Key type fscrypt-provisioning registered Sep 13 01:55:37.069996 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 13 01:55:37.070009 kernel: ima: Allocated hash algorithm: sha1 Sep 13 01:55:37.070038 kernel: ima: No architecture policies found Sep 13 01:55:37.070051 kernel: clk: Disabling unused clocks Sep 13 01:55:37.070064 kernel: Freeing unused kernel image (initmem) memory: 42884K Sep 13 01:55:37.070076 kernel: Write protecting the kernel read-only data: 36864k Sep 13 01:55:37.070089 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Sep 13 01:55:37.070102 kernel: Run /init as init process Sep 13 01:55:37.070119 kernel: with arguments: Sep 13 01:55:37.070132 kernel: /init Sep 13 01:55:37.070144 kernel: with environment: Sep 13 01:55:37.070162 kernel: HOME=/ Sep 13 01:55:37.070174 kernel: TERM=linux Sep 13 01:55:37.070187 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 01:55:37.070202 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 01:55:37.070242 systemd[1]: Detected virtualization kvm. Sep 13 01:55:37.070258 systemd[1]: Detected architecture x86-64. Sep 13 01:55:37.070271 systemd[1]: Running in initrd. Sep 13 01:55:37.070284 systemd[1]: No hostname configured, using default hostname. Sep 13 01:55:37.070304 systemd[1]: Hostname set to . Sep 13 01:55:37.070317 systemd[1]: Initializing machine ID from VM UUID. Sep 13 01:55:37.070331 systemd[1]: Queued start job for default target initrd.target. Sep 13 01:55:37.070344 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 01:55:37.070358 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Sep 13 01:55:37.070372 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 13 01:55:37.070385 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 01:55:37.070399 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 13 01:55:37.070418 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 13 01:55:37.070434 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 13 01:55:37.070448 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 13 01:55:37.070462 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 01:55:37.070475 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 01:55:37.070488 systemd[1]: Reached target paths.target - Path Units. Sep 13 01:55:37.070502 systemd[1]: Reached target slices.target - Slice Units. Sep 13 01:55:37.070521 systemd[1]: Reached target swap.target - Swaps. Sep 13 01:55:37.070534 systemd[1]: Reached target timers.target - Timer Units. Sep 13 01:55:37.070548 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 01:55:37.070561 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 01:55:37.070575 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 13 01:55:37.070589 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 13 01:55:37.070602 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 01:55:37.070616 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 01:55:37.070635 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 13 01:55:37.070649 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 01:55:37.070662 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 13 01:55:37.070676 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 01:55:37.070689 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 13 01:55:37.070703 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 01:55:37.070717 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 01:55:37.070730 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 01:55:37.070744 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 01:55:37.070805 systemd-journald[202]: Collecting audit messages is disabled. Sep 13 01:55:37.070837 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 13 01:55:37.070852 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 01:55:37.070865 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 01:55:37.070886 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 01:55:37.070900 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 01:55:37.070913 kernel: Bridge firewalling registered Sep 13 01:55:37.070927 systemd-journald[202]: Journal started Sep 13 01:55:37.070956 systemd-journald[202]: Runtime Journal (/run/log/journal/9f4d36ea2d004c68ac3ae137bad549ab) is 4.7M, max 38.0M, 33.2M free. Sep 13 01:55:37.018003 systemd-modules-load[203]: Inserted module 'overlay' Sep 13 01:55:37.122770 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 01:55:37.067392 systemd-modules-load[203]: Inserted module 'br_netfilter' Sep 13 01:55:37.123809 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Sep 13 01:55:37.125016 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 01:55:37.126507 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 01:55:37.141424 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 01:55:37.151442 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 01:55:37.154365 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 01:55:37.158406 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 01:55:37.172796 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 01:55:37.183813 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 01:55:37.187299 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 01:55:37.196497 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 13 01:55:37.198624 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 01:55:37.203404 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 01:55:37.214980 dracut-cmdline[233]: dracut-dracut-053 Sep 13 01:55:37.219747 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534 Sep 13 01:55:37.253249 systemd-resolved[238]: Positive Trust Anchors: Sep 13 01:55:37.253270 systemd-resolved[238]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 01:55:37.253318 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 01:55:37.257534 systemd-resolved[238]: Defaulting to hostname 'linux'. Sep 13 01:55:37.259247 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 01:55:37.262669 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 01:55:37.335255 kernel: SCSI subsystem initialized Sep 13 01:55:37.346208 kernel: Loading iSCSI transport class v2.0-870. Sep 13 01:55:37.359278 kernel: iscsi: registered transport (tcp) Sep 13 01:55:37.384594 kernel: iscsi: registered transport (qla4xxx) Sep 13 01:55:37.384662 kernel: QLogic iSCSI HBA Driver Sep 13 01:55:37.435904 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 13 01:55:37.442403 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 13 01:55:37.491589 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 13 01:55:37.491674 kernel: device-mapper: uevent: version 1.0.3 Sep 13 01:55:37.492367 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 13 01:55:37.540289 kernel: raid6: sse2x4 gen() 14226 MB/s Sep 13 01:55:37.558267 kernel: raid6: sse2x2 gen() 9572 MB/s Sep 13 01:55:37.576814 kernel: raid6: sse2x1 gen() 10456 MB/s Sep 13 01:55:37.576874 kernel: raid6: using algorithm sse2x4 gen() 14226 MB/s Sep 13 01:55:37.595849 kernel: raid6: .... xor() 8214 MB/s, rmw enabled Sep 13 01:55:37.595928 kernel: raid6: using ssse3x2 recovery algorithm Sep 13 01:55:37.621283 kernel: xor: automatically using best checksumming function avx Sep 13 01:55:37.803485 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 13 01:55:37.817052 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 13 01:55:37.825422 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 01:55:37.842729 systemd-udevd[419]: Using default interface naming scheme 'v255'. Sep 13 01:55:37.849427 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 01:55:37.860184 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 13 01:55:37.878303 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation Sep 13 01:55:37.916064 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 01:55:37.923402 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 01:55:38.028272 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 01:55:38.038382 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 13 01:55:38.064427 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 13 01:55:38.068789 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Sep 13 01:55:38.071596 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 01:55:38.073895 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 01:55:38.080277 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 13 01:55:38.112692 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 13 01:55:38.168233 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Sep 13 01:55:38.179488 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 01:55:38.197234 kernel: ACPI: bus type USB registered Sep 13 01:55:38.205242 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Sep 13 01:55:38.207495 kernel: usbcore: registered new interface driver usbfs Sep 13 01:55:38.209219 kernel: usbcore: registered new interface driver hub Sep 13 01:55:38.214214 kernel: usbcore: registered new device driver usb Sep 13 01:55:38.223229 kernel: AVX version of gcm_enc/dec engaged. Sep 13 01:55:38.232499 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 01:55:38.232536 kernel: GPT:17805311 != 125829119 Sep 13 01:55:38.232559 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 01:55:38.232575 kernel: GPT:17805311 != 125829119 Sep 13 01:55:38.232590 kernel: AES CTR mode by8 optimization enabled Sep 13 01:55:38.232607 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 01:55:38.232634 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 01:55:38.241057 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 01:55:38.242384 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 01:55:38.243384 kernel: libata version 3.00 loaded. Sep 13 01:55:38.246475 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 01:55:38.247247 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Sep 13 01:55:38.247547 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 01:55:38.249712 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 01:55:38.260507 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 01:55:38.269257 kernel: ahci 0000:00:1f.2: version 3.0 Sep 13 01:55:38.271253 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 13 01:55:38.275784 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 13 01:55:38.276015 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 13 01:55:38.289223 kernel: scsi host0: ahci Sep 13 01:55:38.293220 kernel: scsi host1: ahci Sep 13 01:55:38.296215 kernel: scsi host2: ahci Sep 13 01:55:38.299216 kernel: scsi host3: ahci Sep 13 01:55:38.301214 kernel: scsi host4: ahci Sep 13 01:55:38.306627 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (467) Sep 13 01:55:38.306671 kernel: scsi host5: ahci Sep 13 01:55:38.309207 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (464) Sep 13 01:55:38.313291 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Sep 13 01:55:38.313323 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Sep 13 01:55:38.313342 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Sep 13 01:55:38.313358 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Sep 13 01:55:38.313373 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Sep 13 01:55:38.313389 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Sep 13 01:55:38.334649 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 13 01:55:38.410930 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 13 01:55:38.427866 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 13 01:55:38.428690 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 13 01:55:38.441553 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 13 01:55:38.448316 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 13 01:55:38.456419 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 13 01:55:38.460377 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 01:55:38.468598 disk-uuid[562]: Primary Header is updated. Sep 13 01:55:38.468598 disk-uuid[562]: Secondary Entries is updated. Sep 13 01:55:38.468598 disk-uuid[562]: Secondary Header is updated. Sep 13 01:55:38.475306 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 01:55:38.486662 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 01:55:38.492264 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 01:55:38.495824 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Sep 13 01:55:38.626381 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 13 01:55:38.626450 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 13 01:55:38.626692 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 13 01:55:38.631511 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 13 01:55:38.631563 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 13 01:55:38.633252 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 13 01:55:38.643220 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Sep 13 01:55:38.644731 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Sep 13 01:55:38.651247 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Sep 13 01:55:38.654534 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Sep 13 01:55:38.658248 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Sep 13 01:55:38.658486 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Sep 13 01:55:38.664346 kernel: hub 1-0:1.0: USB hub found Sep 13 01:55:38.664839 kernel: hub 1-0:1.0: 4 ports detected Sep 13 01:55:38.673225 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Sep 13 01:55:38.681224 kernel: hub 2-0:1.0: USB hub found Sep 13 01:55:38.687216 kernel: hub 2-0:1.0: 4 ports detected Sep 13 01:55:38.904281 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Sep 13 01:55:39.045226 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 13 01:55:39.051536 kernel: usbcore: registered new interface driver usbhid Sep 13 01:55:39.051573 kernel: usbhid: USB HID core driver Sep 13 01:55:39.058514 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Sep 13 01:55:39.058552 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Sep 13 01:55:39.485510 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 01:55:39.486142 disk-uuid[563]: The operation has completed successfully. Sep 13 01:55:39.537029 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 01:55:39.537250 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 13 01:55:39.561428 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 13 01:55:39.565156 sh[587]: Success Sep 13 01:55:39.580239 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Sep 13 01:55:39.638653 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 13 01:55:39.649339 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 13 01:55:39.653609 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 13 01:55:39.675636 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa Sep 13 01:55:39.675691 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 13 01:55:39.677692 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 13 01:55:39.680928 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 13 01:55:39.680965 kernel: BTRFS info (device dm-0): using free space tree Sep 13 01:55:39.691976 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 13 01:55:39.693489 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 13 01:55:39.699386 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 13 01:55:39.703375 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 13 01:55:39.718460 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 01:55:39.718506 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 01:55:39.718526 kernel: BTRFS info (device vda6): using free space tree Sep 13 01:55:39.727253 kernel: BTRFS info (device vda6): auto enabling async discard Sep 13 01:55:39.741212 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 01:55:39.743747 kernel: BTRFS info (device vda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 01:55:39.751324 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 13 01:55:39.759372 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 13 01:55:39.855151 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 01:55:39.864420 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 13 01:55:39.908431 ignition[681]: Ignition 2.19.0 Sep 13 01:55:39.908456 ignition[681]: Stage: fetch-offline Sep 13 01:55:39.912815 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 01:55:39.908551 ignition[681]: no configs at "/usr/lib/ignition/base.d" Sep 13 01:55:39.908571 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 13 01:55:39.916817 systemd-networkd[770]: lo: Link UP Sep 13 01:55:39.908765 ignition[681]: parsed url from cmdline: "" Sep 13 01:55:39.916823 systemd-networkd[770]: lo: Gained carrier Sep 13 01:55:39.908772 ignition[681]: no config URL provided Sep 13 01:55:39.919866 systemd-networkd[770]: Enumeration completed Sep 13 01:55:39.908782 ignition[681]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 01:55:39.920151 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 01:55:39.908798 ignition[681]: no config at "/usr/lib/ignition/user.ign" Sep 13 01:55:39.920524 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 01:55:39.908807 ignition[681]: failed to fetch config: resource requires networking Sep 13 01:55:39.920530 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 01:55:39.909123 ignition[681]: Ignition finished successfully Sep 13 01:55:39.924844 systemd-networkd[770]: eth0: Link UP Sep 13 01:55:39.924854 systemd-networkd[770]: eth0: Gained carrier Sep 13 01:55:39.924870 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 01:55:39.926226 systemd[1]: Reached target network.target - Network. Sep 13 01:55:39.934616 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Sep 13 01:55:39.945294 systemd-networkd[770]: eth0: DHCPv4 address 10.230.52.214/30, gateway 10.230.52.213 acquired from 10.230.52.213
Sep 13 01:55:39.959493 ignition[777]: Ignition 2.19.0
Sep 13 01:55:39.960182 ignition[777]: Stage: fetch
Sep 13 01:55:39.960482 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Sep 13 01:55:39.960503 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 13 01:55:39.960667 ignition[777]: parsed url from cmdline: ""
Sep 13 01:55:39.960674 ignition[777]: no config URL provided
Sep 13 01:55:39.960683 ignition[777]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 01:55:39.960699 ignition[777]: no config at "/usr/lib/ignition/user.ign"
Sep 13 01:55:39.961846 ignition[777]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Sep 13 01:55:39.961892 ignition[777]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Sep 13 01:55:39.962018 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Sep 13 01:55:39.978614 ignition[777]: GET result: OK
Sep 13 01:55:39.978772 ignition[777]: parsing config with SHA512: cbcc9e4918c989460b069481cf62348179269bb5d5af215f3a79fbf30785c4be412f4a2e470ec7159be7b4e5db6a716304db2a3810121481baaa5102ba2cda94
Sep 13 01:55:39.984208 unknown[777]: fetched base config from "system"
Sep 13 01:55:39.985458 unknown[777]: fetched base config from "system"
Sep 13 01:55:39.985487 unknown[777]: fetched user config from "openstack"
Sep 13 01:55:39.986034 ignition[777]: fetch: fetch complete
Sep 13 01:55:39.986043 ignition[777]: fetch: fetch passed
Sep 13 01:55:39.987939 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 13 01:55:39.986111 ignition[777]: Ignition finished successfully
Sep 13 01:55:40.001394 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
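The fetch stage above falls back from the config drive to the OpenStack metadata service and logs a SHA-512 digest of the payload before parsing it. A minimal sketch of those two steps (this is an illustration, not Ignition's actual Go implementation; the URL is the one from the log):

```python
# Illustrative sketch of the fetch stage logged above: GET the OpenStack
# user_data endpoint, then record a SHA-512 digest of the payload.
import hashlib
import urllib.request

METADATA_URL = "http://169.254.169.254/openstack/latest/user_data"  # from the log

def config_digest(payload: bytes) -> str:
    """Hex SHA-512 digest, as in the 'parsing config with SHA512: ...' line."""
    return hashlib.sha512(payload).hexdigest()

def fetch_user_data(url: str = METADATA_URL, timeout: float = 10.0) -> bytes:
    # On a real OpenStack guest this link-local address answers without auth;
    # elsewhere the request times out, so callers retry (attempt #1, #2, ...).
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read()

if __name__ == "__main__":
    print(config_digest(b'{"ignition": {"version": "3.0.0"}}'))
```

The digest is printed before parsing so a bad or truncated download can be correlated with the exact bytes that were received.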
Sep 13 01:55:40.020660 ignition[785]: Ignition 2.19.0
Sep 13 01:55:40.020682 ignition[785]: Stage: kargs
Sep 13 01:55:40.020916 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Sep 13 01:55:40.020936 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 13 01:55:40.021941 ignition[785]: kargs: kargs passed
Sep 13 01:55:40.024651 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 01:55:40.022016 ignition[785]: Ignition finished successfully
Sep 13 01:55:40.031409 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 01:55:40.053764 ignition[791]: Ignition 2.19.0
Sep 13 01:55:40.053785 ignition[791]: Stage: disks
Sep 13 01:55:40.054036 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Sep 13 01:55:40.054057 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 13 01:55:40.055150 ignition[791]: disks: disks passed
Sep 13 01:55:40.055250 ignition[791]: Ignition finished successfully
Sep 13 01:55:40.059431 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 01:55:40.060892 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 01:55:40.062045 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 01:55:40.063488 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 01:55:40.064901 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 01:55:40.066382 systemd[1]: Reached target basic.target - Basic System.
Sep 13 01:55:40.073400 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 01:55:40.093566 systemd-fsck[800]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Sep 13 01:55:40.097802 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 01:55:40.102333 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 01:55:40.211239 kernel: EXT4-fs (vda9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none.
Sep 13 01:55:40.211203 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 01:55:40.212508 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 01:55:40.218292 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 01:55:40.221034 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 01:55:40.223928 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 13 01:55:40.229372 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Sep 13 01:55:40.231792 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 01:55:40.233330 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 01:55:40.239213 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (808)
Sep 13 01:55:40.244901 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 01:55:40.244935 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 01:55:40.244958 kernel: BTRFS info (device vda6): using free space tree
Sep 13 01:55:40.246774 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 01:55:40.254164 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 01:55:40.255027 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 01:55:40.262398 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 01:55:40.326498 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 01:55:40.335224 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Sep 13 01:55:40.343128 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 01:55:40.350117 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 01:55:40.448556 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 01:55:40.454313 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 01:55:40.456403 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 01:55:40.476210 kernel: BTRFS info (device vda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 01:55:40.495314 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 01:55:40.505404 ignition[927]: INFO : Ignition 2.19.0
Sep 13 01:55:40.505404 ignition[927]: INFO : Stage: mount
Sep 13 01:55:40.507779 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 01:55:40.507779 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 13 01:55:40.507779 ignition[927]: INFO : mount: mount passed
Sep 13 01:55:40.507779 ignition[927]: INFO : Ignition finished successfully
Sep 13 01:55:40.508605 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 01:55:40.673993 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 01:55:41.828528 systemd-networkd[770]: eth0: Gained IPv6LL
Sep 13 01:55:42.542051 systemd-networkd[770]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8d35:24:19ff:fee6:34d6/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8d35:24:19ff:fee6:34d6/64 assigned by NDisc.
Sep 13 01:55:42.542064 systemd-networkd[770]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
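The DHCPv6/NDisc conflict warning above is benign, but networkd's own hint names the two settings that silence it. A hypothetical drop-in sketching both options (the file path and token value are placeholders, not from the log; `IPv6Token=` is the name the hint uses, and `UseAutonomousPrefix=` lives in the `[IPv6AcceptRA]` section):

```ini
; /etc/systemd/network/10-eth0.network.d/ipv6.conf (hypothetical path)
[Network]
; Pin the lower 64 bits of the SLAAC address to match the DHCPv6 one:
IPv6Token=::24:19ff:fee6:34d6

[IPv6AcceptRA]
; ...or disable SLAAC address generation from router advertisements entirely:
; UseAutonomousPrefix=no
```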
Sep 13 01:55:47.383286 coreos-metadata[810]: Sep 13 01:55:47.383 WARN failed to locate config-drive, using the metadata service API instead
Sep 13 01:55:47.405229 coreos-metadata[810]: Sep 13 01:55:47.405 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Sep 13 01:55:47.422058 coreos-metadata[810]: Sep 13 01:55:47.421 INFO Fetch successful
Sep 13 01:55:47.423938 coreos-metadata[810]: Sep 13 01:55:47.422 INFO wrote hostname srv-vx6h6.gb1.brightbox.com to /sysroot/etc/hostname
Sep 13 01:55:47.426605 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Sep 13 01:55:47.426774 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Sep 13 01:55:47.434305 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 01:55:47.460660 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 01:55:47.487701 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (943)
Sep 13 01:55:47.493848 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 01:55:47.493902 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 01:55:47.493922 kernel: BTRFS info (device vda6): using free space tree
Sep 13 01:55:47.498213 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 01:55:47.501739 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 01:55:47.532899 ignition[961]: INFO : Ignition 2.19.0
Sep 13 01:55:47.532899 ignition[961]: INFO : Stage: files
Sep 13 01:55:47.534843 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 01:55:47.534843 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 13 01:55:47.534843 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 01:55:47.537773 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 01:55:47.537773 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 01:55:47.540095 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 01:55:47.540095 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 01:55:47.540095 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 01:55:47.539489 unknown[961]: wrote ssh authorized keys file for user: core
Sep 13 01:55:47.544435 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 01:55:47.544435 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 13 01:55:47.708016 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 13 01:55:48.258231 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 01:55:48.258231 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 01:55:48.258231 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 01:55:48.258231 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 01:55:48.258231 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 01:55:48.258231 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 01:55:48.258231 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 01:55:48.272333 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 01:55:48.272333 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 01:55:48.272333 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 01:55:48.272333 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 01:55:48.272333 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 01:55:48.272333 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 01:55:48.272333 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 01:55:48.272333 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 13 01:55:48.621968 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 13 01:55:49.737246 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 01:55:49.737246 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 13 01:55:49.740000 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 01:55:49.740000 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 01:55:49.740000 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 13 01:55:49.740000 ignition[961]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 01:55:49.740000 ignition[961]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 01:55:49.740000 ignition[961]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 01:55:49.740000 ignition[961]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 01:55:49.750539 ignition[961]: INFO : files: files passed
Sep 13 01:55:49.750539 ignition[961]: INFO : Ignition finished successfully
Sep 13 01:55:49.743351 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 13 01:55:49.762611 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 13 01:55:49.766403 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
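Operations op(9) and op(a) above lay out a systemd-sysext extension: the image itself is written under `/opt/extensions`, and `/etc/extensions` gets an absolute symlink to it, which is the directory sysext merges at boot. A small demonstration of that layout in a throwaway directory (an illustration of the convention the log shows, not Ignition code):

```python
# Reproduce, under a temp root, the sysext layout from the files stage:
# payload under opt/extensions, activation symlink under etc/extensions.
import os
import tempfile

root = tempfile.mkdtemp()
payload = "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"

os.makedirs(root + "/opt/extensions/kubernetes")
os.makedirs(root + "/etc/extensions")
open(root + payload, "wb").close()  # stand-in for the downloaded image

# Absolute link target, exactly as op(9) logs it:
os.symlink(payload, root + "/etc/extensions/kubernetes.raw")
print(os.readlink(root + "/etc/extensions/kubernetes.raw"))
# -> /opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw
```

Writing the link before the image is harmless because symlink creation does not require the target to exist yet, which is why op(9) can finish before op(a)'s download starts.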
Sep 13 01:55:49.767593 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 01:55:49.769521 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 01:55:49.781528 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 01:55:49.783321 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 01:55:49.784436 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 01:55:49.786569 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 01:55:49.787900 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 01:55:49.800529 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 01:55:49.828909 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 01:55:49.831699 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 01:55:49.835614 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 01:55:49.836906 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 01:55:49.837642 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 01:55:49.844434 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 01:55:49.862711 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 01:55:49.866404 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 01:55:49.883558 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 01:55:49.884540 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 01:55:49.886243 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 01:55:49.887641 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 01:55:49.887822 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 01:55:49.889500 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 01:55:49.890367 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 01:55:49.891789 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 01:55:49.893259 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 01:55:49.894637 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 01:55:49.896153 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 01:55:49.897700 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 01:55:49.899258 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 01:55:49.900657 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 01:55:49.902152 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 01:55:49.903463 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 01:55:49.903672 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 01:55:49.905479 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 01:55:49.906494 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 01:55:49.907867 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 01:55:49.908072 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 01:55:49.909315 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 01:55:49.909466 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 01:55:49.911507 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 01:55:49.911669 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 01:55:49.912582 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 01:55:49.912728 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 01:55:49.921473 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 01:55:49.922604 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 01:55:49.922786 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 01:55:49.925387 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 01:55:49.928657 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 01:55:49.928855 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 01:55:49.932704 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 01:55:49.933034 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 01:55:49.946563 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 01:55:49.946750 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 01:55:49.960221 ignition[1014]: INFO : Ignition 2.19.0
Sep 13 01:55:49.960221 ignition[1014]: INFO : Stage: umount
Sep 13 01:55:49.960221 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 01:55:49.960221 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 13 01:55:49.965092 ignition[1014]: INFO : umount: umount passed
Sep 13 01:55:49.965092 ignition[1014]: INFO : Ignition finished successfully
Sep 13 01:55:49.966654 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 01:55:49.966858 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 01:55:49.970880 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 01:55:49.971022 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 01:55:49.973065 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 01:55:49.973132 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 01:55:49.975278 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 13 01:55:49.975374 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 13 01:55:49.976795 systemd[1]: Stopped target network.target - Network.
Sep 13 01:55:49.978050 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 01:55:49.978129 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 01:55:49.979520 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 01:55:49.980673 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 01:55:49.986280 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 01:55:49.987099 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 01:55:49.989028 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 01:55:49.990448 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 01:55:49.990555 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 01:55:49.991757 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 01:55:49.991822 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 01:55:49.993032 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 01:55:49.993116 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 01:55:49.994554 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 01:55:49.994628 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 01:55:49.996117 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 01:55:49.998011 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 01:55:50.001661 systemd-networkd[770]: eth0: DHCPv6 lease lost
Sep 13 01:55:50.007649 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 01:55:50.008570 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 01:55:50.008794 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 01:55:50.012400 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 01:55:50.012660 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 01:55:50.015627 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 01:55:50.015720 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 01:55:50.022433 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 01:55:50.025503 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 01:55:50.025602 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 01:55:50.027099 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 01:55:50.027167 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 01:55:50.032742 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 01:55:50.032838 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 01:55:50.034319 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 01:55:50.034387 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 01:55:50.035976 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 01:55:50.049902 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 01:55:50.050177 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 01:55:50.052552 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 01:55:50.052648 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 01:55:50.054692 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 01:55:50.054754 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 01:55:50.056128 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 01:55:50.056239 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 01:55:50.058375 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 01:55:50.058456 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 01:55:50.059882 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 01:55:50.059968 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 01:55:50.071477 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 01:55:50.074562 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 01:55:50.074659 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 01:55:50.076298 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 01:55:50.076364 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 01:55:50.078287 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 01:55:50.078436 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 01:55:50.083936 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 01:55:50.084082 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 01:55:50.124871 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 01:55:50.125081 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 01:55:50.127159 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 01:55:50.128713 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 01:55:50.128823 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 01:55:50.135400 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 01:55:50.147358 systemd[1]: Switching root.
Sep 13 01:55:50.193411 systemd-journald[202]: Journal stopped
Sep 13 01:55:51.645825 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Sep 13 01:55:51.645945 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 01:55:51.645976 kernel: SELinux: policy capability open_perms=1
Sep 13 01:55:51.645996 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 01:55:51.646064 kernel: SELinux: policy capability always_check_network=0
Sep 13 01:55:51.646090 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 01:55:51.646116 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 01:55:51.646135 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 01:55:51.646158 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 01:55:51.649229 kernel: audit: type=1403 audit(1757728550.449:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 01:55:51.649261 systemd[1]: Successfully loaded SELinux policy in 58.779ms.
Sep 13 01:55:51.649297 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.805ms.
Sep 13 01:55:51.649324 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 01:55:51.649344 systemd[1]: Detected virtualization kvm.
Sep 13 01:55:51.649363 systemd[1]: Detected architecture x86-64.
Sep 13 01:55:51.649392 systemd[1]: Detected first boot.
Sep 13 01:55:51.649424 systemd[1]: Hostname set to .
Sep 13 01:55:51.649443 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 01:55:51.649515 zram_generator::config[1056]: No configuration found.
Sep 13 01:55:51.649552 systemd[1]: Populated /etc with preset unit settings.
Sep 13 01:55:51.649574 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 13 01:55:51.649593 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 13 01:55:51.649612 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 13 01:55:51.649632 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 13 01:55:51.649651 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 13 01:55:51.649669 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 13 01:55:51.649706 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 13 01:55:51.649727 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 13 01:55:51.649746 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 13 01:55:51.649765 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 13 01:55:51.649785 systemd[1]: Created slice user.slice - User and Session Slice.
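The `\x2d` sequences in the slice names above (e.g. `system-serial\x2dgetty.slice` for `/system/serial-getty`) are systemd's unit-name escaping: `-` separates hierarchy levels, so a literal dash inside a component is hex-escaped. A partial sketch of that rule, covering only the cases visible in this log (see systemd-escape(1) for the full algorithm):

```python
# Partial sketch of systemd unit-name escaping: keep alphanumerics and ":_.",
# map path separator "/" to "-", and hex-escape everything else (including a
# literal "-") as \xXX.
def escape_component(name: str) -> str:
    out = []
    for ch in name:
        if ch.isalnum() or ch in ":_.":
            out.append(ch)
        elif ch == "/":
            out.append("-")
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

print(escape_component("serial-getty"))  # -> serial\x2dgetty
```

This is why the human-readable description shows `/system/serial-getty` while the unit name encodes the dash.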
Sep 13 01:55:51.649812 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 01:55:51.649833 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 01:55:51.649852 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 13 01:55:51.649883 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 13 01:55:51.649918 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 13 01:55:51.649939 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 01:55:51.649959 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 13 01:55:51.649977 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 01:55:51.649996 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 13 01:55:51.650016 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 13 01:55:51.650048 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 13 01:55:51.650069 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 13 01:55:51.650089 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 01:55:51.650139 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 01:55:51.650170 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 01:55:51.650222 systemd[1]: Reached target swap.target - Swaps.
Sep 13 01:55:51.650245 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 13 01:55:51.650265 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 13 01:55:51.650293 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 01:55:51.650326 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 01:55:51.650362 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 01:55:51.650404 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 13 01:55:51.650454 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 13 01:55:51.650478 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 13 01:55:51.650523 systemd[1]: Mounting media.mount - External Media Directory... Sep 13 01:55:51.650582 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 01:55:51.650662 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 13 01:55:51.650688 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 13 01:55:51.650709 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 13 01:55:51.650729 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 01:55:51.650798 systemd[1]: Reached target machines.target - Containers. Sep 13 01:55:51.650845 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 13 01:55:51.650884 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 01:55:51.650922 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 01:55:51.650944 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 13 01:55:51.650964 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 01:55:51.650983 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Sep 13 01:55:51.651003 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 01:55:51.651022 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 13 01:55:51.651041 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 01:55:51.651091 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 01:55:51.651150 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 13 01:55:51.651173 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 13 01:55:51.652835 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 13 01:55:51.652874 systemd[1]: Stopped systemd-fsck-usr.service. Sep 13 01:55:51.652942 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 01:55:51.652966 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 01:55:51.652986 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 13 01:55:51.653029 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 13 01:55:51.653055 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 01:55:51.653091 systemd[1]: verity-setup.service: Deactivated successfully. Sep 13 01:55:51.653114 systemd[1]: Stopped verity-setup.service. Sep 13 01:55:51.653135 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 01:55:51.653154 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 13 01:55:51.653181 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 13 01:55:51.653224 systemd[1]: Mounted media.mount - External Media Directory. 
Sep 13 01:55:51.653245 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 13 01:55:51.653279 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 13 01:55:51.653307 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 13 01:55:51.653328 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 01:55:51.653347 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 01:55:51.653367 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 13 01:55:51.653386 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 01:55:51.653406 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 01:55:51.653437 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 01:55:51.653459 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 01:55:51.653492 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 01:55:51.653546 systemd-journald[1145]: Collecting audit messages is disabled. Sep 13 01:55:51.653597 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 13 01:55:51.653619 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 13 01:55:51.653639 systemd-journald[1145]: Journal started Sep 13 01:55:51.653675 systemd-journald[1145]: Runtime Journal (/run/log/journal/9f4d36ea2d004c68ac3ae137bad549ab) is 4.7M, max 38.0M, 33.2M free. Sep 13 01:55:51.267803 systemd[1]: Queued start job for default target multi-user.target. Sep 13 01:55:51.290668 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 13 01:55:51.291429 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 13 01:55:51.657235 systemd[1]: Started systemd-journald.service - Journal Service. 
Sep 13 01:55:51.674416 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 13 01:55:51.682456 kernel: fuse: init (API version 7.39) Sep 13 01:55:51.687344 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 13 01:55:51.689242 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 01:55:51.689308 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 01:55:51.692777 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 13 01:55:51.710490 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 13 01:55:51.717418 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 13 01:55:51.718689 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 01:55:51.730782 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 13 01:55:51.742485 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 13 01:55:51.743317 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 01:55:51.752456 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 13 01:55:51.769153 kernel: loop: module loaded Sep 13 01:55:51.767408 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 01:55:51.775281 kernel: ACPI: bus type drm_connector registered Sep 13 01:55:51.777409 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 13 01:55:51.781264 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Sep 13 01:55:51.783723 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 01:55:51.783961 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 01:55:51.785134 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 01:55:51.786396 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 13 01:55:51.787500 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 01:55:51.788079 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 01:55:51.789437 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 13 01:55:51.799882 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 13 01:55:51.806248 systemd-journald[1145]: Time spent on flushing to /var/log/journal/9f4d36ea2d004c68ac3ae137bad549ab is 102.393ms for 1135 entries. Sep 13 01:55:51.806248 systemd-journald[1145]: System Journal (/var/log/journal/9f4d36ea2d004c68ac3ae137bad549ab) is 8.0M, max 584.8M, 576.8M free. Sep 13 01:55:51.929210 systemd-journald[1145]: Received client request to flush runtime journal. Sep 13 01:55:51.929293 kernel: loop0: detected capacity change from 0 to 140768 Sep 13 01:55:51.930093 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 01:55:51.809011 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 13 01:55:51.818372 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 13 01:55:51.833634 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 13 01:55:51.849637 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 13 01:55:51.851427 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 01:55:51.860381 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Sep 13 01:55:51.864709 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 13 01:55:51.934356 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 13 01:55:51.949645 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 01:55:51.960418 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 01:55:51.964249 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 13 01:55:51.979286 kernel: loop1: detected capacity change from 0 to 221472 Sep 13 01:55:52.023717 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 13 01:55:52.035393 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 01:55:52.061221 kernel: loop2: detected capacity change from 0 to 8 Sep 13 01:55:52.102360 kernel: loop3: detected capacity change from 0 to 142488 Sep 13 01:55:52.111701 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 01:55:52.119236 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 13 01:55:52.122840 systemd-tmpfiles[1207]: ACLs are not supported, ignoring. Sep 13 01:55:52.122877 systemd-tmpfiles[1207]: ACLs are not supported, ignoring. Sep 13 01:55:52.152671 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 01:55:52.170369 kernel: loop4: detected capacity change from 0 to 140768 Sep 13 01:55:52.177778 udevadm[1211]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
Sep 13 01:55:52.197049 kernel: loop5: detected capacity change from 0 to 221472 Sep 13 01:55:52.222531 kernel: loop6: detected capacity change from 0 to 8 Sep 13 01:55:52.234692 kernel: loop7: detected capacity change from 0 to 142488 Sep 13 01:55:52.260711 (sd-merge)[1214]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Sep 13 01:55:52.263410 (sd-merge)[1214]: Merged extensions into '/usr'. Sep 13 01:55:52.283018 systemd[1]: Reloading requested from client PID 1184 ('systemd-sysext') (unit systemd-sysext.service)... Sep 13 01:55:52.283046 systemd[1]: Reloading... Sep 13 01:55:52.453212 zram_generator::config[1240]: No configuration found. Sep 13 01:55:52.457020 ldconfig[1179]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 01:55:52.648392 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 01:55:52.714049 systemd[1]: Reloading finished in 430 ms. Sep 13 01:55:52.742950 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 13 01:55:52.744611 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 13 01:55:52.762657 systemd[1]: Starting ensure-sysext.service... Sep 13 01:55:52.765562 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 01:55:52.792143 systemd[1]: Reloading requested from client PID 1296 ('systemctl') (unit ensure-sysext.service)... Sep 13 01:55:52.792175 systemd[1]: Reloading... Sep 13 01:55:52.828011 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 01:55:52.828620 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Sep 13 01:55:52.833030 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 01:55:52.833437 systemd-tmpfiles[1297]: ACLs are not supported, ignoring. Sep 13 01:55:52.833543 systemd-tmpfiles[1297]: ACLs are not supported, ignoring. Sep 13 01:55:52.842903 systemd-tmpfiles[1297]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 01:55:52.842921 systemd-tmpfiles[1297]: Skipping /boot Sep 13 01:55:52.863892 systemd-tmpfiles[1297]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 01:55:52.863911 systemd-tmpfiles[1297]: Skipping /boot Sep 13 01:55:52.907228 zram_generator::config[1324]: No configuration found. Sep 13 01:55:53.070130 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 01:55:53.135585 systemd[1]: Reloading finished in 342 ms. Sep 13 01:55:53.158615 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 13 01:55:53.166802 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 01:55:53.181541 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 01:55:53.185327 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 13 01:55:53.190574 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 13 01:55:53.196208 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 01:55:53.204005 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 01:55:53.209727 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Sep 13 01:55:53.214970 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 01:55:53.216483 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 01:55:53.225547 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 01:55:53.235705 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 01:55:53.248583 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 01:55:53.250509 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 01:55:53.250669 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 01:55:53.263573 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 13 01:55:53.267243 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 13 01:55:53.286823 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 13 01:55:53.288981 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 01:55:53.290293 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 01:55:53.292068 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 01:55:53.293435 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 01:55:53.302866 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 01:55:53.303138 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Sep 13 01:55:53.313526 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 01:55:53.319480 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 01:55:53.320486 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 01:55:53.320643 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 01:55:53.323402 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 13 01:55:53.324840 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 01:55:53.326427 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 01:55:53.344259 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 13 01:55:53.361052 systemd-udevd[1388]: Using default interface naming scheme 'v255'. Sep 13 01:55:53.364484 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 01:55:53.365014 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 01:55:53.376662 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 01:55:53.387638 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 01:55:53.388798 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 01:55:53.389439 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Sep 13 01:55:53.389692 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 01:55:53.393650 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 01:55:53.393985 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 01:55:53.402159 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 13 01:55:53.405260 systemd[1]: Finished ensure-sysext.service. Sep 13 01:55:53.425459 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 13 01:55:53.427564 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 01:55:53.429915 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 01:55:53.430233 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 01:55:53.432737 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 01:55:53.432967 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 01:55:53.437020 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 01:55:53.437287 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 01:55:53.453418 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 01:55:53.455127 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 01:55:53.455301 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 01:55:53.472341 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 13 01:55:53.479653 augenrules[1441]: No rules Sep 13 01:55:53.482786 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Sep 13 01:55:53.676173 systemd-networkd[1436]: lo: Link UP Sep 13 01:55:53.679270 systemd-networkd[1436]: lo: Gained carrier Sep 13 01:55:53.685667 systemd-networkd[1436]: Enumeration completed Sep 13 01:55:53.685852 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 01:55:53.686792 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 01:55:53.686806 systemd-networkd[1436]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 01:55:53.694656 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 13 01:55:53.699030 systemd-networkd[1436]: eth0: Link UP Sep 13 01:55:53.699043 systemd-networkd[1436]: eth0: Gained carrier Sep 13 01:55:53.699097 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 01:55:53.707666 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 01:55:53.711158 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 13 01:55:53.718264 systemd-networkd[1436]: eth0: DHCPv4 address 10.230.52.214/30, gateway 10.230.52.213 acquired from 10.230.52.213 Sep 13 01:55:53.728289 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1440) Sep 13 01:55:53.746361 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 13 01:55:53.748126 systemd[1]: Reached target time-set.target - System Time Set. Sep 13 01:55:53.760096 systemd-resolved[1386]: Positive Trust Anchors: Sep 13 01:55:53.760143 systemd-resolved[1386]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 01:55:53.760203 systemd-resolved[1386]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 01:55:53.770977 systemd-resolved[1386]: Using system hostname 'srv-vx6h6.gb1.brightbox.com'. Sep 13 01:55:53.777947 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 01:55:53.779033 systemd[1]: Reached target network.target - Network. Sep 13 01:55:53.780308 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 01:55:53.794220 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 13 01:55:53.797972 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 13 01:55:53.804425 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 13 01:55:53.820226 kernel: ACPI: button: Power Button [PWRF] Sep 13 01:55:53.830216 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 01:55:53.831880 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Sep 13 01:55:53.874220 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Sep 13 01:55:53.878223 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 13 01:55:53.887956 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 13 01:55:53.888293 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 13 01:55:53.981160 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 01:55:54.138757 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 13 01:55:54.179487 systemd-timesyncd[1421]: Contacted time server 178.79.150.226:123 (0.flatcar.pool.ntp.org). Sep 13 01:55:54.179580 systemd-timesyncd[1421]: Initial clock synchronization to Sat 2025-09-13 01:55:54.159469 UTC. Sep 13 01:55:54.192551 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 13 01:55:54.194040 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 01:55:54.213231 lvm[1475]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 01:55:54.242678 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 13 01:55:54.243861 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 01:55:54.244658 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 01:55:54.245562 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 13 01:55:54.246543 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 13 01:55:54.247636 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 13 01:55:54.248522 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Sep 13 01:55:54.249303 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 13 01:55:54.250069 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 01:55:54.250124 systemd[1]: Reached target paths.target - Path Units. Sep 13 01:55:54.250826 systemd[1]: Reached target timers.target - Timer Units. Sep 13 01:55:54.254761 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 13 01:55:54.257295 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 13 01:55:54.263349 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 13 01:55:54.265906 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 13 01:55:54.267278 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 13 01:55:54.268086 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 01:55:54.268744 systemd[1]: Reached target basic.target - Basic System. Sep 13 01:55:54.269446 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 13 01:55:54.269502 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 13 01:55:54.272346 systemd[1]: Starting containerd.service - containerd container runtime... Sep 13 01:55:54.279106 lvm[1480]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 01:55:54.282416 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 13 01:55:54.287375 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 13 01:55:54.294343 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 13 01:55:54.298073 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Sep 13 01:55:54.300265 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 13 01:55:54.303410 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 13 01:55:54.312321 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 13 01:55:54.323090 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 13 01:55:54.327372 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 13 01:55:54.346657 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 13 01:55:54.349426 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 01:55:54.350910 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 01:55:54.360467 systemd[1]: Starting update-engine.service - Update Engine... Sep 13 01:55:54.364355 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 13 01:55:54.368639 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 13 01:55:54.374686 jq[1484]: false Sep 13 01:55:54.376705 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 01:55:54.376981 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 13 01:55:54.392306 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 01:55:54.392589 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 13 01:55:54.398829 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 01:55:54.399089 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 13 01:55:54.416253 extend-filesystems[1485]: Found loop4 Sep 13 01:55:54.416253 extend-filesystems[1485]: Found loop5 Sep 13 01:55:54.416253 extend-filesystems[1485]: Found loop6 Sep 13 01:55:54.416253 extend-filesystems[1485]: Found loop7 Sep 13 01:55:54.416253 extend-filesystems[1485]: Found vda Sep 13 01:55:54.416253 extend-filesystems[1485]: Found vda1 Sep 13 01:55:54.416253 extend-filesystems[1485]: Found vda2 Sep 13 01:55:54.416111 dbus-daemon[1483]: [system] SELinux support is enabled Sep 13 01:55:54.445742 jq[1500]: true Sep 13 01:55:54.419945 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 13 01:55:54.446009 extend-filesystems[1485]: Found vda3 Sep 13 01:55:54.446009 extend-filesystems[1485]: Found usr Sep 13 01:55:54.446009 extend-filesystems[1485]: Found vda4 Sep 13 01:55:54.446009 extend-filesystems[1485]: Found vda6 Sep 13 01:55:54.446009 extend-filesystems[1485]: Found vda7 Sep 13 01:55:54.446009 extend-filesystems[1485]: Found vda9 Sep 13 01:55:54.446009 extend-filesystems[1485]: Checking size of /dev/vda9 Sep 13 01:55:54.425326 dbus-daemon[1483]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1436 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 13 01:55:54.436817 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Sep 13 01:55:54.484143 update_engine[1499]: I20250913 01:55:54.455302 1499 main.cc:92] Flatcar Update Engine starting Sep 13 01:55:54.484143 update_engine[1499]: I20250913 01:55:54.460243 1499 update_check_scheduler.cc:74] Next update check in 11m38s Sep 13 01:55:54.439425 dbus-daemon[1483]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 13 01:55:54.436861 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 13 01:55:54.486980 tar[1503]: linux-amd64/helm Sep 13 01:55:54.490377 extend-filesystems[1485]: Resized partition /dev/vda9 Sep 13 01:55:54.437677 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 01:55:54.493340 jq[1511]: true Sep 13 01:55:54.437708 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 13 01:55:54.455212 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 13 01:55:54.461913 systemd[1]: Started update-engine.service - Update Engine. Sep 13 01:55:54.468414 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 13 01:55:54.477651 (ntainerd)[1514]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 13 01:55:54.499527 extend-filesystems[1524]: resize2fs 1.47.1 (20-May-2024) Sep 13 01:55:54.519164 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Sep 13 01:55:54.524427 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1423) Sep 13 01:55:54.603637 systemd-logind[1495]: Watching system buttons on /dev/input/event2 (Power Button) Sep 13 01:55:54.603687 systemd-logind[1495]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 01:55:54.603995 systemd-logind[1495]: New seat seat0. 
Sep 13 01:55:54.605724 systemd[1]: Started systemd-logind.service - User Login Management. Sep 13 01:55:54.748968 bash[1541]: Updated "/home/core/.ssh/authorized_keys" Sep 13 01:55:54.750810 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 13 01:55:54.763591 systemd[1]: Starting sshkeys.service... Sep 13 01:55:54.802973 locksmithd[1521]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 01:55:54.805778 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 13 01:55:54.817604 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 13 01:55:54.886480 systemd-networkd[1436]: eth0: Gained IPv6LL Sep 13 01:55:54.898744 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 01:55:54.902775 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 01:55:54.918794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 01:55:54.921775 dbus-daemon[1483]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 13 01:55:54.925209 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Sep 13 01:55:54.927877 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 01:55:54.933361 dbus-daemon[1483]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1517 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 13 01:55:54.929902 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 13 01:55:54.943544 systemd[1]: Starting polkit.service - Authorization Manager... 
Sep 13 01:55:54.949341 extend-filesystems[1524]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 13 01:55:54.949341 extend-filesystems[1524]: old_desc_blocks = 1, new_desc_blocks = 8 Sep 13 01:55:54.949341 extend-filesystems[1524]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Sep 13 01:55:54.960362 extend-filesystems[1485]: Resized filesystem in /dev/vda9 Sep 13 01:55:54.960689 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 01:55:54.960996 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 13 01:55:54.978464 polkitd[1559]: Started polkitd version 121 Sep 13 01:55:54.987601 polkitd[1559]: Loading rules from directory /etc/polkit-1/rules.d Sep 13 01:55:54.987693 polkitd[1559]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 13 01:55:54.990485 polkitd[1559]: Finished loading, compiling and executing 2 rules Sep 13 01:55:54.995850 dbus-daemon[1483]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 13 01:55:54.996081 systemd[1]: Started polkit.service - Authorization Manager. Sep 13 01:55:54.999041 polkitd[1559]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 13 01:55:55.036113 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 01:55:55.041441 systemd-hostnamed[1517]: Hostname set to (static) Sep 13 01:55:55.139263 containerd[1514]: time="2025-09-13T01:55:55.137312228Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 13 01:55:55.209603 containerd[1514]: time="2025-09-13T01:55:55.208579485Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 01:55:55.216066 containerd[1514]: time="2025-09-13T01:55:55.214894537Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 01:55:55.216066 containerd[1514]: time="2025-09-13T01:55:55.214943786Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 01:55:55.216066 containerd[1514]: time="2025-09-13T01:55:55.214968403Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 01:55:55.216066 containerd[1514]: time="2025-09-13T01:55:55.215244174Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 13 01:55:55.216066 containerd[1514]: time="2025-09-13T01:55:55.215286196Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 13 01:55:55.216066 containerd[1514]: time="2025-09-13T01:55:55.215389643Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 01:55:55.216066 containerd[1514]: time="2025-09-13T01:55:55.215411094Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 01:55:55.216066 containerd[1514]: time="2025-09-13T01:55:55.215658132Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 01:55:55.216066 containerd[1514]: time="2025-09-13T01:55:55.215681721Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Sep 13 01:55:55.216066 containerd[1514]: time="2025-09-13T01:55:55.215701234Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 01:55:55.216066 containerd[1514]: time="2025-09-13T01:55:55.215718576Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 01:55:55.216551 containerd[1514]: time="2025-09-13T01:55:55.215860929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 01:55:55.217917 containerd[1514]: time="2025-09-13T01:55:55.217886137Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 01:55:55.219675 containerd[1514]: time="2025-09-13T01:55:55.218253279Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 01:55:55.219675 containerd[1514]: time="2025-09-13T01:55:55.218285312Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 01:55:55.219675 containerd[1514]: time="2025-09-13T01:55:55.218459396Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 01:55:55.219675 containerd[1514]: time="2025-09-13T01:55:55.218561232Z" level=info msg="metadata content store policy set" policy=shared Sep 13 01:55:55.226482 containerd[1514]: time="2025-09-13T01:55:55.226020219Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 01:55:55.226482 containerd[1514]: time="2025-09-13T01:55:55.226103673Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Sep 13 01:55:55.226482 containerd[1514]: time="2025-09-13T01:55:55.226133663Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 13 01:55:55.226482 containerd[1514]: time="2025-09-13T01:55:55.226164846Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 13 01:55:55.226482 containerd[1514]: time="2025-09-13T01:55:55.226216316Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 01:55:55.226482 containerd[1514]: time="2025-09-13T01:55:55.226401023Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 01:55:55.230283 containerd[1514]: time="2025-09-13T01:55:55.230252906Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 01:55:55.230646 containerd[1514]: time="2025-09-13T01:55:55.230617321Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 13 01:55:55.232237 containerd[1514]: time="2025-09-13T01:55:55.232208005Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 13 01:55:55.232373 containerd[1514]: time="2025-09-13T01:55:55.232347142Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 13 01:55:55.232541 containerd[1514]: time="2025-09-13T01:55:55.232514402Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 01:55:55.232668 containerd[1514]: time="2025-09-13T01:55:55.232642331Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Sep 13 01:55:55.232793 containerd[1514]: time="2025-09-13T01:55:55.232765292Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 01:55:55.232918 containerd[1514]: time="2025-09-13T01:55:55.232889566Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 01:55:55.234210 containerd[1514]: time="2025-09-13T01:55:55.232988969Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 01:55:55.234210 containerd[1514]: time="2025-09-13T01:55:55.233032711Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 01:55:55.234210 containerd[1514]: time="2025-09-13T01:55:55.233062995Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 01:55:55.234210 containerd[1514]: time="2025-09-13T01:55:55.233088676Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 01:55:55.234210 containerd[1514]: time="2025-09-13T01:55:55.233152080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 01:55:55.234210 containerd[1514]: time="2025-09-13T01:55:55.233208860Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 01:55:55.234210 containerd[1514]: time="2025-09-13T01:55:55.233233749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 01:55:55.234210 containerd[1514]: time="2025-09-13T01:55:55.233260554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Sep 13 01:55:55.234210 containerd[1514]: time="2025-09-13T01:55:55.233285059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 01:55:55.234210 containerd[1514]: time="2025-09-13T01:55:55.233309543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 01:55:55.234210 containerd[1514]: time="2025-09-13T01:55:55.233344062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 01:55:55.234210 containerd[1514]: time="2025-09-13T01:55:55.233370928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 01:55:55.234210 containerd[1514]: time="2025-09-13T01:55:55.233395387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 13 01:55:55.234210 containerd[1514]: time="2025-09-13T01:55:55.233426641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 13 01:55:55.234719 containerd[1514]: time="2025-09-13T01:55:55.233447273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 01:55:55.234719 containerd[1514]: time="2025-09-13T01:55:55.233473021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 13 01:55:55.234719 containerd[1514]: time="2025-09-13T01:55:55.233508701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 01:55:55.234719 containerd[1514]: time="2025-09-13T01:55:55.233538542Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 13 01:55:55.234719 containerd[1514]: time="2025-09-13T01:55:55.233599757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Sep 13 01:55:55.234719 containerd[1514]: time="2025-09-13T01:55:55.233629132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 01:55:55.234719 containerd[1514]: time="2025-09-13T01:55:55.233653652Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 01:55:55.234719 containerd[1514]: time="2025-09-13T01:55:55.233738772Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 01:55:55.234719 containerd[1514]: time="2025-09-13T01:55:55.233786262Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 13 01:55:55.234719 containerd[1514]: time="2025-09-13T01:55:55.233811759Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 01:55:55.234719 containerd[1514]: time="2025-09-13T01:55:55.233836043Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 13 01:55:55.234719 containerd[1514]: time="2025-09-13T01:55:55.233856305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 01:55:55.234719 containerd[1514]: time="2025-09-13T01:55:55.233881065Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 13 01:55:55.234719 containerd[1514]: time="2025-09-13T01:55:55.233906875Z" level=info msg="NRI interface is disabled by configuration." Sep 13 01:55:55.235124 containerd[1514]: time="2025-09-13T01:55:55.233930666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 01:55:55.238217 containerd[1514]: time="2025-09-13T01:55:55.236360260Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 01:55:55.238217 containerd[1514]: time="2025-09-13T01:55:55.236451763Z" level=info msg="Connect containerd service" Sep 13 01:55:55.238768 containerd[1514]: time="2025-09-13T01:55:55.238723953Z" level=info msg="using legacy CRI server" Sep 13 01:55:55.239719 containerd[1514]: time="2025-09-13T01:55:55.239679107Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 01:55:55.241211 containerd[1514]: time="2025-09-13T01:55:55.239960505Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 01:55:55.241211 containerd[1514]: time="2025-09-13T01:55:55.240919484Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 01:55:55.241614 containerd[1514]: time="2025-09-13T01:55:55.241587398Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Sep 13 01:55:55.241755 containerd[1514]: time="2025-09-13T01:55:55.241686483Z" level=info msg="Start subscribing containerd event" Sep 13 01:55:55.241804 containerd[1514]: time="2025-09-13T01:55:55.241778061Z" level=info msg="Start recovering state" Sep 13 01:55:55.241896 containerd[1514]: time="2025-09-13T01:55:55.241871639Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 01:55:55.242010 containerd[1514]: time="2025-09-13T01:55:55.241930192Z" level=info msg="Start event monitor" Sep 13 01:55:55.242119 containerd[1514]: time="2025-09-13T01:55:55.242097802Z" level=info msg="Start snapshots syncer" Sep 13 01:55:55.242256 containerd[1514]: time="2025-09-13T01:55:55.242231074Z" level=info msg="Start cni network conf syncer for default" Sep 13 01:55:55.242339 containerd[1514]: time="2025-09-13T01:55:55.242318695Z" level=info msg="Start streaming server" Sep 13 01:55:55.242551 containerd[1514]: time="2025-09-13T01:55:55.242528477Z" level=info msg="containerd successfully booted in 0.109776s" Sep 13 01:55:55.242657 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 01:55:55.323087 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 01:55:55.542950 systemd-networkd[1436]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8d35:24:19ff:fee6:34d6/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8d35:24:19ff:fee6:34d6/64 assigned by NDisc. Sep 13 01:55:55.543131 systemd-networkd[1436]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Sep 13 01:55:55.615458 tar[1503]: linux-amd64/LICENSE Sep 13 01:55:55.616135 tar[1503]: linux-amd64/README.md Sep 13 01:55:55.634669 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Sep 13 01:55:55.736765 sshd_keygen[1526]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 01:55:55.774282 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 13 01:55:55.785864 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 13 01:55:55.790628 systemd[1]: Started sshd@0-10.230.52.214:22-139.178.68.195:49164.service - OpenSSH per-connection server daemon (139.178.68.195:49164). Sep 13 01:55:55.795915 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 01:55:55.796381 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 13 01:55:55.810320 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 13 01:55:55.824354 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 01:55:55.835544 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 01:55:55.842804 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 13 01:55:55.844083 systemd[1]: Reached target getty.target - Login Prompts. Sep 13 01:55:56.247513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 01:55:56.263666 (kubelet)[1612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 01:55:56.712401 sshd[1597]: Accepted publickey for core from 139.178.68.195 port 49164 ssh2: RSA SHA256:dIJs8AGfYNpN1Jw559jntP6aURAguWX2tmPUUD2xz0k Sep 13 01:55:56.715523 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:55:56.733957 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 01:55:56.744747 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 01:55:56.755965 systemd-logind[1495]: New session 1 of user core. Sep 13 01:55:56.768235 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Sep 13 01:55:56.778723 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 01:55:56.797159 (systemd)[1619]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:55:56.870633 kubelet[1612]: E0913 01:55:56.870435 1612 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 01:55:56.874977 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 01:55:56.875223 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 01:55:56.875678 systemd[1]: kubelet.service: Consumed 1.055s CPU time. Sep 13 01:55:56.942598 systemd[1619]: Queued start job for default target default.target. Sep 13 01:55:56.954012 systemd[1619]: Created slice app.slice - User Application Slice. Sep 13 01:55:56.954206 systemd[1619]: Reached target paths.target - Paths. Sep 13 01:55:56.954354 systemd[1619]: Reached target timers.target - Timers. Sep 13 01:55:56.956487 systemd[1619]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 01:55:56.971478 systemd[1619]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 01:55:56.971666 systemd[1619]: Reached target sockets.target - Sockets. Sep 13 01:55:56.971691 systemd[1619]: Reached target basic.target - Basic System. Sep 13 01:55:56.971774 systemd[1619]: Reached target default.target - Main User Target. Sep 13 01:55:56.971847 systemd[1619]: Startup finished in 161ms. Sep 13 01:55:56.971921 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 01:55:56.993621 systemd[1]: Started session-1.scope - Session 1 of User core. 
Sep 13 01:55:57.694618 systemd[1]: Started sshd@1-10.230.52.214:22-139.178.68.195:49170.service - OpenSSH per-connection server daemon (139.178.68.195:49170). Sep 13 01:55:58.592570 sshd[1633]: Accepted publickey for core from 139.178.68.195 port 49170 ssh2: RSA SHA256:dIJs8AGfYNpN1Jw559jntP6aURAguWX2tmPUUD2xz0k Sep 13 01:55:58.594656 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:55:58.600438 systemd-logind[1495]: New session 2 of user core. Sep 13 01:55:58.612524 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 01:55:59.215443 sshd[1633]: pam_unix(sshd:session): session closed for user core Sep 13 01:55:59.220273 systemd[1]: sshd@1-10.230.52.214:22-139.178.68.195:49170.service: Deactivated successfully. Sep 13 01:55:59.222577 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 01:55:59.223636 systemd-logind[1495]: Session 2 logged out. Waiting for processes to exit. Sep 13 01:55:59.225104 systemd-logind[1495]: Removed session 2. Sep 13 01:55:59.369939 systemd[1]: Started sshd@2-10.230.52.214:22-139.178.68.195:49186.service - OpenSSH per-connection server daemon (139.178.68.195:49186). Sep 13 01:56:00.273232 sshd[1641]: Accepted publickey for core from 139.178.68.195 port 49186 ssh2: RSA SHA256:dIJs8AGfYNpN1Jw559jntP6aURAguWX2tmPUUD2xz0k Sep 13 01:56:00.275307 sshd[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:56:00.282537 systemd-logind[1495]: New session 3 of user core. Sep 13 01:56:00.290447 systemd[1]: Started session-3.scope - Session 3 of User core. 
Sep 13 01:56:00.901415 login[1605]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 13 01:56:00.901527 sshd[1641]: pam_unix(sshd:session): session closed for user core Sep 13 01:56:00.905581 login[1603]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 13 01:56:00.907506 systemd[1]: sshd@2-10.230.52.214:22-139.178.68.195:49186.service: Deactivated successfully. Sep 13 01:56:00.910101 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 01:56:00.911221 systemd-logind[1495]: Session 3 logged out. Waiting for processes to exit. Sep 13 01:56:00.916137 systemd-logind[1495]: New session 4 of user core. Sep 13 01:56:00.922528 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 01:56:00.924097 systemd-logind[1495]: Removed session 3. Sep 13 01:56:00.928490 systemd-logind[1495]: New session 5 of user core. Sep 13 01:56:00.933462 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 13 01:56:01.465698 coreos-metadata[1482]: Sep 13 01:56:01.465 WARN failed to locate config-drive, using the metadata service API instead Sep 13 01:56:01.490940 coreos-metadata[1482]: Sep 13 01:56:01.490 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Sep 13 01:56:01.496991 coreos-metadata[1482]: Sep 13 01:56:01.496 INFO Fetch failed with 404: resource not found Sep 13 01:56:01.497218 coreos-metadata[1482]: Sep 13 01:56:01.497 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Sep 13 01:56:01.498033 coreos-metadata[1482]: Sep 13 01:56:01.497 INFO Fetch successful Sep 13 01:56:01.498217 coreos-metadata[1482]: Sep 13 01:56:01.498 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Sep 13 01:56:01.510022 coreos-metadata[1482]: Sep 13 01:56:01.509 INFO Fetch successful Sep 13 01:56:01.510108 coreos-metadata[1482]: Sep 13 01:56:01.510 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt 
#1 Sep 13 01:56:01.524888 coreos-metadata[1482]: Sep 13 01:56:01.524 INFO Fetch successful Sep 13 01:56:01.525116 coreos-metadata[1482]: Sep 13 01:56:01.524 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Sep 13 01:56:01.538355 coreos-metadata[1482]: Sep 13 01:56:01.538 INFO Fetch successful Sep 13 01:56:01.538419 coreos-metadata[1482]: Sep 13 01:56:01.538 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Sep 13 01:56:01.555003 coreos-metadata[1482]: Sep 13 01:56:01.554 INFO Fetch successful Sep 13 01:56:01.582369 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 13 01:56:01.584034 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 01:56:01.957152 coreos-metadata[1553]: Sep 13 01:56:01.957 WARN failed to locate config-drive, using the metadata service API instead Sep 13 01:56:01.978075 coreos-metadata[1553]: Sep 13 01:56:01.978 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Sep 13 01:56:01.998644 coreos-metadata[1553]: Sep 13 01:56:01.998 INFO Fetch successful Sep 13 01:56:01.998770 coreos-metadata[1553]: Sep 13 01:56:01.998 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 13 01:56:02.025632 coreos-metadata[1553]: Sep 13 01:56:02.025 INFO Fetch successful Sep 13 01:56:02.027462 unknown[1553]: wrote ssh authorized keys file for user: core Sep 13 01:56:02.045473 update-ssh-keys[1682]: Updated "/home/core/.ssh/authorized_keys" Sep 13 01:56:02.046493 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 13 01:56:02.048826 systemd[1]: Finished sshkeys.service. Sep 13 01:56:02.051751 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 01:56:02.054282 systemd[1]: Startup finished in 1.291s (kernel) + 13.699s (initrd) + 11.662s (userspace) = 26.654s. 
Sep 13 01:56:07.036859 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 01:56:07.045527 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 01:56:07.216432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 01:56:07.229682 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 01:56:07.326021 kubelet[1694]: E0913 01:56:07.325765 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 01:56:07.330519 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 01:56:07.330757 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 01:56:11.058578 systemd[1]: Started sshd@3-10.230.52.214:22-139.178.68.195:47470.service - OpenSSH per-connection server daemon (139.178.68.195:47470). Sep 13 01:56:11.942896 sshd[1702]: Accepted publickey for core from 139.178.68.195 port 47470 ssh2: RSA SHA256:dIJs8AGfYNpN1Jw559jntP6aURAguWX2tmPUUD2xz0k Sep 13 01:56:11.945084 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:56:11.952458 systemd-logind[1495]: New session 6 of user core. Sep 13 01:56:11.960390 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 13 01:56:12.560161 sshd[1702]: pam_unix(sshd:session): session closed for user core Sep 13 01:56:12.565066 systemd[1]: sshd@3-10.230.52.214:22-139.178.68.195:47470.service: Deactivated successfully. Sep 13 01:56:12.567003 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 01:56:12.567950 systemd-logind[1495]: Session 6 logged out. 
Waiting for processes to exit. Sep 13 01:56:12.569524 systemd-logind[1495]: Removed session 6. Sep 13 01:56:12.710909 systemd[1]: Started sshd@4-10.230.52.214:22-139.178.68.195:47478.service - OpenSSH per-connection server daemon (139.178.68.195:47478). Sep 13 01:56:13.594755 sshd[1709]: Accepted publickey for core from 139.178.68.195 port 47478 ssh2: RSA SHA256:dIJs8AGfYNpN1Jw559jntP6aURAguWX2tmPUUD2xz0k Sep 13 01:56:13.596889 sshd[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:56:13.603426 systemd-logind[1495]: New session 7 of user core. Sep 13 01:56:13.614450 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 01:56:14.204097 sshd[1709]: pam_unix(sshd:session): session closed for user core Sep 13 01:56:14.208316 systemd-logind[1495]: Session 7 logged out. Waiting for processes to exit. Sep 13 01:56:14.208663 systemd[1]: sshd@4-10.230.52.214:22-139.178.68.195:47478.service: Deactivated successfully. Sep 13 01:56:14.210633 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 01:56:14.212452 systemd-logind[1495]: Removed session 7. Sep 13 01:56:14.364629 systemd[1]: Started sshd@5-10.230.52.214:22-139.178.68.195:47482.service - OpenSSH per-connection server daemon (139.178.68.195:47482). Sep 13 01:56:15.258400 sshd[1716]: Accepted publickey for core from 139.178.68.195 port 47482 ssh2: RSA SHA256:dIJs8AGfYNpN1Jw559jntP6aURAguWX2tmPUUD2xz0k Sep 13 01:56:15.260689 sshd[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:56:15.268903 systemd-logind[1495]: New session 8 of user core. Sep 13 01:56:15.280407 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 01:56:15.881133 sshd[1716]: pam_unix(sshd:session): session closed for user core Sep 13 01:56:15.885367 systemd-logind[1495]: Session 8 logged out. Waiting for processes to exit. 
Sep 13 01:56:15.886023 systemd[1]: sshd@5-10.230.52.214:22-139.178.68.195:47482.service: Deactivated successfully. Sep 13 01:56:15.888494 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 01:56:15.890791 systemd-logind[1495]: Removed session 8. Sep 13 01:56:16.050626 systemd[1]: Started sshd@6-10.230.52.214:22-139.178.68.195:47484.service - OpenSSH per-connection server daemon (139.178.68.195:47484). Sep 13 01:56:16.998129 sshd[1723]: Accepted publickey for core from 139.178.68.195 port 47484 ssh2: RSA SHA256:dIJs8AGfYNpN1Jw559jntP6aURAguWX2tmPUUD2xz0k Sep 13 01:56:17.000365 sshd[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:56:17.006738 systemd-logind[1495]: New session 9 of user core. Sep 13 01:56:17.017378 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 01:56:17.519260 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 01:56:17.519778 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 01:56:17.522014 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 13 01:56:17.530453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 01:56:17.540793 sudo[1726]: pam_unix(sudo:session): session closed for user root Sep 13 01:56:17.688402 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 01:56:17.696662 sshd[1723]: pam_unix(sshd:session): session closed for user core Sep 13 01:56:17.701617 (kubelet)[1736]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 01:56:17.702633 systemd[1]: sshd@6-10.230.52.214:22-139.178.68.195:47484.service: Deactivated successfully. Sep 13 01:56:17.705690 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 01:56:17.707684 systemd-logind[1495]: Session 9 logged out. Waiting for processes to exit. 
Sep 13 01:56:17.710143 systemd-logind[1495]: Removed session 9. Sep 13 01:56:17.809357 kubelet[1736]: E0913 01:56:17.809202 1736 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 01:56:17.813819 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 01:56:17.814119 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 01:56:17.860550 systemd[1]: Started sshd@7-10.230.52.214:22-139.178.68.195:47494.service - OpenSSH per-connection server daemon (139.178.68.195:47494). Sep 13 01:56:18.767564 sshd[1746]: Accepted publickey for core from 139.178.68.195 port 47494 ssh2: RSA SHA256:dIJs8AGfYNpN1Jw559jntP6aURAguWX2tmPUUD2xz0k Sep 13 01:56:18.771034 sshd[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:56:18.780217 systemd-logind[1495]: New session 10 of user core. Sep 13 01:56:18.788429 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 01:56:19.248792 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 01:56:19.249856 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 01:56:19.255425 sudo[1750]: pam_unix(sudo:session): session closed for user root Sep 13 01:56:19.263531 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 01:56:19.263986 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 01:56:19.284533 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Sep 13 01:56:19.287967 auditctl[1753]: No rules Sep 13 01:56:19.288536 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 01:56:19.288798 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 13 01:56:19.292160 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 01:56:19.338832 augenrules[1771]: No rules Sep 13 01:56:19.339768 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 01:56:19.341097 sudo[1749]: pam_unix(sudo:session): session closed for user root Sep 13 01:56:19.485664 sshd[1746]: pam_unix(sshd:session): session closed for user core Sep 13 01:56:19.490839 systemd-logind[1495]: Session 10 logged out. Waiting for processes to exit. Sep 13 01:56:19.491429 systemd[1]: sshd@7-10.230.52.214:22-139.178.68.195:47494.service: Deactivated successfully. Sep 13 01:56:19.493610 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 01:56:19.494958 systemd-logind[1495]: Removed session 10. Sep 13 01:56:19.654689 systemd[1]: Started sshd@8-10.230.52.214:22-139.178.68.195:47506.service - OpenSSH per-connection server daemon (139.178.68.195:47506). Sep 13 01:56:20.542262 sshd[1779]: Accepted publickey for core from 139.178.68.195 port 47506 ssh2: RSA SHA256:dIJs8AGfYNpN1Jw559jntP6aURAguWX2tmPUUD2xz0k Sep 13 01:56:20.544369 sshd[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:56:20.555959 systemd-logind[1495]: New session 11 of user core. Sep 13 01:56:20.565409 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 13 01:56:21.022387 sudo[1782]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 01:56:21.022901 sudo[1782]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 01:56:21.501051 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Sep 13 01:56:21.501315 (dockerd)[1799]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 01:56:21.917526 dockerd[1799]: time="2025-09-13T01:56:21.917429241Z" level=info msg="Starting up" Sep 13 01:56:22.079659 dockerd[1799]: time="2025-09-13T01:56:22.079509610Z" level=info msg="Loading containers: start." Sep 13 01:56:22.222241 kernel: Initializing XFRM netlink socket Sep 13 01:56:22.332413 systemd-networkd[1436]: docker0: Link UP Sep 13 01:56:22.350223 dockerd[1799]: time="2025-09-13T01:56:22.349354303Z" level=info msg="Loading containers: done." Sep 13 01:56:22.371964 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck284544782-merged.mount: Deactivated successfully. Sep 13 01:56:22.375227 dockerd[1799]: time="2025-09-13T01:56:22.374481184Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 01:56:22.375227 dockerd[1799]: time="2025-09-13T01:56:22.374696337Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 13 01:56:22.375227 dockerd[1799]: time="2025-09-13T01:56:22.374868150Z" level=info msg="Daemon has completed initialization" Sep 13 01:56:22.418720 dockerd[1799]: time="2025-09-13T01:56:22.418570713Z" level=info msg="API listen on /run/docker.sock" Sep 13 01:56:22.419564 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 01:56:23.503208 containerd[1514]: time="2025-09-13T01:56:23.503086914Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 13 01:56:24.430640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2018230319.mount: Deactivated successfully. 
Sep 13 01:56:25.562302 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 13 01:56:26.647566 containerd[1514]: time="2025-09-13T01:56:26.647441012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:56:26.649956 containerd[1514]: time="2025-09-13T01:56:26.649894348Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117132" Sep 13 01:56:26.652210 containerd[1514]: time="2025-09-13T01:56:26.650559619Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:56:26.655290 containerd[1514]: time="2025-09-13T01:56:26.655252850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:56:26.657212 containerd[1514]: time="2025-09-13T01:56:26.657162010Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 3.153974116s" Sep 13 01:56:26.657421 containerd[1514]: time="2025-09-13T01:56:26.657381974Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 13 01:56:26.662254 containerd[1514]: time="2025-09-13T01:56:26.662178014Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 13 01:56:28.038240 systemd[1]: kubelet.service: Scheduled restart job, restart counter is 
at 3. Sep 13 01:56:28.050567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 01:56:28.271546 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 01:56:28.276694 (kubelet)[2014]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 01:56:28.378333 kubelet[2014]: E0913 01:56:28.378088 2014 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 01:56:28.382049 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 01:56:28.382320 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 01:56:28.837676 containerd[1514]: time="2025-09-13T01:56:28.837540476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:56:28.839669 containerd[1514]: time="2025-09-13T01:56:28.839550142Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716640" Sep 13 01:56:28.844223 containerd[1514]: time="2025-09-13T01:56:28.842676427Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:56:28.848315 containerd[1514]: time="2025-09-13T01:56:28.848254119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:56:28.849946 containerd[1514]: 
time="2025-09-13T01:56:28.849900513Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 2.187640156s" Sep 13 01:56:28.850077 containerd[1514]: time="2025-09-13T01:56:28.850051056Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 13 01:56:28.852133 containerd[1514]: time="2025-09-13T01:56:28.852095449Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 13 01:56:30.439257 containerd[1514]: time="2025-09-13T01:56:30.437825478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:56:30.439257 containerd[1514]: time="2025-09-13T01:56:30.439486786Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787706" Sep 13 01:56:30.443356 containerd[1514]: time="2025-09-13T01:56:30.443295715Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:56:30.447104 containerd[1514]: time="2025-09-13T01:56:30.447050667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:56:30.451055 containerd[1514]: time="2025-09-13T01:56:30.450330621Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id 
\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.598187398s" Sep 13 01:56:30.451055 containerd[1514]: time="2025-09-13T01:56:30.450400621Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 13 01:56:30.451612 containerd[1514]: time="2025-09-13T01:56:30.451263991Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 13 01:56:31.973702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3604217474.mount: Deactivated successfully. Sep 13 01:56:32.669462 containerd[1514]: time="2025-09-13T01:56:32.669351704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:56:32.670820 containerd[1514]: time="2025-09-13T01:56:32.670614683Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410260" Sep 13 01:56:32.671888 containerd[1514]: time="2025-09-13T01:56:32.671592468Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:56:32.674209 containerd[1514]: time="2025-09-13T01:56:32.674151687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:56:32.675440 containerd[1514]: time="2025-09-13T01:56:32.675399000Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo 
tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 2.22408579s" Sep 13 01:56:32.675609 containerd[1514]: time="2025-09-13T01:56:32.675581034Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 13 01:56:32.677523 containerd[1514]: time="2025-09-13T01:56:32.677493616Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 01:56:33.287687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1643672417.mount: Deactivated successfully. Sep 13 01:56:34.454238 containerd[1514]: time="2025-09-13T01:56:34.453943857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:56:34.455691 containerd[1514]: time="2025-09-13T01:56:34.455623242Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Sep 13 01:56:34.456894 containerd[1514]: time="2025-09-13T01:56:34.456325744Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:56:34.462097 containerd[1514]: time="2025-09-13T01:56:34.462057425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:56:34.464571 containerd[1514]: time="2025-09-13T01:56:34.464530722Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.786981965s" Sep 13 01:56:34.464643 containerd[1514]: time="2025-09-13T01:56:34.464576153Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 01:56:34.466198 containerd[1514]: time="2025-09-13T01:56:34.466140403Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 01:56:35.044467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2856283234.mount: Deactivated successfully. Sep 13 01:56:35.051060 containerd[1514]: time="2025-09-13T01:56:35.050992776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:56:35.052828 containerd[1514]: time="2025-09-13T01:56:35.052754048Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Sep 13 01:56:35.053823 containerd[1514]: time="2025-09-13T01:56:35.053777501Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:56:35.056694 containerd[1514]: time="2025-09-13T01:56:35.056636908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:56:35.057915 containerd[1514]: time="2025-09-13T01:56:35.057762001Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 591.585126ms" Sep 13 
01:56:35.057915 containerd[1514]: time="2025-09-13T01:56:35.057804242Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 01:56:35.059236 containerd[1514]: time="2025-09-13T01:56:35.059208595Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 13 01:56:35.737602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3571754656.mount: Deactivated successfully. Sep 13 01:56:38.538800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 13 01:56:38.548886 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 01:56:38.654167 containerd[1514]: time="2025-09-13T01:56:38.654017220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:56:38.656772 containerd[1514]: time="2025-09-13T01:56:38.656453153Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910717" Sep 13 01:56:38.659580 containerd[1514]: time="2025-09-13T01:56:38.659260050Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:56:38.923216 containerd[1514]: time="2025-09-13T01:56:38.920867488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:56:38.926459 containerd[1514]: time="2025-09-13T01:56:38.926398554Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest 
\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.867133401s" Sep 13 01:56:38.926628 containerd[1514]: time="2025-09-13T01:56:38.926601160Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 13 01:56:38.929851 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 01:56:38.935783 (kubelet)[2153]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 01:56:39.188277 kubelet[2153]: E0913 01:56:39.187983 2153 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 01:56:39.190876 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 01:56:39.191133 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 01:56:39.415003 update_engine[1499]: I20250913 01:56:39.413519 1499 update_attempter.cc:509] Updating boot flags... Sep 13 01:56:39.461229 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2172) Sep 13 01:56:39.528865 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2172) Sep 13 01:56:43.433253 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 01:56:43.442533 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 01:56:43.487439 systemd[1]: Reloading requested from client PID 2198 ('systemctl') (unit session-11.scope)... Sep 13 01:56:43.487492 systemd[1]: Reloading... 
Sep 13 01:56:43.689037 zram_generator::config[2235]: No configuration found. Sep 13 01:56:43.842548 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 01:56:43.950548 systemd[1]: Reloading finished in 462 ms. Sep 13 01:56:44.025617 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 01:56:44.026255 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 01:56:44.026825 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 01:56:44.038741 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 01:56:44.326780 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 01:56:44.342727 (kubelet)[2305]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 01:56:44.411376 kubelet[2305]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 01:56:44.413215 kubelet[2305]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 01:56:44.413215 kubelet[2305]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 01:56:44.413215 kubelet[2305]: I0913 01:56:44.412103 2305 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 01:56:45.141219 kubelet[2305]: I0913 01:56:45.141116 2305 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 01:56:45.141219 kubelet[2305]: I0913 01:56:45.141159 2305 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 01:56:45.142218 kubelet[2305]: I0913 01:56:45.141748 2305 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 01:56:45.168612 kubelet[2305]: I0913 01:56:45.168565 2305 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 01:56:45.173755 kubelet[2305]: E0913 01:56:45.173438 2305 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.52.214:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.52.214:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:56:45.186546 kubelet[2305]: E0913 01:56:45.186345 2305 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 01:56:45.186546 kubelet[2305]: I0913 01:56:45.186401 2305 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 01:56:45.195267 kubelet[2305]: I0913 01:56:45.195236 2305 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /"
Sep 13 01:56:45.198248 kubelet[2305]: I0913 01:56:45.197349 2305 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 13 01:56:45.198248 kubelet[2305]: I0913 01:56:45.197636 2305 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 01:56:45.198248 kubelet[2305]: I0913 01:56:45.197691 2305 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-vx6h6.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 13 01:56:45.198248 kubelet[2305]: I0913 01:56:45.197975 2305 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 01:56:45.198656 kubelet[2305]: I0913 01:56:45.197992 2305 container_manager_linux.go:300] "Creating device plugin manager"
Sep 13 01:56:45.198656 kubelet[2305]: I0913 01:56:45.198267 2305 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 01:56:45.201582 kubelet[2305]: I0913 01:56:45.201025 2305 kubelet.go:408] "Attempting to sync node with API server"
Sep 13 01:56:45.201582 kubelet[2305]: I0913 01:56:45.201063 2305 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 01:56:45.201582 kubelet[2305]: I0913 01:56:45.201124 2305 kubelet.go:314] "Adding apiserver pod source"
Sep 13 01:56:45.201582 kubelet[2305]: I0913 01:56:45.201225 2305 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 01:56:45.205064 kubelet[2305]: W0913 01:56:45.205002 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.52.214:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-vx6h6.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.52.214:6443: connect: connection refused
Sep 13 01:56:45.205690 kubelet[2305]: E0913 01:56:45.205648 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.52.214:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-vx6h6.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.52.214:6443: connect: connection refused" logger="UnhandledError"
Sep 13 01:56:45.206115 kubelet[2305]: I0913 01:56:45.205928 2305 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 13 01:56:45.209143 kubelet[2305]: I0913 01:56:45.209119 2305 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 01:56:45.210378 kubelet[2305]: W0913 01:56:45.209928 2305 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 13 01:56:45.210378 kubelet[2305]: W0913 01:56:45.210179 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.52.214:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.52.214:6443: connect: connection refused
Sep 13 01:56:45.210378 kubelet[2305]: E0913 01:56:45.210260 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.52.214:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.52.214:6443: connect: connection refused" logger="UnhandledError"
Sep 13 01:56:45.211421 kubelet[2305]: I0913 01:56:45.211390 2305 server.go:1274] "Started kubelet"
Sep 13 01:56:45.213293 kubelet[2305]: I0913 01:56:45.213271 2305 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 01:56:45.219240 kubelet[2305]: E0913 01:56:45.215051 2305 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.52.214:6443/api/v1/namespaces/default/events\": dial tcp 10.230.52.214:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-vx6h6.gb1.brightbox.com.1864b4da852a76db  default    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-vx6h6.gb1.brightbox.com,UID:srv-vx6h6.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-vx6h6.gb1.brightbox.com,},FirstTimestamp:2025-09-13 01:56:45.211358939 +0000 UTC m=+0.861405100,LastTimestamp:2025-09-13 01:56:45.211358939 +0000 UTC m=+0.861405100,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-vx6h6.gb1.brightbox.com,}"
Sep 13 01:56:45.220632 kubelet[2305]: I0913 01:56:45.220293 2305 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 01:56:45.223101 kubelet[2305]: I0913 01:56:45.222460 2305 server.go:449] "Adding debug handlers to kubelet server"
Sep 13 01:56:45.227357 kubelet[2305]: I0913 01:56:45.227311 2305 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 01:56:45.227660 kubelet[2305]: I0913 01:56:45.227633 2305 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 01:56:45.228184 kubelet[2305]: I0913 01:56:45.227924 2305 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 01:56:45.230018 kubelet[2305]: I0913 01:56:45.229995 2305 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 13 01:56:45.230419 kubelet[2305]: E0913 01:56:45.230383 2305 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-vx6h6.gb1.brightbox.com\" not found"
Sep 13 01:56:45.230953 kubelet[2305]: I0913 01:56:45.230931 2305 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 13 01:56:45.231157 kubelet[2305]: I0913 01:56:45.231138 2305 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 01:56:45.232661 kubelet[2305]: W0913 01:56:45.232615 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.52.214:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.52.214:6443: connect: connection refused
Sep 13 01:56:45.232853 kubelet[2305]: E0913 01:56:45.232825 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.52.214:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.52.214:6443: connect: connection refused" logger="UnhandledError"
Sep 13 01:56:45.233593 kubelet[2305]: E0913 01:56:45.233559 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.52.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-vx6h6.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.52.214:6443: connect: connection refused" interval="200ms"
Sep 13 01:56:45.234016 kubelet[2305]: I0913 01:56:45.233992 2305 factory.go:221] Registration of the systemd container factory successfully
Sep 13 01:56:45.235517 kubelet[2305]: I0913 01:56:45.235490 2305 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 01:56:45.237231 kubelet[2305]: I0913 01:56:45.236878 2305 factory.go:221] Registration of the containerd container factory successfully
Sep 13 01:56:45.248845 kubelet[2305]: E0913 01:56:45.248799 2305 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 01:56:45.272739 kubelet[2305]: I0913 01:56:45.272691 2305 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 01:56:45.279053 kubelet[2305]: I0913 01:56:45.277287 2305 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 01:56:45.279053 kubelet[2305]: I0913 01:56:45.277336 2305 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 13 01:56:45.279053 kubelet[2305]: I0913 01:56:45.277379 2305 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 13 01:56:45.279053 kubelet[2305]: E0913 01:56:45.277451 2305 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 01:56:45.279649 kubelet[2305]: W0913 01:56:45.279602 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.52.214:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.52.214:6443: connect: connection refused
Sep 13 01:56:45.279761 kubelet[2305]: E0913 01:56:45.279657 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.52.214:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.52.214:6443: connect: connection refused" logger="UnhandledError"
Sep 13 01:56:45.288410 kubelet[2305]: I0913 01:56:45.288364 2305 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 13 01:56:45.288410 kubelet[2305]: I0913 01:56:45.288405 2305 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 13 01:56:45.288561 kubelet[2305]: I0913 01:56:45.288435 2305 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 01:56:45.290111 kubelet[2305]: I0913 01:56:45.290081 2305 policy_none.go:49] "None policy: Start"
Sep 13 01:56:45.291080 kubelet[2305]: I0913 01:56:45.291060 2305 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 13 01:56:45.291151 kubelet[2305]: I0913 01:56:45.291087 2305 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 01:56:45.300913 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 13 01:56:45.311602 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 13 01:56:45.315989 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 13 01:56:45.327786 kubelet[2305]: I0913 01:56:45.327522 2305 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 01:56:45.327915 kubelet[2305]: I0913 01:56:45.327792 2305 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 01:56:45.327915 kubelet[2305]: I0913 01:56:45.327819 2305 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 01:56:45.328289 kubelet[2305]: I0913 01:56:45.328265 2305 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 01:56:45.331388 kubelet[2305]: E0913 01:56:45.331336 2305 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-vx6h6.gb1.brightbox.com\" not found"
Sep 13 01:56:45.396363 systemd[1]: Created slice kubepods-burstable-podc983ebc85d6bfc9e75e5edb766fa3990.slice - libcontainer container kubepods-burstable-podc983ebc85d6bfc9e75e5edb766fa3990.slice.
Sep 13 01:56:45.415332 systemd[1]: Created slice kubepods-burstable-pod4f8cab408fedbfb523da1186cc722555.slice - libcontainer container kubepods-burstable-pod4f8cab408fedbfb523da1186cc722555.slice.
Sep 13 01:56:45.422428 systemd[1]: Created slice kubepods-burstable-pod3459550eb18f777f5979fd06a540246a.slice - libcontainer container kubepods-burstable-pod3459550eb18f777f5979fd06a540246a.slice.
Sep 13 01:56:45.432094 kubelet[2305]: I0913 01:56:45.431870 2305 kubelet_node_status.go:72] "Attempting to register node" node="srv-vx6h6.gb1.brightbox.com"
Sep 13 01:56:45.433252 kubelet[2305]: E0913 01:56:45.433151 2305 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.52.214:6443/api/v1/nodes\": dial tcp 10.230.52.214:6443: connect: connection refused" node="srv-vx6h6.gb1.brightbox.com"
Sep 13 01:56:45.434680 kubelet[2305]: E0913 01:56:45.434635 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.52.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-vx6h6.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.52.214:6443: connect: connection refused" interval="400ms"
Sep 13 01:56:45.532390 kubelet[2305]: I0913 01:56:45.532217 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3459550eb18f777f5979fd06a540246a-flexvolume-dir\") pod \"kube-controller-manager-srv-vx6h6.gb1.brightbox.com\" (UID: \"3459550eb18f777f5979fd06a540246a\") " pod="kube-system/kube-controller-manager-srv-vx6h6.gb1.brightbox.com"
Sep 13 01:56:45.532390 kubelet[2305]: I0913 01:56:45.532296 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3459550eb18f777f5979fd06a540246a-kubeconfig\") pod \"kube-controller-manager-srv-vx6h6.gb1.brightbox.com\" (UID: \"3459550eb18f777f5979fd06a540246a\") " pod="kube-system/kube-controller-manager-srv-vx6h6.gb1.brightbox.com"
Sep 13 01:56:45.532390 kubelet[2305]: I0913 01:56:45.532406 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c983ebc85d6bfc9e75e5edb766fa3990-kubeconfig\") pod \"kube-scheduler-srv-vx6h6.gb1.brightbox.com\" (UID: \"c983ebc85d6bfc9e75e5edb766fa3990\") " pod="kube-system/kube-scheduler-srv-vx6h6.gb1.brightbox.com"
Sep 13 01:56:45.532697 kubelet[2305]: I0913 01:56:45.532502 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8cab408fedbfb523da1186cc722555-usr-share-ca-certificates\") pod \"kube-apiserver-srv-vx6h6.gb1.brightbox.com\" (UID: \"4f8cab408fedbfb523da1186cc722555\") " pod="kube-system/kube-apiserver-srv-vx6h6.gb1.brightbox.com"
Sep 13 01:56:45.532697 kubelet[2305]: I0913 01:56:45.532536 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3459550eb18f777f5979fd06a540246a-ca-certs\") pod \"kube-controller-manager-srv-vx6h6.gb1.brightbox.com\" (UID: \"3459550eb18f777f5979fd06a540246a\") " pod="kube-system/kube-controller-manager-srv-vx6h6.gb1.brightbox.com"
Sep 13 01:56:45.532697 kubelet[2305]: I0913 01:56:45.532577 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3459550eb18f777f5979fd06a540246a-k8s-certs\") pod \"kube-controller-manager-srv-vx6h6.gb1.brightbox.com\" (UID: \"3459550eb18f777f5979fd06a540246a\") " pod="kube-system/kube-controller-manager-srv-vx6h6.gb1.brightbox.com"
Sep 13 01:56:45.532697 kubelet[2305]: I0913 01:56:45.532609 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3459550eb18f777f5979fd06a540246a-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-vx6h6.gb1.brightbox.com\" (UID: \"3459550eb18f777f5979fd06a540246a\") " pod="kube-system/kube-controller-manager-srv-vx6h6.gb1.brightbox.com"
Sep 13 01:56:45.532697 kubelet[2305]: I0913 01:56:45.532653 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8cab408fedbfb523da1186cc722555-ca-certs\") pod \"kube-apiserver-srv-vx6h6.gb1.brightbox.com\" (UID: \"4f8cab408fedbfb523da1186cc722555\") " pod="kube-system/kube-apiserver-srv-vx6h6.gb1.brightbox.com"
Sep 13 01:56:45.532908 kubelet[2305]: I0913 01:56:45.532684 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8cab408fedbfb523da1186cc722555-k8s-certs\") pod \"kube-apiserver-srv-vx6h6.gb1.brightbox.com\" (UID: \"4f8cab408fedbfb523da1186cc722555\") " pod="kube-system/kube-apiserver-srv-vx6h6.gb1.brightbox.com"
Sep 13 01:56:45.636210 kubelet[2305]: I0913 01:56:45.636139 2305 kubelet_node_status.go:72] "Attempting to register node" node="srv-vx6h6.gb1.brightbox.com"
Sep 13 01:56:45.636874 kubelet[2305]: E0913 01:56:45.636843 2305 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.52.214:6443/api/v1/nodes\": dial tcp 10.230.52.214:6443: connect: connection refused" node="srv-vx6h6.gb1.brightbox.com"
Sep 13 01:56:45.716110 containerd[1514]: time="2025-09-13T01:56:45.715906235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-vx6h6.gb1.brightbox.com,Uid:c983ebc85d6bfc9e75e5edb766fa3990,Namespace:kube-system,Attempt:0,}"
Sep 13 01:56:45.728779 containerd[1514]: time="2025-09-13T01:56:45.728716979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-vx6h6.gb1.brightbox.com,Uid:4f8cab408fedbfb523da1186cc722555,Namespace:kube-system,Attempt:0,}"
Sep 13 01:56:45.729292 containerd[1514]: time="2025-09-13T01:56:45.729250131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-vx6h6.gb1.brightbox.com,Uid:3459550eb18f777f5979fd06a540246a,Namespace:kube-system,Attempt:0,}"
Sep 13 01:56:45.835782 kubelet[2305]: E0913 01:56:45.835629 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.52.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-vx6h6.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.52.214:6443: connect: connection refused" interval="800ms"
Sep 13 01:56:46.041113 kubelet[2305]: I0913 01:56:46.041062 2305 kubelet_node_status.go:72] "Attempting to register node" node="srv-vx6h6.gb1.brightbox.com"
Sep 13 01:56:46.041805 kubelet[2305]: E0913 01:56:46.041755 2305 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.52.214:6443/api/v1/nodes\": dial tcp 10.230.52.214:6443: connect: connection refused" node="srv-vx6h6.gb1.brightbox.com"
Sep 13 01:56:46.229020 kubelet[2305]: W0913 01:56:46.228910 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.52.214:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.52.214:6443: connect: connection refused
Sep 13 01:56:46.229020 kubelet[2305]: E0913 01:56:46.228969 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.52.214:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.52.214:6443: connect: connection refused" logger="UnhandledError"
Sep 13 01:56:46.322125 kubelet[2305]: W0913 01:56:46.321788 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.52.214:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.52.214:6443: connect: connection refused
Sep 13 01:56:46.322125 kubelet[2305]: E0913 01:56:46.321894 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.52.214:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.52.214:6443: connect: connection refused" logger="UnhandledError"
Sep 13 01:56:46.346659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2028142232.mount: Deactivated successfully.
Sep 13 01:56:46.353478 containerd[1514]: time="2025-09-13T01:56:46.353394012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 01:56:46.354727 containerd[1514]: time="2025-09-13T01:56:46.354692834Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 01:56:46.355765 containerd[1514]: time="2025-09-13T01:56:46.355673826Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 13 01:56:46.356979 containerd[1514]: time="2025-09-13T01:56:46.356755583Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 01:56:46.358762 containerd[1514]: time="2025-09-13T01:56:46.358559380Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 13 01:56:46.358762 containerd[1514]: time="2025-09-13T01:56:46.358648730Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 01:56:46.358762 containerd[1514]: time="2025-09-13T01:56:46.358719908Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Sep 13 01:56:46.361645 containerd[1514]: time="2025-09-13T01:56:46.361612506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 13 01:56:46.364810 containerd[1514]: time="2025-09-13T01:56:46.363996201Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 634.679189ms"
Sep 13 01:56:46.368212 containerd[1514]: time="2025-09-13T01:56:46.367307402Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 651.197324ms"
Sep 13 01:56:46.371068 containerd[1514]: time="2025-09-13T01:56:46.371028021Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 642.191887ms"
Sep 13 01:56:46.578927 containerd[1514]: time="2025-09-13T01:56:46.578510133Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 01:56:46.580312 containerd[1514]: time="2025-09-13T01:56:46.578609310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 01:56:46.580732 containerd[1514]: time="2025-09-13T01:56:46.580671514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 01:56:46.580996 containerd[1514]: time="2025-09-13T01:56:46.580862900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 01:56:46.585411 containerd[1514]: time="2025-09-13T01:56:46.585319106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 01:56:46.585965 containerd[1514]: time="2025-09-13T01:56:46.585648124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 01:56:46.585965 containerd[1514]: time="2025-09-13T01:56:46.585682081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 01:56:46.586609 containerd[1514]: time="2025-09-13T01:56:46.586406019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 01:56:46.590572 containerd[1514]: time="2025-09-13T01:56:46.590458037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 01:56:46.590789 containerd[1514]: time="2025-09-13T01:56:46.590519531Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 01:56:46.591034 containerd[1514]: time="2025-09-13T01:56:46.590890214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 01:56:46.593292 containerd[1514]: time="2025-09-13T01:56:46.592544255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 01:56:46.625710 systemd[1]: Started cri-containerd-8fa62aaf75ba0f37fa044464e1da573adcca3acdd69a6e246a0bd2c59f697e27.scope - libcontainer container 8fa62aaf75ba0f37fa044464e1da573adcca3acdd69a6e246a0bd2c59f697e27.
Sep 13 01:56:46.637406 kubelet[2305]: E0913 01:56:46.636416 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.52.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-vx6h6.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.52.214:6443: connect: connection refused" interval="1.6s"
Sep 13 01:56:46.647289 systemd[1]: Started cri-containerd-6734d219d3ac5731c86e55d86b0010d35c15dade5640cbe19db6b81cd5b1c1aa.scope - libcontainer container 6734d219d3ac5731c86e55d86b0010d35c15dade5640cbe19db6b81cd5b1c1aa.
Sep 13 01:56:46.648911 kubelet[2305]: W0913 01:56:46.647791 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.52.214:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.52.214:6443: connect: connection refused
Sep 13 01:56:46.648911 kubelet[2305]: E0913 01:56:46.647896 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.52.214:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.52.214:6443: connect: connection refused" logger="UnhandledError"
Sep 13 01:56:46.668536 systemd[1]: Started cri-containerd-0089a0e7ba37de1c7aaec69faa1ab9f9cd4ee3fb9407aeba5ea2ed129f745214.scope - libcontainer container 0089a0e7ba37de1c7aaec69faa1ab9f9cd4ee3fb9407aeba5ea2ed129f745214.
Sep 13 01:56:46.740508 containerd[1514]: time="2025-09-13T01:56:46.740266762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-vx6h6.gb1.brightbox.com,Uid:4f8cab408fedbfb523da1186cc722555,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fa62aaf75ba0f37fa044464e1da573adcca3acdd69a6e246a0bd2c59f697e27\""
Sep 13 01:56:46.759081 containerd[1514]: time="2025-09-13T01:56:46.758380783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-vx6h6.gb1.brightbox.com,Uid:c983ebc85d6bfc9e75e5edb766fa3990,Namespace:kube-system,Attempt:0,} returns sandbox id \"6734d219d3ac5731c86e55d86b0010d35c15dade5640cbe19db6b81cd5b1c1aa\""
Sep 13 01:56:46.762152 containerd[1514]: time="2025-09-13T01:56:46.760774057Z" level=info msg="CreateContainer within sandbox \"8fa62aaf75ba0f37fa044464e1da573adcca3acdd69a6e246a0bd2c59f697e27\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 13 01:56:46.763839 containerd[1514]: time="2025-09-13T01:56:46.763797425Z" level=info msg="CreateContainer within sandbox \"6734d219d3ac5731c86e55d86b0010d35c15dade5640cbe19db6b81cd5b1c1aa\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 13 01:56:46.788377 kubelet[2305]: W0913 01:56:46.788097 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.52.214:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-vx6h6.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.52.214:6443: connect: connection refused
Sep 13 01:56:46.788377 kubelet[2305]: E0913 01:56:46.788208 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.52.214:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-vx6h6.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.52.214:6443: connect: connection refused" logger="UnhandledError"
Sep 13 01:56:46.797164 containerd[1514]: time="2025-09-13T01:56:46.796406492Z" level=info msg="CreateContainer within sandbox \"8fa62aaf75ba0f37fa044464e1da573adcca3acdd69a6e246a0bd2c59f697e27\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"196cda9aec45c3db5f800ae725fe0967b5845b1102277354a0667bd79b2dd754\""
Sep 13 01:56:46.798214 containerd[1514]: time="2025-09-13T01:56:46.797486820Z" level=info msg="StartContainer for \"196cda9aec45c3db5f800ae725fe0967b5845b1102277354a0667bd79b2dd754\""
Sep 13 01:56:46.813419 containerd[1514]: time="2025-09-13T01:56:46.813369894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-vx6h6.gb1.brightbox.com,Uid:3459550eb18f777f5979fd06a540246a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0089a0e7ba37de1c7aaec69faa1ab9f9cd4ee3fb9407aeba5ea2ed129f745214\""
Sep 13 01:56:46.817954 containerd[1514]: time="2025-09-13T01:56:46.817915504Z" level=info msg="CreateContainer within sandbox \"0089a0e7ba37de1c7aaec69faa1ab9f9cd4ee3fb9407aeba5ea2ed129f745214\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 13 01:56:46.823414 containerd[1514]: time="2025-09-13T01:56:46.823375735Z" level=info msg="CreateContainer within sandbox \"6734d219d3ac5731c86e55d86b0010d35c15dade5640cbe19db6b81cd5b1c1aa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"09bc6df24b4d94bba691cbebc0ed3de39e3a80e4deb9e6ba4d9ff241a9eba1e3\""
Sep 13 01:56:46.824144 containerd[1514]: time="2025-09-13T01:56:46.824116339Z" level=info msg="StartContainer for \"09bc6df24b4d94bba691cbebc0ed3de39e3a80e4deb9e6ba4d9ff241a9eba1e3\""
Sep 13 01:56:46.846436 containerd[1514]: time="2025-09-13T01:56:46.846085932Z" level=info msg="CreateContainer within sandbox \"0089a0e7ba37de1c7aaec69faa1ab9f9cd4ee3fb9407aeba5ea2ed129f745214\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4ae27813407dcb93c132ac833aabe46cb2b14acf9549e718daaf92876421533d\""
Sep 13 01:56:46.846600 kubelet[2305]: I0913 01:56:46.846365 2305 kubelet_node_status.go:72] "Attempting to register node" node="srv-vx6h6.gb1.brightbox.com"
Sep 13 01:56:46.846600 kubelet[2305]: E0913 01:56:46.847342 2305 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.52.214:6443/api/v1/nodes\": dial tcp 10.230.52.214:6443: connect: connection refused" node="srv-vx6h6.gb1.brightbox.com"
Sep 13 01:56:46.849433 containerd[1514]: time="2025-09-13T01:56:46.848928505Z" level=info msg="StartContainer for \"4ae27813407dcb93c132ac833aabe46cb2b14acf9549e718daaf92876421533d\""
Sep 13 01:56:46.859048 systemd[1]: Started cri-containerd-196cda9aec45c3db5f800ae725fe0967b5845b1102277354a0667bd79b2dd754.scope - libcontainer container 196cda9aec45c3db5f800ae725fe0967b5845b1102277354a0667bd79b2dd754.
Sep 13 01:56:46.879400 systemd[1]: Started cri-containerd-09bc6df24b4d94bba691cbebc0ed3de39e3a80e4deb9e6ba4d9ff241a9eba1e3.scope - libcontainer container 09bc6df24b4d94bba691cbebc0ed3de39e3a80e4deb9e6ba4d9ff241a9eba1e3.
Sep 13 01:56:46.926419 systemd[1]: Started cri-containerd-4ae27813407dcb93c132ac833aabe46cb2b14acf9549e718daaf92876421533d.scope - libcontainer container 4ae27813407dcb93c132ac833aabe46cb2b14acf9549e718daaf92876421533d.
Sep 13 01:56:46.992039 containerd[1514]: time="2025-09-13T01:56:46.991976263Z" level=info msg="StartContainer for \"196cda9aec45c3db5f800ae725fe0967b5845b1102277354a0667bd79b2dd754\" returns successfully"
Sep 13 01:56:46.993699 containerd[1514]: time="2025-09-13T01:56:46.992108289Z" level=info msg="StartContainer for \"09bc6df24b4d94bba691cbebc0ed3de39e3a80e4deb9e6ba4d9ff241a9eba1e3\" returns successfully"
Sep 13 01:56:47.030263 containerd[1514]: time="2025-09-13T01:56:47.029959324Z" level=info msg="StartContainer for \"4ae27813407dcb93c132ac833aabe46cb2b14acf9549e718daaf92876421533d\" returns successfully"
Sep 13 01:56:47.373808 kubelet[2305]: E0913 01:56:47.372813 2305 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.52.214:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.52.214:6443: connect: connection refused" logger="UnhandledError"
Sep 13 01:56:48.453364 kubelet[2305]: I0913 01:56:48.452635 2305 kubelet_node_status.go:72] "Attempting to register node" node="srv-vx6h6.gb1.brightbox.com"
Sep 13 01:56:50.058056 kubelet[2305]: E0913 01:56:50.057990 2305 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-vx6h6.gb1.brightbox.com\" not found" node="srv-vx6h6.gb1.brightbox.com"
Sep 13 01:56:50.096506 kubelet[2305]: E0913 01:56:50.096320 2305 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-vx6h6.gb1.brightbox.com.1864b4da852a76db  default    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-vx6h6.gb1.brightbox.com,UID:srv-vx6h6.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-vx6h6.gb1.brightbox.com,},FirstTimestamp:2025-09-13 01:56:45.211358939 +0000 UTC m=+0.861405100,LastTimestamp:2025-09-13 01:56:45.211358939 +0000 UTC m=+0.861405100,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-vx6h6.gb1.brightbox.com,}"
Sep 13 01:56:50.159092 kubelet[2305]: I0913 01:56:50.159023 2305 kubelet_node_status.go:75] "Successfully registered node" node="srv-vx6h6.gb1.brightbox.com"
Sep 13 01:56:50.159092 kubelet[2305]: E0913 01:56:50.159089 2305 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"srv-vx6h6.gb1.brightbox.com\": node \"srv-vx6h6.gb1.brightbox.com\" not found"
Sep 13 01:56:50.214991 kubelet[2305]: I0913 01:56:50.214939 2305 apiserver.go:52] "Watching apiserver"
Sep 13 01:56:50.232327 kubelet[2305]: I0913 01:56:50.232294 2305 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 13 01:56:52.355324 systemd[1]: Reloading requested from client PID 2579 ('systemctl') (unit session-11.scope)...
Sep 13 01:56:52.355964 systemd[1]: Reloading...
Sep 13 01:56:52.527305 zram_generator::config[2618]: No configuration found.
Sep 13 01:56:52.750142 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 01:56:52.883471 systemd[1]: Reloading finished in 526 ms.
Sep 13 01:56:52.956121 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 01:56:52.975148 systemd[1]: kubelet.service: Deactivated successfully.
Sep 13 01:56:52.975688 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 01:56:52.975821 systemd[1]: kubelet.service: Consumed 1.405s CPU time, 125.6M memory peak, 0B memory swap peak.
Sep 13 01:56:52.983700 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 01:56:53.285443 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 01:56:53.290045 (kubelet)[2683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 13 01:56:53.393068 kubelet[2683]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 01:56:53.393068 kubelet[2683]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 13 01:56:53.393068 kubelet[2683]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 01:56:53.394011 kubelet[2683]: I0913 01:56:53.393180 2683 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 01:56:53.408256 kubelet[2683]: I0913 01:56:53.406289 2683 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 01:56:53.408256 kubelet[2683]: I0913 01:56:53.406363 2683 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 01:56:53.408256 kubelet[2683]: I0913 01:56:53.406841 2683 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 01:56:53.413376 kubelet[2683]: I0913 01:56:53.413273 2683 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 01:56:53.424225 kubelet[2683]: I0913 01:56:53.423603 2683 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 01:56:53.436772 kubelet[2683]: E0913 01:56:53.436698 2683 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 01:56:53.436772 kubelet[2683]: I0913 01:56:53.436770 2683 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 01:56:53.443990 kubelet[2683]: I0913 01:56:53.443958 2683 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 01:56:53.444839 kubelet[2683]: I0913 01:56:53.444230 2683 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 01:56:53.444839 kubelet[2683]: I0913 01:56:53.444474 2683 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 01:56:53.444839 kubelet[2683]: I0913 01:56:53.444517 2683 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-vx6h6.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topo
logyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 01:56:53.444839 kubelet[2683]: I0913 01:56:53.444805 2683 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 01:56:53.445236 kubelet[2683]: I0913 01:56:53.444822 2683 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 01:56:53.445236 kubelet[2683]: I0913 01:56:53.444925 2683 state_mem.go:36] "Initialized new in-memory state store" Sep 13 01:56:53.445236 kubelet[2683]: I0913 01:56:53.445116 2683 kubelet.go:408] "Attempting to sync node with API server" Sep 13 01:56:53.445236 kubelet[2683]: I0913 01:56:53.445139 2683 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 01:56:53.445236 kubelet[2683]: I0913 01:56:53.445204 2683 kubelet.go:314] "Adding apiserver pod source" Sep 13 01:56:53.445236 kubelet[2683]: I0913 01:56:53.445228 2683 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 01:56:53.448329 kubelet[2683]: I0913 01:56:53.447182 2683 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 01:56:53.451966 kubelet[2683]: I0913 01:56:53.451890 2683 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 01:56:53.455903 kubelet[2683]: I0913 01:56:53.454612 2683 server.go:1274] "Started kubelet" Sep 13 01:56:53.457451 kubelet[2683]: I0913 01:56:53.457303 2683 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 01:56:53.471509 kubelet[2683]: I0913 01:56:53.469111 2683 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 01:56:53.471509 kubelet[2683]: I0913 01:56:53.470547 2683 server.go:449] "Adding debug handlers to kubelet server" Sep 13 01:56:53.487531 kubelet[2683]: I0913 01:56:53.486379 2683 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 01:56:53.487531 kubelet[2683]: I0913 01:56:53.486789 
2683 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 01:56:53.487531 kubelet[2683]: I0913 01:56:53.487176 2683 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 01:56:53.506963 kubelet[2683]: I0913 01:56:53.506616 2683 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 01:56:53.506963 kubelet[2683]: E0913 01:56:53.506797 2683 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-vx6h6.gb1.brightbox.com\" not found" Sep 13 01:56:53.510593 kubelet[2683]: I0913 01:56:53.510312 2683 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 01:56:53.511238 kubelet[2683]: I0913 01:56:53.511119 2683 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 01:56:53.511770 kubelet[2683]: I0913 01:56:53.511735 2683 reconciler.go:26] "Reconciler: start to sync state" Sep 13 01:56:53.517708 kubelet[2683]: I0913 01:56:53.515999 2683 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 01:56:53.517708 kubelet[2683]: I0913 01:56:53.516052 2683 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 01:56:53.517708 kubelet[2683]: I0913 01:56:53.516085 2683 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 01:56:53.517708 kubelet[2683]: E0913 01:56:53.516163 2683 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 01:56:53.520151 kubelet[2683]: I0913 01:56:53.518692 2683 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 01:56:53.536200 kubelet[2683]: I0913 01:56:53.536147 2683 factory.go:221] Registration of the containerd container factory successfully Sep 13 01:56:53.536200 kubelet[2683]: I0913 01:56:53.536177 2683 factory.go:221] Registration of the systemd container factory successfully Sep 13 01:56:53.540038 kubelet[2683]: E0913 01:56:53.538482 2683 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 01:56:53.616841 kubelet[2683]: E0913 01:56:53.616652 2683 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 01:56:53.625367 kubelet[2683]: I0913 01:56:53.624887 2683 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 01:56:53.625367 kubelet[2683]: I0913 01:56:53.624911 2683 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 01:56:53.625367 kubelet[2683]: I0913 01:56:53.624945 2683 state_mem.go:36] "Initialized new in-memory state store" Sep 13 01:56:53.625367 kubelet[2683]: I0913 01:56:53.625211 2683 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 01:56:53.625367 kubelet[2683]: I0913 01:56:53.625250 2683 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 01:56:53.625367 kubelet[2683]: I0913 01:56:53.625291 2683 policy_none.go:49] "None policy: Start" Sep 13 01:56:53.627363 kubelet[2683]: I0913 01:56:53.627032 2683 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 01:56:53.627363 kubelet[2683]: I0913 01:56:53.627075 2683 state_mem.go:35] "Initializing new in-memory state store" Sep 13 01:56:53.627363 kubelet[2683]: I0913 01:56:53.627274 2683 state_mem.go:75] "Updated machine memory state" Sep 13 01:56:53.642296 kubelet[2683]: I0913 01:56:53.641933 2683 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 01:56:53.642789 kubelet[2683]: I0913 01:56:53.642768 2683 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 01:56:53.643253 kubelet[2683]: I0913 01:56:53.642948 2683 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 01:56:53.644287 kubelet[2683]: I0913 01:56:53.643812 2683 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 01:56:53.778934 kubelet[2683]: 
I0913 01:56:53.778711 2683 kubelet_node_status.go:72] "Attempting to register node" node="srv-vx6h6.gb1.brightbox.com" Sep 13 01:56:53.792065 kubelet[2683]: I0913 01:56:53.790823 2683 kubelet_node_status.go:111] "Node was previously registered" node="srv-vx6h6.gb1.brightbox.com" Sep 13 01:56:53.792065 kubelet[2683]: I0913 01:56:53.791025 2683 kubelet_node_status.go:75] "Successfully registered node" node="srv-vx6h6.gb1.brightbox.com" Sep 13 01:56:53.838280 kubelet[2683]: W0913 01:56:53.837956 2683 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 01:56:53.844812 kubelet[2683]: W0913 01:56:53.844131 2683 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 01:56:53.845264 kubelet[2683]: W0913 01:56:53.845029 2683 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 01:56:53.920006 kubelet[2683]: I0913 01:56:53.919438 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8cab408fedbfb523da1186cc722555-k8s-certs\") pod \"kube-apiserver-srv-vx6h6.gb1.brightbox.com\" (UID: \"4f8cab408fedbfb523da1186cc722555\") " pod="kube-system/kube-apiserver-srv-vx6h6.gb1.brightbox.com" Sep 13 01:56:53.920006 kubelet[2683]: I0913 01:56:53.919494 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8cab408fedbfb523da1186cc722555-usr-share-ca-certificates\") pod \"kube-apiserver-srv-vx6h6.gb1.brightbox.com\" (UID: \"4f8cab408fedbfb523da1186cc722555\") " pod="kube-system/kube-apiserver-srv-vx6h6.gb1.brightbox.com" Sep 13 
01:56:53.920006 kubelet[2683]: I0913 01:56:53.919534 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3459550eb18f777f5979fd06a540246a-k8s-certs\") pod \"kube-controller-manager-srv-vx6h6.gb1.brightbox.com\" (UID: \"3459550eb18f777f5979fd06a540246a\") " pod="kube-system/kube-controller-manager-srv-vx6h6.gb1.brightbox.com" Sep 13 01:56:53.920006 kubelet[2683]: I0913 01:56:53.919575 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c983ebc85d6bfc9e75e5edb766fa3990-kubeconfig\") pod \"kube-scheduler-srv-vx6h6.gb1.brightbox.com\" (UID: \"c983ebc85d6bfc9e75e5edb766fa3990\") " pod="kube-system/kube-scheduler-srv-vx6h6.gb1.brightbox.com" Sep 13 01:56:53.920006 kubelet[2683]: I0913 01:56:53.919602 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8cab408fedbfb523da1186cc722555-ca-certs\") pod \"kube-apiserver-srv-vx6h6.gb1.brightbox.com\" (UID: \"4f8cab408fedbfb523da1186cc722555\") " pod="kube-system/kube-apiserver-srv-vx6h6.gb1.brightbox.com" Sep 13 01:56:53.920893 kubelet[2683]: I0913 01:56:53.919629 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3459550eb18f777f5979fd06a540246a-ca-certs\") pod \"kube-controller-manager-srv-vx6h6.gb1.brightbox.com\" (UID: \"3459550eb18f777f5979fd06a540246a\") " pod="kube-system/kube-controller-manager-srv-vx6h6.gb1.brightbox.com" Sep 13 01:56:53.920893 kubelet[2683]: I0913 01:56:53.919654 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3459550eb18f777f5979fd06a540246a-flexvolume-dir\") pod 
\"kube-controller-manager-srv-vx6h6.gb1.brightbox.com\" (UID: \"3459550eb18f777f5979fd06a540246a\") " pod="kube-system/kube-controller-manager-srv-vx6h6.gb1.brightbox.com" Sep 13 01:56:53.920893 kubelet[2683]: I0913 01:56:53.919703 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3459550eb18f777f5979fd06a540246a-kubeconfig\") pod \"kube-controller-manager-srv-vx6h6.gb1.brightbox.com\" (UID: \"3459550eb18f777f5979fd06a540246a\") " pod="kube-system/kube-controller-manager-srv-vx6h6.gb1.brightbox.com" Sep 13 01:56:53.920893 kubelet[2683]: I0913 01:56:53.919731 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3459550eb18f777f5979fd06a540246a-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-vx6h6.gb1.brightbox.com\" (UID: \"3459550eb18f777f5979fd06a540246a\") " pod="kube-system/kube-controller-manager-srv-vx6h6.gb1.brightbox.com" Sep 13 01:56:54.462794 kubelet[2683]: I0913 01:56:54.461954 2683 apiserver.go:52] "Watching apiserver" Sep 13 01:56:54.512909 kubelet[2683]: I0913 01:56:54.512122 2683 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 01:56:54.584062 kubelet[2683]: W0913 01:56:54.584013 2683 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 01:56:54.584278 kubelet[2683]: E0913 01:56:54.584137 2683 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-vx6h6.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-vx6h6.gb1.brightbox.com" Sep 13 01:56:54.620089 kubelet[2683]: I0913 01:56:54.619977 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-srv-vx6h6.gb1.brightbox.com" podStartSLOduration=1.619920777 podStartE2EDuration="1.619920777s" podCreationTimestamp="2025-09-13 01:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:56:54.615640113 +0000 UTC m=+1.301051152" watchObservedRunningTime="2025-09-13 01:56:54.619920777 +0000 UTC m=+1.305331823" Sep 13 01:56:54.631634 kubelet[2683]: I0913 01:56:54.631566 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-vx6h6.gb1.brightbox.com" podStartSLOduration=1.631537286 podStartE2EDuration="1.631537286s" podCreationTimestamp="2025-09-13 01:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:56:54.629549694 +0000 UTC m=+1.314960754" watchObservedRunningTime="2025-09-13 01:56:54.631537286 +0000 UTC m=+1.316948319" Sep 13 01:56:57.085021 kubelet[2683]: I0913 01:56:57.084956 2683 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 01:56:57.086475 kubelet[2683]: I0913 01:56:57.085951 2683 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 01:56:57.086549 containerd[1514]: time="2025-09-13T01:56:57.085699383Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 13 01:56:57.742064 kubelet[2683]: I0913 01:56:57.741323 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-vx6h6.gb1.brightbox.com" podStartSLOduration=4.741303836 podStartE2EDuration="4.741303836s" podCreationTimestamp="2025-09-13 01:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:56:54.643964011 +0000 UTC m=+1.329375066" watchObservedRunningTime="2025-09-13 01:56:57.741303836 +0000 UTC m=+4.426714889" Sep 13 01:56:57.759695 systemd[1]: Created slice kubepods-besteffort-pod7bc36d3b_fd3e_4140_ad45_e3fd7a8f24ff.slice - libcontainer container kubepods-besteffort-pod7bc36d3b_fd3e_4140_ad45_e3fd7a8f24ff.slice. Sep 13 01:56:57.843389 kubelet[2683]: I0913 01:56:57.843315 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bc36d3b-fd3e-4140-ad45-e3fd7a8f24ff-xtables-lock\") pod \"kube-proxy-jnfhn\" (UID: \"7bc36d3b-fd3e-4140-ad45-e3fd7a8f24ff\") " pod="kube-system/kube-proxy-jnfhn" Sep 13 01:56:57.843389 kubelet[2683]: I0913 01:56:57.843391 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bc36d3b-fd3e-4140-ad45-e3fd7a8f24ff-lib-modules\") pod \"kube-proxy-jnfhn\" (UID: \"7bc36d3b-fd3e-4140-ad45-e3fd7a8f24ff\") " pod="kube-system/kube-proxy-jnfhn" Sep 13 01:56:57.843690 kubelet[2683]: I0913 01:56:57.843426 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7bc36d3b-fd3e-4140-ad45-e3fd7a8f24ff-kube-proxy\") pod \"kube-proxy-jnfhn\" (UID: \"7bc36d3b-fd3e-4140-ad45-e3fd7a8f24ff\") " pod="kube-system/kube-proxy-jnfhn" Sep 13 01:56:57.843690 kubelet[2683]: I0913 01:56:57.843452 2683 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdqs6\" (UniqueName: \"kubernetes.io/projected/7bc36d3b-fd3e-4140-ad45-e3fd7a8f24ff-kube-api-access-kdqs6\") pod \"kube-proxy-jnfhn\" (UID: \"7bc36d3b-fd3e-4140-ad45-e3fd7a8f24ff\") " pod="kube-system/kube-proxy-jnfhn" Sep 13 01:56:58.076538 containerd[1514]: time="2025-09-13T01:56:58.075614722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jnfhn,Uid:7bc36d3b-fd3e-4140-ad45-e3fd7a8f24ff,Namespace:kube-system,Attempt:0,}" Sep 13 01:56:58.131608 containerd[1514]: time="2025-09-13T01:56:58.131390740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:56:58.131608 containerd[1514]: time="2025-09-13T01:56:58.131515422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:56:58.131608 containerd[1514]: time="2025-09-13T01:56:58.131554544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:56:58.134386 containerd[1514]: time="2025-09-13T01:56:58.134027064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:56:58.181364 systemd[1]: Started cri-containerd-e92ec47d4e08987d4dd8c8dd5de9757c35a3815e8fdb7c80acbcd1180d84ed9e.scope - libcontainer container e92ec47d4e08987d4dd8c8dd5de9757c35a3815e8fdb7c80acbcd1180d84ed9e. Sep 13 01:56:58.265165 systemd[1]: Created slice kubepods-besteffort-podc9f35844_72c2_417e_8d75_911ae3d1cb78.slice - libcontainer container kubepods-besteffort-podc9f35844_72c2_417e_8d75_911ae3d1cb78.slice. 
Sep 13 01:56:58.308105 containerd[1514]: time="2025-09-13T01:56:58.307508124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jnfhn,Uid:7bc36d3b-fd3e-4140-ad45-e3fd7a8f24ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"e92ec47d4e08987d4dd8c8dd5de9757c35a3815e8fdb7c80acbcd1180d84ed9e\"" Sep 13 01:56:58.316429 containerd[1514]: time="2025-09-13T01:56:58.316237232Z" level=info msg="CreateContainer within sandbox \"e92ec47d4e08987d4dd8c8dd5de9757c35a3815e8fdb7c80acbcd1180d84ed9e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 01:56:58.342429 containerd[1514]: time="2025-09-13T01:56:58.342176130Z" level=info msg="CreateContainer within sandbox \"e92ec47d4e08987d4dd8c8dd5de9757c35a3815e8fdb7c80acbcd1180d84ed9e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"529e38e390030ea5dbe87baeda85c2cdb148dcab5bf55d42a6099d2d098988ec\"" Sep 13 01:56:58.344770 containerd[1514]: time="2025-09-13T01:56:58.344740539Z" level=info msg="StartContainer for \"529e38e390030ea5dbe87baeda85c2cdb148dcab5bf55d42a6099d2d098988ec\"" Sep 13 01:56:58.347409 kubelet[2683]: I0913 01:56:58.347376 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56tmv\" (UniqueName: \"kubernetes.io/projected/c9f35844-72c2-417e-8d75-911ae3d1cb78-kube-api-access-56tmv\") pod \"tigera-operator-58fc44c59b-bz7sn\" (UID: \"c9f35844-72c2-417e-8d75-911ae3d1cb78\") " pod="tigera-operator/tigera-operator-58fc44c59b-bz7sn" Sep 13 01:56:58.348162 kubelet[2683]: I0913 01:56:58.347425 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c9f35844-72c2-417e-8d75-911ae3d1cb78-var-lib-calico\") pod \"tigera-operator-58fc44c59b-bz7sn\" (UID: \"c9f35844-72c2-417e-8d75-911ae3d1cb78\") " pod="tigera-operator/tigera-operator-58fc44c59b-bz7sn" Sep 13 01:56:58.405111 systemd[1]: Started 
cri-containerd-529e38e390030ea5dbe87baeda85c2cdb148dcab5bf55d42a6099d2d098988ec.scope - libcontainer container 529e38e390030ea5dbe87baeda85c2cdb148dcab5bf55d42a6099d2d098988ec. Sep 13 01:56:58.451681 containerd[1514]: time="2025-09-13T01:56:58.451439796Z" level=info msg="StartContainer for \"529e38e390030ea5dbe87baeda85c2cdb148dcab5bf55d42a6099d2d098988ec\" returns successfully" Sep 13 01:56:58.571831 containerd[1514]: time="2025-09-13T01:56:58.571631192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-bz7sn,Uid:c9f35844-72c2-417e-8d75-911ae3d1cb78,Namespace:tigera-operator,Attempt:0,}" Sep 13 01:56:58.610027 kubelet[2683]: I0913 01:56:58.609631 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jnfhn" podStartSLOduration=1.609572744 podStartE2EDuration="1.609572744s" podCreationTimestamp="2025-09-13 01:56:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:56:58.608298745 +0000 UTC m=+5.293709808" watchObservedRunningTime="2025-09-13 01:56:58.609572744 +0000 UTC m=+5.294983784" Sep 13 01:56:58.626471 containerd[1514]: time="2025-09-13T01:56:58.624700039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:56:58.626471 containerd[1514]: time="2025-09-13T01:56:58.626089024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:56:58.626471 containerd[1514]: time="2025-09-13T01:56:58.626110217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:56:58.626471 containerd[1514]: time="2025-09-13T01:56:58.626241400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:56:58.659422 systemd[1]: Started cri-containerd-babc9ed4cff993035d37c673c601c75efb739e3c6063e2680f8da3d1082a203f.scope - libcontainer container babc9ed4cff993035d37c673c601c75efb739e3c6063e2680f8da3d1082a203f. Sep 13 01:56:58.753333 containerd[1514]: time="2025-09-13T01:56:58.752762287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-bz7sn,Uid:c9f35844-72c2-417e-8d75-911ae3d1cb78,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"babc9ed4cff993035d37c673c601c75efb739e3c6063e2680f8da3d1082a203f\"" Sep 13 01:56:58.764732 containerd[1514]: time="2025-09-13T01:56:58.763959537Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 01:56:58.972672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1904486942.mount: Deactivated successfully. Sep 13 01:57:00.978466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2198725669.mount: Deactivated successfully. 
Sep 13 01:57:01.898222 containerd[1514]: time="2025-09-13T01:57:01.896125859Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:57:01.898222 containerd[1514]: time="2025-09-13T01:57:01.897428530Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 13 01:57:01.899722 containerd[1514]: time="2025-09-13T01:57:01.899689706Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:57:01.903050 containerd[1514]: time="2025-09-13T01:57:01.903017919Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:57:01.904472 containerd[1514]: time="2025-09-13T01:57:01.904428998Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 3.140013656s" Sep 13 01:57:01.904628 containerd[1514]: time="2025-09-13T01:57:01.904602818Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 13 01:57:01.910082 containerd[1514]: time="2025-09-13T01:57:01.910028128Z" level=info msg="CreateContainer within sandbox \"babc9ed4cff993035d37c673c601c75efb739e3c6063e2680f8da3d1082a203f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 13 01:57:01.928024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount309362480.mount: Deactivated successfully. 
Sep 13 01:57:01.930916 containerd[1514]: time="2025-09-13T01:57:01.930871819Z" level=info msg="CreateContainer within sandbox \"babc9ed4cff993035d37c673c601c75efb739e3c6063e2680f8da3d1082a203f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5fabed7aefd6aafecc10d6bd189864a36df9c4bd0204203ed59bafd86e8a4968\"" Sep 13 01:57:01.932029 containerd[1514]: time="2025-09-13T01:57:01.931988793Z" level=info msg="StartContainer for \"5fabed7aefd6aafecc10d6bd189864a36df9c4bd0204203ed59bafd86e8a4968\"" Sep 13 01:57:01.983419 systemd[1]: Started cri-containerd-5fabed7aefd6aafecc10d6bd189864a36df9c4bd0204203ed59bafd86e8a4968.scope - libcontainer container 5fabed7aefd6aafecc10d6bd189864a36df9c4bd0204203ed59bafd86e8a4968. Sep 13 01:57:02.025830 containerd[1514]: time="2025-09-13T01:57:02.025458881Z" level=info msg="StartContainer for \"5fabed7aefd6aafecc10d6bd189864a36df9c4bd0204203ed59bafd86e8a4968\" returns successfully" Sep 13 01:57:08.142704 sudo[1782]: pam_unix(sudo:session): session closed for user root Sep 13 01:57:08.295463 sshd[1779]: pam_unix(sshd:session): session closed for user core Sep 13 01:57:08.307429 systemd[1]: sshd@8-10.230.52.214:22-139.178.68.195:47506.service: Deactivated successfully. Sep 13 01:57:08.318041 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 01:57:08.319394 systemd[1]: session-11.scope: Consumed 6.903s CPU time, 139.7M memory peak, 0B memory swap peak. Sep 13 01:57:08.323301 systemd-logind[1495]: Session 11 logged out. Waiting for processes to exit. Sep 13 01:57:08.325958 systemd-logind[1495]: Removed session 11. 
Sep 13 01:57:13.290272 kubelet[2683]: I0913 01:57:13.288347 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-bz7sn" podStartSLOduration=12.135516922 podStartE2EDuration="15.285567095s" podCreationTimestamp="2025-09-13 01:56:58 +0000 UTC" firstStartedPulling="2025-09-13 01:56:58.756579862 +0000 UTC m=+5.441990897" lastFinishedPulling="2025-09-13 01:57:01.906630031 +0000 UTC m=+8.592041070" observedRunningTime="2025-09-13 01:57:02.613721272 +0000 UTC m=+9.299132339" watchObservedRunningTime="2025-09-13 01:57:13.285567095 +0000 UTC m=+19.970978143" Sep 13 01:57:13.313945 systemd[1]: Created slice kubepods-besteffort-pod14e19f6b_eff6_42ce_be80_e50bcb8a151e.slice - libcontainer container kubepods-besteffort-pod14e19f6b_eff6_42ce_be80_e50bcb8a151e.slice. Sep 13 01:57:13.347129 kubelet[2683]: I0913 01:57:13.346907 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zqlh\" (UniqueName: \"kubernetes.io/projected/14e19f6b-eff6-42ce-be80-e50bcb8a151e-kube-api-access-2zqlh\") pod \"calico-typha-5ff866589b-gqfqz\" (UID: \"14e19f6b-eff6-42ce-be80-e50bcb8a151e\") " pod="calico-system/calico-typha-5ff866589b-gqfqz" Sep 13 01:57:13.347129 kubelet[2683]: I0913 01:57:13.346979 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14e19f6b-eff6-42ce-be80-e50bcb8a151e-tigera-ca-bundle\") pod \"calico-typha-5ff866589b-gqfqz\" (UID: \"14e19f6b-eff6-42ce-be80-e50bcb8a151e\") " pod="calico-system/calico-typha-5ff866589b-gqfqz" Sep 13 01:57:13.347129 kubelet[2683]: I0913 01:57:13.347017 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/14e19f6b-eff6-42ce-be80-e50bcb8a151e-typha-certs\") pod \"calico-typha-5ff866589b-gqfqz\" (UID: 
\"14e19f6b-eff6-42ce-be80-e50bcb8a151e\") " pod="calico-system/calico-typha-5ff866589b-gqfqz" Sep 13 01:57:13.627425 containerd[1514]: time="2025-09-13T01:57:13.627149373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5ff866589b-gqfqz,Uid:14e19f6b-eff6-42ce-be80-e50bcb8a151e,Namespace:calico-system,Attempt:0,}" Sep 13 01:57:13.691522 systemd[1]: Created slice kubepods-besteffort-pod81eef6cf_57cf_4725_8ef2_214c32bc7826.slice - libcontainer container kubepods-besteffort-pod81eef6cf_57cf_4725_8ef2_214c32bc7826.slice. Sep 13 01:57:13.757590 containerd[1514]: time="2025-09-13T01:57:13.755661443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:57:13.757590 containerd[1514]: time="2025-09-13T01:57:13.755806340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:57:13.757590 containerd[1514]: time="2025-09-13T01:57:13.755840384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:57:13.757590 containerd[1514]: time="2025-09-13T01:57:13.756058944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:57:13.854247 kubelet[2683]: I0913 01:57:13.850537 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/81eef6cf-57cf-4725-8ef2-214c32bc7826-policysync\") pod \"calico-node-ftbch\" (UID: \"81eef6cf-57cf-4725-8ef2-214c32bc7826\") " pod="calico-system/calico-node-ftbch" Sep 13 01:57:13.854247 kubelet[2683]: I0913 01:57:13.850610 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/81eef6cf-57cf-4725-8ef2-214c32bc7826-cni-net-dir\") pod \"calico-node-ftbch\" (UID: \"81eef6cf-57cf-4725-8ef2-214c32bc7826\") " pod="calico-system/calico-node-ftbch" Sep 13 01:57:13.854247 kubelet[2683]: I0913 01:57:13.850646 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/81eef6cf-57cf-4725-8ef2-214c32bc7826-flexvol-driver-host\") pod \"calico-node-ftbch\" (UID: \"81eef6cf-57cf-4725-8ef2-214c32bc7826\") " pod="calico-system/calico-node-ftbch" Sep 13 01:57:13.854247 kubelet[2683]: I0913 01:57:13.850675 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81eef6cf-57cf-4725-8ef2-214c32bc7826-lib-modules\") pod \"calico-node-ftbch\" (UID: \"81eef6cf-57cf-4725-8ef2-214c32bc7826\") " pod="calico-system/calico-node-ftbch" Sep 13 01:57:13.854247 kubelet[2683]: I0913 01:57:13.850705 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/81eef6cf-57cf-4725-8ef2-214c32bc7826-node-certs\") pod \"calico-node-ftbch\" (UID: \"81eef6cf-57cf-4725-8ef2-214c32bc7826\") " pod="calico-system/calico-node-ftbch" Sep 13 01:57:13.854609 
kubelet[2683]: I0913 01:57:13.850729 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81eef6cf-57cf-4725-8ef2-214c32bc7826-tigera-ca-bundle\") pod \"calico-node-ftbch\" (UID: \"81eef6cf-57cf-4725-8ef2-214c32bc7826\") " pod="calico-system/calico-node-ftbch" Sep 13 01:57:13.854609 kubelet[2683]: I0913 01:57:13.850755 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81eef6cf-57cf-4725-8ef2-214c32bc7826-xtables-lock\") pod \"calico-node-ftbch\" (UID: \"81eef6cf-57cf-4725-8ef2-214c32bc7826\") " pod="calico-system/calico-node-ftbch" Sep 13 01:57:13.854609 kubelet[2683]: I0913 01:57:13.850780 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/81eef6cf-57cf-4725-8ef2-214c32bc7826-cni-bin-dir\") pod \"calico-node-ftbch\" (UID: \"81eef6cf-57cf-4725-8ef2-214c32bc7826\") " pod="calico-system/calico-node-ftbch" Sep 13 01:57:13.854609 kubelet[2683]: I0913 01:57:13.850819 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/81eef6cf-57cf-4725-8ef2-214c32bc7826-var-lib-calico\") pod \"calico-node-ftbch\" (UID: \"81eef6cf-57cf-4725-8ef2-214c32bc7826\") " pod="calico-system/calico-node-ftbch" Sep 13 01:57:13.854609 kubelet[2683]: I0913 01:57:13.850854 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/81eef6cf-57cf-4725-8ef2-214c32bc7826-var-run-calico\") pod \"calico-node-ftbch\" (UID: \"81eef6cf-57cf-4725-8ef2-214c32bc7826\") " pod="calico-system/calico-node-ftbch" Sep 13 01:57:13.854880 kubelet[2683]: I0913 01:57:13.850879 2683 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/81eef6cf-57cf-4725-8ef2-214c32bc7826-cni-log-dir\") pod \"calico-node-ftbch\" (UID: \"81eef6cf-57cf-4725-8ef2-214c32bc7826\") " pod="calico-system/calico-node-ftbch" Sep 13 01:57:13.854880 kubelet[2683]: I0913 01:57:13.850905 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x24dg\" (UniqueName: \"kubernetes.io/projected/81eef6cf-57cf-4725-8ef2-214c32bc7826-kube-api-access-x24dg\") pod \"calico-node-ftbch\" (UID: \"81eef6cf-57cf-4725-8ef2-214c32bc7826\") " pod="calico-system/calico-node-ftbch" Sep 13 01:57:13.875987 systemd[1]: Started cri-containerd-cc38461edef200094a216a44376058686a198f5a1374d8afc63fd0bf15b20bc9.scope - libcontainer container cc38461edef200094a216a44376058686a198f5a1374d8afc63fd0bf15b20bc9. Sep 13 01:57:13.885012 kubelet[2683]: E0913 01:57:13.884886 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8zdz" podUID="96f62ef8-50ce-46be-8601-56da0c0ae5a1" Sep 13 01:57:13.953221 kubelet[2683]: I0913 01:57:13.952807 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/96f62ef8-50ce-46be-8601-56da0c0ae5a1-registration-dir\") pod \"csi-node-driver-z8zdz\" (UID: \"96f62ef8-50ce-46be-8601-56da0c0ae5a1\") " pod="calico-system/csi-node-driver-z8zdz" Sep 13 01:57:13.953221 kubelet[2683]: I0913 01:57:13.952943 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/96f62ef8-50ce-46be-8601-56da0c0ae5a1-varrun\") pod \"csi-node-driver-z8zdz\" (UID: 
\"96f62ef8-50ce-46be-8601-56da0c0ae5a1\") " pod="calico-system/csi-node-driver-z8zdz" Sep 13 01:57:13.953221 kubelet[2683]: I0913 01:57:13.952976 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbq4z\" (UniqueName: \"kubernetes.io/projected/96f62ef8-50ce-46be-8601-56da0c0ae5a1-kube-api-access-bbq4z\") pod \"csi-node-driver-z8zdz\" (UID: \"96f62ef8-50ce-46be-8601-56da0c0ae5a1\") " pod="calico-system/csi-node-driver-z8zdz" Sep 13 01:57:13.953221 kubelet[2683]: I0913 01:57:13.953026 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/96f62ef8-50ce-46be-8601-56da0c0ae5a1-kubelet-dir\") pod \"csi-node-driver-z8zdz\" (UID: \"96f62ef8-50ce-46be-8601-56da0c0ae5a1\") " pod="calico-system/csi-node-driver-z8zdz" Sep 13 01:57:13.953221 kubelet[2683]: I0913 01:57:13.953074 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/96f62ef8-50ce-46be-8601-56da0c0ae5a1-socket-dir\") pod \"csi-node-driver-z8zdz\" (UID: \"96f62ef8-50ce-46be-8601-56da0c0ae5a1\") " pod="calico-system/csi-node-driver-z8zdz" Sep 13 01:57:13.966752 kubelet[2683]: E0913 01:57:13.966339 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:57:13.966752 kubelet[2683]: W0913 01:57:13.966387 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:57:13.966752 kubelet[2683]: E0913 01:57:13.966450 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:57:13.969736 kubelet[2683]: E0913 01:57:13.969520 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:57:13.969736 kubelet[2683]: W0913 01:57:13.969540 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:57:13.969736 kubelet[2683]: E0913 01:57:13.969557 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:57:13.976362 kubelet[2683]: E0913 01:57:13.971388 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:57:13.976362 kubelet[2683]: W0913 01:57:13.971413 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:57:13.977242 kubelet[2683]: E0913 01:57:13.971429 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:57:13.979346 kubelet[2683]: E0913 01:57:13.979269 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:57:13.979532 kubelet[2683]: W0913 01:57:13.979472 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:57:13.980443 kubelet[2683]: E0913 01:57:13.980285 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:57:13.981162 kubelet[2683]: E0913 01:57:13.981031 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:57:13.981162 kubelet[2683]: W0913 01:57:13.981050 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:57:13.981523 kubelet[2683]: E0913 01:57:13.981227 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:57:13.983400 kubelet[2683]: E0913 01:57:13.982827 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:57:13.983400 kubelet[2683]: W0913 01:57:13.982854 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:57:13.984717 kubelet[2683]: E0913 01:57:13.984697 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:57:13.984974 kubelet[2683]: W0913 01:57:13.984767 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:57:13.987490 kubelet[2683]: E0913 01:57:13.987366 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:57:13.987490 kubelet[2683]: W0913 01:57:13.987386 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:57:13.987865 kubelet[2683]: E0913 01:57:13.987754 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:57:13.987865 kubelet[2683]: E0913 01:57:13.987812 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:57:13.987865 kubelet[2683]: E0913 01:57:13.987837 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:57:13.988377 kubelet[2683]: E0913 01:57:13.988093 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:57:13.988377 kubelet[2683]: W0913 01:57:13.988111 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:57:13.988377 kubelet[2683]: E0913 01:57:13.988128 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:57:13.989814 kubelet[2683]: E0913 01:57:13.989493 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:57:13.989814 kubelet[2683]: W0913 01:57:13.989512 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:57:13.989814 kubelet[2683]: E0913 01:57:13.989528 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:57:13.990530 kubelet[2683]: E0913 01:57:13.990337 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:57:13.990530 kubelet[2683]: W0913 01:57:13.990356 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:57:13.990530 kubelet[2683]: E0913 01:57:13.990373 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:57:13.991740 kubelet[2683]: E0913 01:57:13.991566 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:57:13.991740 kubelet[2683]: W0913 01:57:13.991594 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:57:13.991740 kubelet[2683]: E0913 01:57:13.991612 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:57:13.992925 kubelet[2683]: E0913 01:57:13.992370 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:57:13.992925 kubelet[2683]: W0913 01:57:13.992432 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:57:13.992925 kubelet[2683]: E0913 01:57:13.992459 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:57:13.993359 kubelet[2683]: E0913 01:57:13.993341 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:57:13.993602 kubelet[2683]: W0913 01:57:13.993582 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:57:13.993864 kubelet[2683]: E0913 01:57:13.993719 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:57:13.994526 kubelet[2683]: E0913 01:57:13.994507 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:57:13.994734 kubelet[2683]: W0913 01:57:13.994673 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:57:13.994734 kubelet[2683]: E0913 01:57:13.994703 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:57:13.995503 kubelet[2683]: E0913 01:57:13.995379 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:57:13.995503 kubelet[2683]: W0913 01:57:13.995439 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:57:13.995503 kubelet[2683]: E0913 01:57:13.995470 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:57:14.002208 containerd[1514]: time="2025-09-13T01:57:14.001606364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ftbch,Uid:81eef6cf-57cf-4725-8ef2-214c32bc7826,Namespace:calico-system,Attempt:0,}" Sep 13 01:57:14.055370 kubelet[2683]: E0913 01:57:14.055324 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:57:14.055622 kubelet[2683]: W0913 01:57:14.055597 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:57:14.057257 kubelet[2683]: E0913 01:57:14.057231 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:57:14.057792 kubelet[2683]: E0913 01:57:14.057751 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:57:14.057792 kubelet[2683]: W0913 01:57:14.057769 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:57:14.058256 kubelet[2683]: E0913 01:57:14.057988 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:57:14.058444 kubelet[2683]: E0913 01:57:14.058426 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:57:14.058775 kubelet[2683]: W0913 01:57:14.058532 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:57:14.059396 kubelet[2683]: E0913 01:57:14.059339 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:57:14.060313 kubelet[2683]: E0913 01:57:14.059502 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:57:14.060313 kubelet[2683]: W0913 01:57:14.059530 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:57:14.060313 kubelet[2683]: E0913 01:57:14.060261 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:57:14.061408 kubelet[2683]: E0913 01:57:14.060873 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:57:14.061408 kubelet[2683]: W0913 01:57:14.060889 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:57:14.062027 kubelet[2683]: E0913 01:57:14.061847 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:57:14.063415 kubelet[2683]: E0913 01:57:14.063289 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:57:14.063415 kubelet[2683]: W0913 01:57:14.063309 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:57:14.063719 kubelet[2683]: E0913 01:57:14.063581 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:57:14.063988 kubelet[2683]: E0913 01:57:14.063867 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:57:14.063988 kubelet[2683]: W0913 01:57:14.063885 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:57:14.064301 kubelet[2683]: E0913 01:57:14.064121 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:57:14.064752 kubelet[2683]: E0913 01:57:14.064722 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:57:14.065435 kubelet[2683]: W0913 01:57:14.065227 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:57:14.065435 kubelet[2683]: E0913 01:57:14.065347 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:57:14.067313 kubelet[2683]: E0913 01:57:14.066181 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:57:14.067313 kubelet[2683]: W0913 01:57:14.066219 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:57:14.067313 kubelet[2683]: E0913 01:57:14.067240 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:57:14.068232 containerd[1514]: time="2025-09-13T01:57:14.065622241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:57:14.068232 containerd[1514]: time="2025-09-13T01:57:14.067513910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:57:14.068232 containerd[1514]: time="2025-09-13T01:57:14.067539086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:57:14.068414 kubelet[2683]: E0913 01:57:14.067850 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:57:14.068414 kubelet[2683]: W0913 01:57:14.067864 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:57:14.068414 kubelet[2683]: E0913 01:57:14.067957 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 01:57:14.069761 kubelet[2683]: E0913 01:57:14.069159 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:57:14.069761 kubelet[2683]: W0913 01:57:14.069178 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:57:14.071302 kubelet[2683]: E0913 01:57:14.071099 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 01:57:14.071302 kubelet[2683]: E0913 01:57:14.071219 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 01:57:14.071302 kubelet[2683]: W0913 01:57:14.071262 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 01:57:14.071682 containerd[1514]: time="2025-09-13T01:57:14.068220213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:57:14.071757 kubelet[2683]: E0913 01:57:14.071591 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 13 01:57:14.078372 kubelet[2683]: E0913 01:57:14.073508 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:14.078372 kubelet[2683]: W0913 01:57:14.073526 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:14.078372 kubelet[2683]: E0913 01:57:14.078237 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:14.078743 kubelet[2683]: E0913 01:57:14.078721 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:14.078989 kubelet[2683]: W0913 01:57:14.078826 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:14.079111 kubelet[2683]: E0913 01:57:14.079074 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:14.079425 kubelet[2683]: E0913 01:57:14.079407 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:14.079771 kubelet[2683]: W0913 01:57:14.079519 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:14.079771 kubelet[2683]: E0913 01:57:14.079574 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:14.081666 kubelet[2683]: E0913 01:57:14.080287 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:14.081666 kubelet[2683]: W0913 01:57:14.080306 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:14.081666 kubelet[2683]: E0913 01:57:14.081387 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:14.081666 kubelet[2683]: E0913 01:57:14.081510 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:14.081666 kubelet[2683]: W0913 01:57:14.081523 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:14.081666 kubelet[2683]: E0913 01:57:14.081558 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:14.082226 kubelet[2683]: E0913 01:57:14.082046 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:14.083225 kubelet[2683]: W0913 01:57:14.082340 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:14.083225 kubelet[2683]: E0913 01:57:14.082621 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:14.083441 kubelet[2683]: E0913 01:57:14.083421 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:14.084306 kubelet[2683]: W0913 01:57:14.083539 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:14.084306 kubelet[2683]: E0913 01:57:14.083640 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:14.084599 kubelet[2683]: E0913 01:57:14.084579 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:14.086229 kubelet[2683]: W0913 01:57:14.084698 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:14.086398 kubelet[2683]: E0913 01:57:14.086343 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:14.086586 kubelet[2683]: E0913 01:57:14.086568 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:14.086904 kubelet[2683]: W0913 01:57:14.086684 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:14.086904 kubelet[2683]: E0913 01:57:14.086743 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:14.087238 kubelet[2683]: E0913 01:57:14.087218 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:14.088235 kubelet[2683]: W0913 01:57:14.087428 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:14.088563 kubelet[2683]: E0913 01:57:14.088529 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:14.089412 kubelet[2683]: E0913 01:57:14.089272 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:14.089412 kubelet[2683]: W0913 01:57:14.089296 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:14.089412 kubelet[2683]: E0913 01:57:14.089330 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:14.094237 kubelet[2683]: E0913 01:57:14.092502 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:14.094237 kubelet[2683]: W0913 01:57:14.092522 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:14.094237 kubelet[2683]: E0913 01:57:14.092548 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:14.094237 kubelet[2683]: E0913 01:57:14.093277 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:14.094237 kubelet[2683]: W0913 01:57:14.093300 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:14.094237 kubelet[2683]: E0913 01:57:14.093317 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:14.124871 kubelet[2683]: E0913 01:57:14.124838 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:14.125064 kubelet[2683]: W0913 01:57:14.125040 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:14.125183 kubelet[2683]: E0913 01:57:14.125162 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:14.127491 systemd[1]: Started cri-containerd-8feebb1306fb2defc8c201a5fb67fa35c8a9d960684e8a0096a6b0d5d8816498.scope - libcontainer container 8feebb1306fb2defc8c201a5fb67fa35c8a9d960684e8a0096a6b0d5d8816498.
Sep 13 01:57:14.184365 containerd[1514]: time="2025-09-13T01:57:14.181920774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5ff866589b-gqfqz,Uid:14e19f6b-eff6-42ce-be80-e50bcb8a151e,Namespace:calico-system,Attempt:0,} returns sandbox id \"cc38461edef200094a216a44376058686a198f5a1374d8afc63fd0bf15b20bc9\""
Sep 13 01:57:14.196591 containerd[1514]: time="2025-09-13T01:57:14.196470470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 13 01:57:14.221837 containerd[1514]: time="2025-09-13T01:57:14.221395998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ftbch,Uid:81eef6cf-57cf-4725-8ef2-214c32bc7826,Namespace:calico-system,Attempt:0,} returns sandbox id \"8feebb1306fb2defc8c201a5fb67fa35c8a9d960684e8a0096a6b0d5d8816498\""
Sep 13 01:57:15.518327 kubelet[2683]: E0913 01:57:15.517431 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8zdz" podUID="96f62ef8-50ce-46be-8601-56da0c0ae5a1"
Sep 13 01:57:15.872844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2035350493.mount: Deactivated successfully.
Sep 13 01:57:17.522070 kubelet[2683]: E0913 01:57:17.521891 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8zdz" podUID="96f62ef8-50ce-46be-8601-56da0c0ae5a1"
Sep 13 01:57:17.591307 containerd[1514]: time="2025-09-13T01:57:17.590410462Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 01:57:17.611918 containerd[1514]: time="2025-09-13T01:57:17.611070615Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 01:57:17.611918 containerd[1514]: time="2025-09-13T01:57:17.611242828Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Sep 13 01:57:17.614852 containerd[1514]: time="2025-09-13T01:57:17.614077404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 01:57:17.616448 containerd[1514]: time="2025-09-13T01:57:17.615967530Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.419436457s"
Sep 13 01:57:17.616600 containerd[1514]: time="2025-09-13T01:57:17.616570729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 13 01:57:17.621623 containerd[1514]: time="2025-09-13T01:57:17.621582643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 13 01:57:17.660709 containerd[1514]: time="2025-09-13T01:57:17.660629133Z" level=info msg="CreateContainer within sandbox \"cc38461edef200094a216a44376058686a198f5a1374d8afc63fd0bf15b20bc9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 13 01:57:17.690127 containerd[1514]: time="2025-09-13T01:57:17.689665495Z" level=info msg="CreateContainer within sandbox \"cc38461edef200094a216a44376058686a198f5a1374d8afc63fd0bf15b20bc9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6e7bd8285abca7741868fb57a68f2c6cecce08a75aac797883f96f8cdcaf7958\""
Sep 13 01:57:17.691284 containerd[1514]: time="2025-09-13T01:57:17.690673006Z" level=info msg="StartContainer for \"6e7bd8285abca7741868fb57a68f2c6cecce08a75aac797883f96f8cdcaf7958\""
Sep 13 01:57:17.787493 systemd[1]: Started cri-containerd-6e7bd8285abca7741868fb57a68f2c6cecce08a75aac797883f96f8cdcaf7958.scope - libcontainer container 6e7bd8285abca7741868fb57a68f2c6cecce08a75aac797883f96f8cdcaf7958.
Sep 13 01:57:17.872999 containerd[1514]: time="2025-09-13T01:57:17.871937490Z" level=info msg="StartContainer for \"6e7bd8285abca7741868fb57a68f2c6cecce08a75aac797883f96f8cdcaf7958\" returns successfully"
Sep 13 01:57:18.692117 kubelet[2683]: I0913 01:57:18.692019 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5ff866589b-gqfqz" podStartSLOduration=2.2658605769999998 podStartE2EDuration="5.691989136s" podCreationTimestamp="2025-09-13 01:57:13 +0000 UTC" firstStartedPulling="2025-09-13 01:57:14.195284419 +0000 UTC m=+20.880695456" lastFinishedPulling="2025-09-13 01:57:17.621412979 +0000 UTC m=+24.306824015" observedRunningTime="2025-09-13 01:57:18.687992184 +0000 UTC m=+25.373403240" watchObservedRunningTime="2025-09-13 01:57:18.691989136 +0000 UTC m=+25.377400183"
Sep 13 01:57:18.694890 kubelet[2683]: E0913 01:57:18.694857 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.695035 kubelet[2683]: W0913 01:57:18.694895 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.695035 kubelet[2683]: E0913 01:57:18.694952 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.695467 kubelet[2683]: E0913 01:57:18.695408 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.695467 kubelet[2683]: W0913 01:57:18.695463 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.695987 kubelet[2683]: E0913 01:57:18.695533 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.696063 kubelet[2683]: E0913 01:57:18.695988 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.696063 kubelet[2683]: W0913 01:57:18.696002 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.696063 kubelet[2683]: E0913 01:57:18.696017 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.696526 kubelet[2683]: E0913 01:57:18.696492 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.696526 kubelet[2683]: W0913 01:57:18.696518 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.697120 kubelet[2683]: E0913 01:57:18.696534 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.697120 kubelet[2683]: E0913 01:57:18.697016 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.697667 kubelet[2683]: W0913 01:57:18.697321 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.697667 kubelet[2683]: E0913 01:57:18.697348 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.698214 kubelet[2683]: E0913 01:57:18.698011 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.698214 kubelet[2683]: W0913 01:57:18.698036 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.698214 kubelet[2683]: E0913 01:57:18.698066 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.698875 kubelet[2683]: E0913 01:57:18.698644 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.698875 kubelet[2683]: W0913 01:57:18.698662 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.698875 kubelet[2683]: E0913 01:57:18.698678 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.699324 kubelet[2683]: E0913 01:57:18.699011 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.699324 kubelet[2683]: W0913 01:57:18.699032 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.699324 kubelet[2683]: E0913 01:57:18.699048 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.699614 kubelet[2683]: E0913 01:57:18.699589 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.699784 kubelet[2683]: W0913 01:57:18.699715 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.699784 kubelet[2683]: E0913 01:57:18.699739 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.700414 kubelet[2683]: E0913 01:57:18.700257 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.700414 kubelet[2683]: W0913 01:57:18.700286 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.700414 kubelet[2683]: E0913 01:57:18.700302 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.701089 kubelet[2683]: E0913 01:57:18.700920 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.701089 kubelet[2683]: W0913 01:57:18.700938 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.701089 kubelet[2683]: E0913 01:57:18.700954 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.701492 kubelet[2683]: E0913 01:57:18.701288 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.701492 kubelet[2683]: W0913 01:57:18.701301 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.701492 kubelet[2683]: E0913 01:57:18.701315 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.702263 kubelet[2683]: E0913 01:57:18.702060 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.702263 kubelet[2683]: W0913 01:57:18.702080 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.702263 kubelet[2683]: E0913 01:57:18.702106 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.702963 kubelet[2683]: E0913 01:57:18.702545 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.702963 kubelet[2683]: W0913 01:57:18.702558 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.702963 kubelet[2683]: E0913 01:57:18.702573 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.702963 kubelet[2683]: E0913 01:57:18.702858 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.703243 kubelet[2683]: W0913 01:57:18.703155 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.703243 kubelet[2683]: E0913 01:57:18.703184 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.707560 kubelet[2683]: E0913 01:57:18.707331 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.707560 kubelet[2683]: W0913 01:57:18.707352 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.707560 kubelet[2683]: E0913 01:57:18.707369 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.707898 kubelet[2683]: E0913 01:57:18.707877 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.708006 kubelet[2683]: W0913 01:57:18.707986 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.708268 kubelet[2683]: E0913 01:57:18.708117 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.708612 kubelet[2683]: E0913 01:57:18.708593 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.708705 kubelet[2683]: W0913 01:57:18.708685 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.708930 kubelet[2683]: E0913 01:57:18.708827 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.709350 kubelet[2683]: E0913 01:57:18.709262 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.709350 kubelet[2683]: W0913 01:57:18.709280 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.709350 kubelet[2683]: E0913 01:57:18.709323 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.709911 kubelet[2683]: E0913 01:57:18.709748 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.709911 kubelet[2683]: W0913 01:57:18.709765 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.709911 kubelet[2683]: E0913 01:57:18.709807 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.710296 kubelet[2683]: E0913 01:57:18.710173 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.710296 kubelet[2683]: W0913 01:57:18.710190 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.710412 kubelet[2683]: E0913 01:57:18.710321 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.710894 kubelet[2683]: E0913 01:57:18.710688 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.710894 kubelet[2683]: W0913 01:57:18.710705 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.710894 kubelet[2683]: E0913 01:57:18.710751 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.711160 kubelet[2683]: E0913 01:57:18.711143 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.711406 kubelet[2683]: W0913 01:57:18.711254 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.711406 kubelet[2683]: E0913 01:57:18.711288 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.711585 kubelet[2683]: E0913 01:57:18.711566 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.711655 kubelet[2683]: W0913 01:57:18.711586 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.711655 kubelet[2683]: E0913 01:57:18.711610 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.711987 kubelet[2683]: E0913 01:57:18.711925 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.711987 kubelet[2683]: W0913 01:57:18.711939 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.711987 kubelet[2683]: E0913 01:57:18.711953 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.712434 kubelet[2683]: E0913 01:57:18.712262 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.712434 kubelet[2683]: W0913 01:57:18.712276 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.712434 kubelet[2683]: E0913 01:57:18.712308 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.712691 kubelet[2683]: E0913 01:57:18.712549 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.712691 kubelet[2683]: W0913 01:57:18.712562 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.713180 kubelet[2683]: E0913 01:57:18.712825 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.713180 kubelet[2683]: W0913 01:57:18.712846 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.713180 kubelet[2683]: E0913 01:57:18.712861 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.713180 kubelet[2683]: E0913 01:57:18.713151 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.713480 kubelet[2683]: E0913 01:57:18.713363 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.713480 kubelet[2683]: W0913 01:57:18.713377 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.713480 kubelet[2683]: E0913 01:57:18.713399 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.713692 kubelet[2683]: E0913 01:57:18.713673 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.713692 kubelet[2683]: W0913 01:57:18.713692 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.713880 kubelet[2683]: E0913 01:57:18.713715 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.714006 kubelet[2683]: E0913 01:57:18.713989 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.714054 kubelet[2683]: W0913 01:57:18.714006 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.714054 kubelet[2683]: E0913 01:57:18.714021 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.714337 kubelet[2683]: E0913 01:57:18.714320 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.714337 kubelet[2683]: W0913 01:57:18.714336 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.714466 kubelet[2683]: E0913 01:57:18.714350 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:18.715328 kubelet[2683]: E0913 01:57:18.715308 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 01:57:18.715328 kubelet[2683]: W0913 01:57:18.715327 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 01:57:18.715465 kubelet[2683]: E0913 01:57:18.715343 2683 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 01:57:19.233772 containerd[1514]: time="2025-09-13T01:57:19.233376929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 01:57:19.236041 containerd[1514]: time="2025-09-13T01:57:19.235987170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660"
Sep 13 01:57:19.237120 containerd[1514]: time="2025-09-13T01:57:19.237051355Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 01:57:19.240226 containerd[1514]: time="2025-09-13T01:57:19.240053010Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 01:57:19.242662 containerd[1514]: time="2025-09-13T01:57:19.241092014Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.619464346s" Sep 13 01:57:19.242662 containerd[1514]: time="2025-09-13T01:57:19.241139557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 13 01:57:19.244325 containerd[1514]: time="2025-09-13T01:57:19.244291662Z" level=info msg="CreateContainer within sandbox \"8feebb1306fb2defc8c201a5fb67fa35c8a9d960684e8a0096a6b0d5d8816498\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 13 01:57:19.277739 containerd[1514]: time="2025-09-13T01:57:19.277663742Z" level=info msg="CreateContainer within sandbox \"8feebb1306fb2defc8c201a5fb67fa35c8a9d960684e8a0096a6b0d5d8816498\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"36a2be9442284e377214008a20fc99599faea52f8bf98481bd4394acb66b3334\"" Sep 13 01:57:19.281226 containerd[1514]: time="2025-09-13T01:57:19.278435097Z" level=info msg="StartContainer for \"36a2be9442284e377214008a20fc99599faea52f8bf98481bd4394acb66b3334\"" Sep 13 01:57:19.332418 systemd[1]: Started cri-containerd-36a2be9442284e377214008a20fc99599faea52f8bf98481bd4394acb66b3334.scope - libcontainer container 36a2be9442284e377214008a20fc99599faea52f8bf98481bd4394acb66b3334. Sep 13 01:57:19.385005 containerd[1514]: time="2025-09-13T01:57:19.384946709Z" level=info msg="StartContainer for \"36a2be9442284e377214008a20fc99599faea52f8bf98481bd4394acb66b3334\" returns successfully" Sep 13 01:57:19.407174 systemd[1]: cri-containerd-36a2be9442284e377214008a20fc99599faea52f8bf98481bd4394acb66b3334.scope: Deactivated successfully. 
Sep 13 01:57:19.519250 kubelet[2683]: E0913 01:57:19.517395 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8zdz" podUID="96f62ef8-50ce-46be-8601-56da0c0ae5a1" Sep 13 01:57:19.642221 containerd[1514]: time="2025-09-13T01:57:19.622041045Z" level=info msg="shim disconnected" id=36a2be9442284e377214008a20fc99599faea52f8bf98481bd4394acb66b3334 namespace=k8s.io Sep 13 01:57:19.644873 containerd[1514]: time="2025-09-13T01:57:19.642492842Z" level=warning msg="cleaning up after shim disconnected" id=36a2be9442284e377214008a20fc99599faea52f8bf98481bd4394acb66b3334 namespace=k8s.io Sep 13 01:57:19.644873 containerd[1514]: time="2025-09-13T01:57:19.642531376Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 01:57:19.643311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36a2be9442284e377214008a20fc99599faea52f8bf98481bd4394acb66b3334-rootfs.mount: Deactivated successfully. 
Sep 13 01:57:19.678010 kubelet[2683]: I0913 01:57:19.677972 2683 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 01:57:19.681184 containerd[1514]: time="2025-09-13T01:57:19.680898738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 13 01:57:21.518273 kubelet[2683]: E0913 01:57:21.517513 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8zdz" podUID="96f62ef8-50ce-46be-8601-56da0c0ae5a1" Sep 13 01:57:23.518734 kubelet[2683]: E0913 01:57:23.518637 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8zdz" podUID="96f62ef8-50ce-46be-8601-56da0c0ae5a1" Sep 13 01:57:25.167465 containerd[1514]: time="2025-09-13T01:57:25.167372551Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:57:25.169753 containerd[1514]: time="2025-09-13T01:57:25.169406089Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 13 01:57:25.170127 containerd[1514]: time="2025-09-13T01:57:25.170059903Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:57:25.174380 containerd[1514]: time="2025-09-13T01:57:25.173855047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 
01:57:25.175074 containerd[1514]: time="2025-09-13T01:57:25.175033876Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 5.494072911s" Sep 13 01:57:25.175175 containerd[1514]: time="2025-09-13T01:57:25.175083841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 13 01:57:25.181441 containerd[1514]: time="2025-09-13T01:57:25.181384277Z" level=info msg="CreateContainer within sandbox \"8feebb1306fb2defc8c201a5fb67fa35c8a9d960684e8a0096a6b0d5d8816498\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 13 01:57:25.209086 containerd[1514]: time="2025-09-13T01:57:25.209038286Z" level=info msg="CreateContainer within sandbox \"8feebb1306fb2defc8c201a5fb67fa35c8a9d960684e8a0096a6b0d5d8816498\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c5b2e6a1bf58e0979c67abce4d78bdd156bbe3545501d523f8fc07d0f1eeb164\"" Sep 13 01:57:25.213340 containerd[1514]: time="2025-09-13T01:57:25.213271414Z" level=info msg="StartContainer for \"c5b2e6a1bf58e0979c67abce4d78bdd156bbe3545501d523f8fc07d0f1eeb164\"" Sep 13 01:57:25.283491 systemd[1]: Started cri-containerd-c5b2e6a1bf58e0979c67abce4d78bdd156bbe3545501d523f8fc07d0f1eeb164.scope - libcontainer container c5b2e6a1bf58e0979c67abce4d78bdd156bbe3545501d523f8fc07d0f1eeb164. 
Sep 13 01:57:25.344288 containerd[1514]: time="2025-09-13T01:57:25.344230979Z" level=info msg="StartContainer for \"c5b2e6a1bf58e0979c67abce4d78bdd156bbe3545501d523f8fc07d0f1eeb164\" returns successfully" Sep 13 01:57:25.518546 kubelet[2683]: E0913 01:57:25.517087 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z8zdz" podUID="96f62ef8-50ce-46be-8601-56da0c0ae5a1" Sep 13 01:57:26.426330 systemd[1]: cri-containerd-c5b2e6a1bf58e0979c67abce4d78bdd156bbe3545501d523f8fc07d0f1eeb164.scope: Deactivated successfully. Sep 13 01:57:26.477164 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5b2e6a1bf58e0979c67abce4d78bdd156bbe3545501d523f8fc07d0f1eeb164-rootfs.mount: Deactivated successfully. Sep 13 01:57:26.502467 kubelet[2683]: I0913 01:57:26.480497 2683 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 01:57:26.502467 kubelet[2683]: I0913 01:57:26.483895 2683 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 13 01:57:26.561476 containerd[1514]: time="2025-09-13T01:57:26.561314737Z" level=info msg="shim disconnected" id=c5b2e6a1bf58e0979c67abce4d78bdd156bbe3545501d523f8fc07d0f1eeb164 namespace=k8s.io Sep 13 01:57:26.562117 containerd[1514]: time="2025-09-13T01:57:26.561472869Z" level=warning msg="cleaning up after shim disconnected" id=c5b2e6a1bf58e0979c67abce4d78bdd156bbe3545501d523f8fc07d0f1eeb164 namespace=k8s.io Sep 13 01:57:26.562117 containerd[1514]: time="2025-09-13T01:57:26.561510070Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 01:57:26.626732 systemd[1]: Created slice kubepods-burstable-pod4497aabf_19b8_4111_a199_50d6361d00e3.slice - libcontainer container kubepods-burstable-pod4497aabf_19b8_4111_a199_50d6361d00e3.slice. 
Sep 13 01:57:26.653323 systemd[1]: Created slice kubepods-besteffort-podcc69bc79_4f97_4def_9cfb_3dbc4bb33d44.slice - libcontainer container kubepods-besteffort-podcc69bc79_4f97_4def_9cfb_3dbc4bb33d44.slice. Sep 13 01:57:26.664853 systemd[1]: Created slice kubepods-besteffort-pod0c8637e6_080f_42e2_bed2_e192627db354.slice - libcontainer container kubepods-besteffort-pod0c8637e6_080f_42e2_bed2_e192627db354.slice. Sep 13 01:57:26.678824 systemd[1]: Created slice kubepods-burstable-pod559e1cf7_31cd_4748_a7af_9a9c681ae085.slice - libcontainer container kubepods-burstable-pod559e1cf7_31cd_4748_a7af_9a9c681ae085.slice. Sep 13 01:57:26.684340 kubelet[2683]: W0913 01:57:26.684281 2683 reflector.go:561] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:srv-vx6h6.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'srv-vx6h6.gb1.brightbox.com' and this object Sep 13 01:57:26.687268 kubelet[2683]: W0913 01:57:26.686922 2683 reflector.go:561] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: configmaps "goldmane" is forbidden: User "system:node:srv-vx6h6.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'srv-vx6h6.gb1.brightbox.com' and this object Sep 13 01:57:26.687268 kubelet[2683]: E0913 01:57:26.687107 2683 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:srv-vx6h6.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'srv-vx6h6.gb1.brightbox.com' and this object" logger="UnhandledError" Sep 13 01:57:26.687268 kubelet[2683]: W0913 01:57:26.687234 
2683 reflector.go:561] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:srv-vx6h6.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'srv-vx6h6.gb1.brightbox.com' and this object Sep 13 01:57:26.687438 kubelet[2683]: E0913 01:57:26.687263 2683 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:srv-vx6h6.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'srv-vx6h6.gb1.brightbox.com' and this object" logger="UnhandledError" Sep 13 01:57:26.687616 kubelet[2683]: E0913 01:57:26.687578 2683 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane\" is forbidden: User \"system:node:srv-vx6h6.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'srv-vx6h6.gb1.brightbox.com' and this object" logger="UnhandledError" Sep 13 01:57:26.711283 containerd[1514]: time="2025-09-13T01:57:26.711234158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 01:57:26.712720 kubelet[2683]: W0913 01:57:26.712447 2683 reflector.go:561] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:srv-vx6h6.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'srv-vx6h6.gb1.brightbox.com' and this object Sep 13 01:57:26.712720 kubelet[2683]: E0913 01:57:26.712502 2683 reflector.go:158] "Unhandled Error" 
err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:srv-vx6h6.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'srv-vx6h6.gb1.brightbox.com' and this object" logger="UnhandledError" Sep 13 01:57:26.712720 kubelet[2683]: W0913 01:57:26.712663 2683 reflector.go:561] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-vx6h6.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'srv-vx6h6.gb1.brightbox.com' and this object Sep 13 01:57:26.712720 kubelet[2683]: E0913 01:57:26.712691 2683 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:srv-vx6h6.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'srv-vx6h6.gb1.brightbox.com' and this object" logger="UnhandledError" Sep 13 01:57:26.721696 systemd[1]: Created slice kubepods-besteffort-podbeece634_0084_4df9_841c_840e1e607f03.slice - libcontainer container kubepods-besteffort-podbeece634_0084_4df9_841c_840e1e607f03.slice. Sep 13 01:57:26.734731 systemd[1]: Created slice kubepods-besteffort-podaeca7b13_a3ef_47c3_b05a_b2c1fcfb4e19.slice - libcontainer container kubepods-besteffort-podaeca7b13_a3ef_47c3_b05a_b2c1fcfb4e19.slice. Sep 13 01:57:26.762934 systemd[1]: Created slice kubepods-besteffort-poda261ccdc_2511_4035_b432_81033de1ca03.slice - libcontainer container kubepods-besteffort-poda261ccdc_2511_4035_b432_81033de1ca03.slice. 
Sep 13 01:57:26.777234 kubelet[2683]: I0913 01:57:26.774511 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0c8637e6-080f-42e2-bed2-e192627db354-goldmane-key-pair\") pod \"goldmane-7988f88666-wjgs4\" (UID: \"0c8637e6-080f-42e2-bed2-e192627db354\") " pod="calico-system/goldmane-7988f88666-wjgs4" Sep 13 01:57:26.777234 kubelet[2683]: I0913 01:57:26.774720 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc69bc79-4f97-4def-9cfb-3dbc4bb33d44-tigera-ca-bundle\") pod \"calico-kube-controllers-5c587bb5c5-cqcsm\" (UID: \"cc69bc79-4f97-4def-9cfb-3dbc4bb33d44\") " pod="calico-system/calico-kube-controllers-5c587bb5c5-cqcsm" Sep 13 01:57:26.777234 kubelet[2683]: I0913 01:57:26.774778 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqkrb\" (UniqueName: \"kubernetes.io/projected/4497aabf-19b8-4111-a199-50d6361d00e3-kube-api-access-lqkrb\") pod \"coredns-7c65d6cfc9-pmxvt\" (UID: \"4497aabf-19b8-4111-a199-50d6361d00e3\") " pod="kube-system/coredns-7c65d6cfc9-pmxvt" Sep 13 01:57:26.777234 kubelet[2683]: I0913 01:57:26.774835 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c8637e6-080f-42e2-bed2-e192627db354-config\") pod \"goldmane-7988f88666-wjgs4\" (UID: \"0c8637e6-080f-42e2-bed2-e192627db354\") " pod="calico-system/goldmane-7988f88666-wjgs4" Sep 13 01:57:26.777234 kubelet[2683]: I0913 01:57:26.774873 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c8637e6-080f-42e2-bed2-e192627db354-goldmane-ca-bundle\") pod \"goldmane-7988f88666-wjgs4\" (UID: 
\"0c8637e6-080f-42e2-bed2-e192627db354\") " pod="calico-system/goldmane-7988f88666-wjgs4" Sep 13 01:57:26.777597 kubelet[2683]: I0913 01:57:26.774905 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvqbk\" (UniqueName: \"kubernetes.io/projected/0c8637e6-080f-42e2-bed2-e192627db354-kube-api-access-xvqbk\") pod \"goldmane-7988f88666-wjgs4\" (UID: \"0c8637e6-080f-42e2-bed2-e192627db354\") " pod="calico-system/goldmane-7988f88666-wjgs4" Sep 13 01:57:26.777597 kubelet[2683]: I0913 01:57:26.774939 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4497aabf-19b8-4111-a199-50d6361d00e3-config-volume\") pod \"coredns-7c65d6cfc9-pmxvt\" (UID: \"4497aabf-19b8-4111-a199-50d6361d00e3\") " pod="kube-system/coredns-7c65d6cfc9-pmxvt" Sep 13 01:57:26.777597 kubelet[2683]: I0913 01:57:26.774972 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/559e1cf7-31cd-4748-a7af-9a9c681ae085-config-volume\") pod \"coredns-7c65d6cfc9-68x4n\" (UID: \"559e1cf7-31cd-4748-a7af-9a9c681ae085\") " pod="kube-system/coredns-7c65d6cfc9-68x4n" Sep 13 01:57:26.777597 kubelet[2683]: I0913 01:57:26.775005 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9vbj\" (UniqueName: \"kubernetes.io/projected/559e1cf7-31cd-4748-a7af-9a9c681ae085-kube-api-access-x9vbj\") pod \"coredns-7c65d6cfc9-68x4n\" (UID: \"559e1cf7-31cd-4748-a7af-9a9c681ae085\") " pod="kube-system/coredns-7c65d6cfc9-68x4n" Sep 13 01:57:26.777597 kubelet[2683]: I0913 01:57:26.775064 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fznvq\" (UniqueName: 
\"kubernetes.io/projected/cc69bc79-4f97-4def-9cfb-3dbc4bb33d44-kube-api-access-fznvq\") pod \"calico-kube-controllers-5c587bb5c5-cqcsm\" (UID: \"cc69bc79-4f97-4def-9cfb-3dbc4bb33d44\") " pod="calico-system/calico-kube-controllers-5c587bb5c5-cqcsm" Sep 13 01:57:26.875839 kubelet[2683]: I0913 01:57:26.875759 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/aeca7b13-a3ef-47c3-b05a-b2c1fcfb4e19-calico-apiserver-certs\") pod \"calico-apiserver-845b7845d-jhdbm\" (UID: \"aeca7b13-a3ef-47c3-b05a-b2c1fcfb4e19\") " pod="calico-apiserver/calico-apiserver-845b7845d-jhdbm" Sep 13 01:57:26.876152 kubelet[2683]: I0913 01:57:26.876127 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtv6s\" (UniqueName: \"kubernetes.io/projected/beece634-0084-4df9-841c-840e1e607f03-kube-api-access-xtv6s\") pod \"calico-apiserver-845b7845d-f9ljs\" (UID: \"beece634-0084-4df9-841c-840e1e607f03\") " pod="calico-apiserver/calico-apiserver-845b7845d-f9ljs" Sep 13 01:57:26.876394 kubelet[2683]: I0913 01:57:26.876370 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkrsc\" (UniqueName: \"kubernetes.io/projected/a261ccdc-2511-4035-b432-81033de1ca03-kube-api-access-zkrsc\") pod \"whisker-5dd9c997dd-pv5mw\" (UID: \"a261ccdc-2511-4035-b432-81033de1ca03\") " pod="calico-system/whisker-5dd9c997dd-pv5mw" Sep 13 01:57:26.876616 kubelet[2683]: I0913 01:57:26.876589 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cldq\" (UniqueName: \"kubernetes.io/projected/aeca7b13-a3ef-47c3-b05a-b2c1fcfb4e19-kube-api-access-6cldq\") pod \"calico-apiserver-845b7845d-jhdbm\" (UID: \"aeca7b13-a3ef-47c3-b05a-b2c1fcfb4e19\") " pod="calico-apiserver/calico-apiserver-845b7845d-jhdbm" Sep 13 01:57:26.876788 
kubelet[2683]: I0913 01:57:26.876764 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a261ccdc-2511-4035-b432-81033de1ca03-whisker-backend-key-pair\") pod \"whisker-5dd9c997dd-pv5mw\" (UID: \"a261ccdc-2511-4035-b432-81033de1ca03\") " pod="calico-system/whisker-5dd9c997dd-pv5mw" Sep 13 01:57:26.876936 kubelet[2683]: I0913 01:57:26.876913 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a261ccdc-2511-4035-b432-81033de1ca03-whisker-ca-bundle\") pod \"whisker-5dd9c997dd-pv5mw\" (UID: \"a261ccdc-2511-4035-b432-81033de1ca03\") " pod="calico-system/whisker-5dd9c997dd-pv5mw" Sep 13 01:57:26.877067 kubelet[2683]: I0913 01:57:26.877038 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/beece634-0084-4df9-841c-840e1e607f03-calico-apiserver-certs\") pod \"calico-apiserver-845b7845d-f9ljs\" (UID: \"beece634-0084-4df9-841c-840e1e607f03\") " pod="calico-apiserver/calico-apiserver-845b7845d-f9ljs" Sep 13 01:57:26.942757 containerd[1514]: time="2025-09-13T01:57:26.942609686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pmxvt,Uid:4497aabf-19b8-4111-a199-50d6361d00e3,Namespace:kube-system,Attempt:0,}" Sep 13 01:57:26.962343 containerd[1514]: time="2025-09-13T01:57:26.962275850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c587bb5c5-cqcsm,Uid:cc69bc79-4f97-4def-9cfb-3dbc4bb33d44,Namespace:calico-system,Attempt:0,}" Sep 13 01:57:26.992494 containerd[1514]: time="2025-09-13T01:57:26.992434080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-68x4n,Uid:559e1cf7-31cd-4748-a7af-9a9c681ae085,Namespace:kube-system,Attempt:0,}" Sep 13 01:57:27.091665 
containerd[1514]: time="2025-09-13T01:57:27.090841340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dd9c997dd-pv5mw,Uid:a261ccdc-2511-4035-b432-81033de1ca03,Namespace:calico-system,Attempt:0,}" Sep 13 01:57:27.401835 containerd[1514]: time="2025-09-13T01:57:27.401656862Z" level=error msg="Failed to destroy network for sandbox \"67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:57:27.403434 containerd[1514]: time="2025-09-13T01:57:27.403275020Z" level=error msg="encountered an error cleaning up failed sandbox \"67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:57:27.403986 containerd[1514]: time="2025-09-13T01:57:27.403625278Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-68x4n,Uid:559e1cf7-31cd-4748-a7af-9a9c681ae085,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:57:27.404765 containerd[1514]: time="2025-09-13T01:57:27.404708915Z" level=error msg="Failed to destroy network for sandbox \"8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 
01:57:27.406228 containerd[1514]: time="2025-09-13T01:57:27.405109637Z" level=error msg="encountered an error cleaning up failed sandbox \"8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:57:27.406228 containerd[1514]: time="2025-09-13T01:57:27.405173749Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c587bb5c5-cqcsm,Uid:cc69bc79-4f97-4def-9cfb-3dbc4bb33d44,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:57:27.411333 kubelet[2683]: E0913 01:57:27.410544 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:57:27.411333 kubelet[2683]: E0913 01:57:27.410757 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c587bb5c5-cqcsm" Sep 13 01:57:27.411333 kubelet[2683]: E0913 
01:57:27.410818 2683 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c587bb5c5-cqcsm" Sep 13 01:57:27.413049 kubelet[2683]: E0913 01:57:27.410922 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5c587bb5c5-cqcsm_calico-system(cc69bc79-4f97-4def-9cfb-3dbc4bb33d44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5c587bb5c5-cqcsm_calico-system(cc69bc79-4f97-4def-9cfb-3dbc4bb33d44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c587bb5c5-cqcsm" podUID="cc69bc79-4f97-4def-9cfb-3dbc4bb33d44" Sep 13 01:57:27.413049 kubelet[2683]: E0913 01:57:27.411083 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:57:27.413049 kubelet[2683]: E0913 01:57:27.411139 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-68x4n" Sep 13 01:57:27.413278 containerd[1514]: time="2025-09-13T01:57:27.411835474Z" level=error msg="Failed to destroy network for sandbox \"efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:57:27.413278 containerd[1514]: time="2025-09-13T01:57:27.412593922Z" level=error msg="encountered an error cleaning up failed sandbox \"efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:57:27.413278 containerd[1514]: time="2025-09-13T01:57:27.412685238Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dd9c997dd-pv5mw,Uid:a261ccdc-2511-4035-b432-81033de1ca03,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:57:27.413493 kubelet[2683]: E0913 01:57:27.411164 2683 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-68x4n" Sep 13 01:57:27.413493 kubelet[2683]: E0913 01:57:27.411226 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-68x4n_kube-system(559e1cf7-31cd-4748-a7af-9a9c681ae085)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-68x4n_kube-system(559e1cf7-31cd-4748-a7af-9a9c681ae085)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-68x4n" podUID="559e1cf7-31cd-4748-a7af-9a9c681ae085" Sep 13 01:57:27.413493 kubelet[2683]: E0913 01:57:27.413005 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:57:27.413684 kubelet[2683]: E0913 01:57:27.413075 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5dd9c997dd-pv5mw" Sep 13 01:57:27.413684 kubelet[2683]: E0913 01:57:27.413100 2683 
kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5dd9c997dd-pv5mw" Sep 13 01:57:27.413684 kubelet[2683]: E0913 01:57:27.413166 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5dd9c997dd-pv5mw_calico-system(a261ccdc-2511-4035-b432-81033de1ca03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5dd9c997dd-pv5mw_calico-system(a261ccdc-2511-4035-b432-81033de1ca03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5dd9c997dd-pv5mw" podUID="a261ccdc-2511-4035-b432-81033de1ca03" Sep 13 01:57:27.420642 containerd[1514]: time="2025-09-13T01:57:27.418689580Z" level=error msg="Failed to destroy network for sandbox \"cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:57:27.420642 containerd[1514]: time="2025-09-13T01:57:27.420032319Z" level=error msg="encountered an error cleaning up failed sandbox \"cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:57:27.420642 containerd[1514]: time="2025-09-13T01:57:27.420107014Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pmxvt,Uid:4497aabf-19b8-4111-a199-50d6361d00e3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:57:27.420940 kubelet[2683]: E0913 01:57:27.420407 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:57:27.420940 kubelet[2683]: E0913 01:57:27.420457 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-pmxvt" Sep 13 01:57:27.420940 kubelet[2683]: E0913 01:57:27.420481 2683 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-pmxvt" Sep 13 01:57:27.421098 kubelet[2683]: E0913 01:57:27.420529 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-pmxvt_kube-system(4497aabf-19b8-4111-a199-50d6361d00e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-pmxvt_kube-system(4497aabf-19b8-4111-a199-50d6361d00e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-pmxvt" podUID="4497aabf-19b8-4111-a199-50d6361d00e3" Sep 13 01:57:27.525726 systemd[1]: Created slice kubepods-besteffort-pod96f62ef8_50ce_46be_8601_56da0c0ae5a1.slice - libcontainer container kubepods-besteffort-pod96f62ef8_50ce_46be_8601_56da0c0ae5a1.slice. 
Sep 13 01:57:27.530134 containerd[1514]: time="2025-09-13T01:57:27.529691704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z8zdz,Uid:96f62ef8-50ce-46be-8601-56da0c0ae5a1,Namespace:calico-system,Attempt:0,}" Sep 13 01:57:27.624091 containerd[1514]: time="2025-09-13T01:57:27.624028320Z" level=error msg="Failed to destroy network for sandbox \"ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:57:27.624670 containerd[1514]: time="2025-09-13T01:57:27.624498064Z" level=error msg="encountered an error cleaning up failed sandbox \"ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:57:27.624670 containerd[1514]: time="2025-09-13T01:57:27.624571071Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z8zdz,Uid:96f62ef8-50ce-46be-8601-56da0c0ae5a1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:57:27.627086 kubelet[2683]: E0913 01:57:27.624882 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:57:27.627086 kubelet[2683]: E0913 01:57:27.624958 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z8zdz" Sep 13 01:57:27.627086 kubelet[2683]: E0913 01:57:27.625053 2683 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z8zdz" Sep 13 01:57:27.627417 kubelet[2683]: E0913 01:57:27.625137 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z8zdz_calico-system(96f62ef8-50ce-46be-8601-56da0c0ae5a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z8zdz_calico-system(96f62ef8-50ce-46be-8601-56da0c0ae5a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z8zdz" podUID="96f62ef8-50ce-46be-8601-56da0c0ae5a1" Sep 13 01:57:27.627920 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2-shm.mount: Deactivated successfully. Sep 13 01:57:27.712522 kubelet[2683]: I0913 01:57:27.711836 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" Sep 13 01:57:27.716923 kubelet[2683]: I0913 01:57:27.716045 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" Sep 13 01:57:27.758596 kubelet[2683]: I0913 01:57:27.758295 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" Sep 13 01:57:27.764447 kubelet[2683]: I0913 01:57:27.763110 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" Sep 13 01:57:27.767695 kubelet[2683]: I0913 01:57:27.767668 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" Sep 13 01:57:27.804288 containerd[1514]: time="2025-09-13T01:57:27.804143860Z" level=info msg="StopPodSandbox for \"cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00\"" Sep 13 01:57:27.806280 containerd[1514]: time="2025-09-13T01:57:27.804973708Z" level=info msg="StopPodSandbox for \"ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2\"" Sep 13 01:57:27.811803 containerd[1514]: time="2025-09-13T01:57:27.811460493Z" level=info msg="StopPodSandbox for \"67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f\"" Sep 13 01:57:27.811803 containerd[1514]: time="2025-09-13T01:57:27.811510341Z" level=info msg="StopPodSandbox for \"8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814\"" Sep 13 01:57:27.811803 
containerd[1514]: time="2025-09-13T01:57:27.811729087Z" level=info msg="Ensure that sandbox 8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814 in task-service has been cleanup successfully" Sep 13 01:57:27.811803 containerd[1514]: time="2025-09-13T01:57:27.811776996Z" level=info msg="Ensure that sandbox cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00 in task-service has been cleanup successfully" Sep 13 01:57:27.815305 containerd[1514]: time="2025-09-13T01:57:27.811737527Z" level=info msg="Ensure that sandbox 67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f in task-service has been cleanup successfully" Sep 13 01:57:27.815305 containerd[1514]: time="2025-09-13T01:57:27.815045442Z" level=info msg="Ensure that sandbox ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2 in task-service has been cleanup successfully" Sep 13 01:57:27.818696 containerd[1514]: time="2025-09-13T01:57:27.818642176Z" level=info msg="StopPodSandbox for \"efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449\"" Sep 13 01:57:27.819458 containerd[1514]: time="2025-09-13T01:57:27.819126858Z" level=info msg="Ensure that sandbox efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449 in task-service has been cleanup successfully" Sep 13 01:57:27.879208 kubelet[2683]: E0913 01:57:27.879125 2683 configmap.go:193] Couldn't get configMap calico-system/goldmane: failed to sync configmap cache: timed out waiting for the condition Sep 13 01:57:27.879398 kubelet[2683]: E0913 01:57:27.879305 2683 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0c8637e6-080f-42e2-bed2-e192627db354-config podName:0c8637e6-080f-42e2-bed2-e192627db354 nodeName:}" failed. No retries permitted until 2025-09-13 01:57:28.379261356 +0000 UTC m=+35.064672397 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0c8637e6-080f-42e2-bed2-e192627db354-config") pod "goldmane-7988f88666-wjgs4" (UID: "0c8637e6-080f-42e2-bed2-e192627db354") : failed to sync configmap cache: timed out waiting for the condition Sep 13 01:57:27.879828 kubelet[2683]: E0913 01:57:27.879523 2683 configmap.go:193] Couldn't get configMap calico-system/goldmane-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Sep 13 01:57:27.879828 kubelet[2683]: E0913 01:57:27.879580 2683 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0c8637e6-080f-42e2-bed2-e192627db354-goldmane-ca-bundle podName:0c8637e6-080f-42e2-bed2-e192627db354 nodeName:}" failed. No retries permitted until 2025-09-13 01:57:28.379565223 +0000 UTC m=+35.064976256 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "goldmane-ca-bundle" (UniqueName: "kubernetes.io/configmap/0c8637e6-080f-42e2-bed2-e192627db354-goldmane-ca-bundle") pod "goldmane-7988f88666-wjgs4" (UID: "0c8637e6-080f-42e2-bed2-e192627db354") : failed to sync configmap cache: timed out waiting for the condition Sep 13 01:57:27.964795 containerd[1514]: time="2025-09-13T01:57:27.964113596Z" level=error msg="StopPodSandbox for \"67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f\" failed" error="failed to destroy network for sandbox \"67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:57:27.966119 kubelet[2683]: E0913 01:57:27.964507 2683 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" Sep 13 01:57:27.966119 kubelet[2683]: E0913 01:57:27.964613 2683 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f"} Sep 13 01:57:27.966119 kubelet[2683]: E0913 01:57:27.965969 2683 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"559e1cf7-31cd-4748-a7af-9a9c681ae085\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 01:57:27.966119 kubelet[2683]: E0913 01:57:27.966009 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"559e1cf7-31cd-4748-a7af-9a9c681ae085\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-68x4n" podUID="559e1cf7-31cd-4748-a7af-9a9c681ae085" Sep 13 01:57:27.967127 kubelet[2683]: E0913 01:57:27.966558 2683 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" Sep 13 01:57:27.967127 kubelet[2683]: E0913 01:57:27.966625 2683 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2"} Sep 13 01:57:27.967127 kubelet[2683]: E0913 01:57:27.966657 2683 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"96f62ef8-50ce-46be-8601-56da0c0ae5a1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 01:57:27.967127 kubelet[2683]: E0913 01:57:27.966683 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"96f62ef8-50ce-46be-8601-56da0c0ae5a1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z8zdz" podUID="96f62ef8-50ce-46be-8601-56da0c0ae5a1" Sep 13 01:57:27.967474 containerd[1514]: time="2025-09-13T01:57:27.966315736Z" level=error msg="StopPodSandbox for \"ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2\" failed" error="failed to destroy network for sandbox \"ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:57:27.967554 containerd[1514]: time="2025-09-13T01:57:27.967516419Z" level=error msg="StopPodSandbox for \"cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00\" failed" error="failed to destroy network for sandbox \"cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:57:27.967930 kubelet[2683]: E0913 01:57:27.967754 2683 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" Sep 13 01:57:27.967930 kubelet[2683]: E0913 01:57:27.967793 2683 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00"} Sep 13 01:57:27.967930 kubelet[2683]: E0913 01:57:27.967852 2683 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4497aabf-19b8-4111-a199-50d6361d00e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 01:57:27.967930 kubelet[2683]: E0913 01:57:27.967878 2683 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"KillPodSandbox\" for \"4497aabf-19b8-4111-a199-50d6361d00e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-pmxvt" podUID="4497aabf-19b8-4111-a199-50d6361d00e3" Sep 13 01:57:27.971867 containerd[1514]: time="2025-09-13T01:57:27.971652669Z" level=error msg="StopPodSandbox for \"8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814\" failed" error="failed to destroy network for sandbox \"8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:57:27.972283 kubelet[2683]: E0913 01:57:27.971841 2683 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" Sep 13 01:57:27.972283 kubelet[2683]: E0913 01:57:27.972074 2683 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814"} Sep 13 01:57:27.972283 kubelet[2683]: E0913 01:57:27.972112 2683 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cc69bc79-4f97-4def-9cfb-3dbc4bb33d44\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 01:57:27.972283 kubelet[2683]: E0913 01:57:27.972139 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cc69bc79-4f97-4def-9cfb-3dbc4bb33d44\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c587bb5c5-cqcsm" podUID="cc69bc79-4f97-4def-9cfb-3dbc4bb33d44" Sep 13 01:57:27.978793 containerd[1514]: time="2025-09-13T01:57:27.978385583Z" level=error msg="StopPodSandbox for \"efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449\" failed" error="failed to destroy network for sandbox \"efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 01:57:27.978926 kubelet[2683]: E0913 01:57:27.978607 2683 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" Sep 13 01:57:27.978926 kubelet[2683]: E0913 01:57:27.978660 2683 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449"} Sep 13 01:57:27.978926 kubelet[2683]: E0913 01:57:27.978705 2683 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a261ccdc-2511-4035-b432-81033de1ca03\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 01:57:27.978926 kubelet[2683]: E0913 01:57:27.978735 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a261ccdc-2511-4035-b432-81033de1ca03\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5dd9c997dd-pv5mw" podUID="a261ccdc-2511-4035-b432-81033de1ca03" Sep 13 01:57:27.984155 kubelet[2683]: E0913 01:57:27.983753 2683 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Sep 13 01:57:27.984155 kubelet[2683]: E0913 01:57:27.983879 2683 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/beece634-0084-4df9-841c-840e1e607f03-calico-apiserver-certs podName:beece634-0084-4df9-841c-840e1e607f03 nodeName:}" failed. 
No retries permitted until 2025-09-13 01:57:28.483856744 +0000 UTC m=+35.169267777 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/beece634-0084-4df9-841c-840e1e607f03-calico-apiserver-certs") pod "calico-apiserver-845b7845d-f9ljs" (UID: "beece634-0084-4df9-841c-840e1e607f03") : failed to sync secret cache: timed out waiting for the condition Sep 13 01:57:27.985041 kubelet[2683]: E0913 01:57:27.985016 2683 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Sep 13 01:57:27.985332 kubelet[2683]: E0913 01:57:27.985275 2683 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aeca7b13-a3ef-47c3-b05a-b2c1fcfb4e19-calico-apiserver-certs podName:aeca7b13-a3ef-47c3-b05a-b2c1fcfb4e19 nodeName:}" failed. No retries permitted until 2025-09-13 01:57:28.4852568 +0000 UTC m=+35.170667840 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/aeca7b13-a3ef-47c3-b05a-b2c1fcfb4e19-calico-apiserver-certs") pod "calico-apiserver-845b7845d-jhdbm" (UID: "aeca7b13-a3ef-47c3-b05a-b2c1fcfb4e19") : failed to sync secret cache: timed out waiting for the condition
Sep 13 01:57:28.030836 kubelet[2683]: E0913 01:57:28.030293 2683 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Sep 13 01:57:28.030836 kubelet[2683]: E0913 01:57:28.030351 2683 projected.go:194] Error preparing data for projected volume kube-api-access-6cldq for pod calico-apiserver/calico-apiserver-845b7845d-jhdbm: failed to sync configmap cache: timed out waiting for the condition
Sep 13 01:57:28.030836 kubelet[2683]: E0913 01:57:28.030435 2683 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/aeca7b13-a3ef-47c3-b05a-b2c1fcfb4e19-kube-api-access-6cldq podName:aeca7b13-a3ef-47c3-b05a-b2c1fcfb4e19 nodeName:}" failed. No retries permitted until 2025-09-13 01:57:28.53041333 +0000 UTC m=+35.215824375 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6cldq" (UniqueName: "kubernetes.io/projected/aeca7b13-a3ef-47c3-b05a-b2c1fcfb4e19-kube-api-access-6cldq") pod "calico-apiserver-845b7845d-jhdbm" (UID: "aeca7b13-a3ef-47c3-b05a-b2c1fcfb4e19") : failed to sync configmap cache: timed out waiting for the condition
Sep 13 01:57:28.032133 kubelet[2683]: E0913 01:57:28.032105 2683 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Sep 13 01:57:28.032133 kubelet[2683]: E0913 01:57:28.032140 2683 projected.go:194] Error preparing data for projected volume kube-api-access-xtv6s for pod calico-apiserver/calico-apiserver-845b7845d-f9ljs: failed to sync configmap cache: timed out waiting for the condition
Sep 13 01:57:28.032312 kubelet[2683]: E0913 01:57:28.032202 2683 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/beece634-0084-4df9-841c-840e1e607f03-kube-api-access-xtv6s podName:beece634-0084-4df9-841c-840e1e607f03 nodeName:}" failed. No retries permitted until 2025-09-13 01:57:28.532172597 +0000 UTC m=+35.217583630 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xtv6s" (UniqueName: "kubernetes.io/projected/beece634-0084-4df9-841c-840e1e607f03-kube-api-access-xtv6s") pod "calico-apiserver-845b7845d-f9ljs" (UID: "beece634-0084-4df9-841c-840e1e607f03") : failed to sync configmap cache: timed out waiting for the condition
Sep 13 01:57:28.474075 containerd[1514]: time="2025-09-13T01:57:28.473996627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-wjgs4,Uid:0c8637e6-080f-42e2-bed2-e192627db354,Namespace:calico-system,Attempt:0,}"
Sep 13 01:57:28.607656 containerd[1514]: time="2025-09-13T01:57:28.607491824Z" level=error msg="Failed to destroy network for sandbox \"56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:57:28.609936 containerd[1514]: time="2025-09-13T01:57:28.608108532Z" level=error msg="encountered an error cleaning up failed sandbox \"56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:57:28.609936 containerd[1514]: time="2025-09-13T01:57:28.608181350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-wjgs4,Uid:0c8637e6-080f-42e2-bed2-e192627db354,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:57:28.610178 kubelet[2683]: E0913 01:57:28.610142 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:57:28.610369 kubelet[2683]: E0913 01:57:28.610329 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-wjgs4"
Sep 13 01:57:28.610484 kubelet[2683]: E0913 01:57:28.610458 2683 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-wjgs4"
Sep 13 01:57:28.613331 kubelet[2683]: E0913 01:57:28.613286 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-wjgs4_calico-system(0c8637e6-080f-42e2-bed2-e192627db354)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-wjgs4_calico-system(0c8637e6-080f-42e2-bed2-e192627db354)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-wjgs4" podUID="0c8637e6-080f-42e2-bed2-e192627db354"
Sep 13 01:57:28.773384 kubelet[2683]: I0913 01:57:28.773256 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95"
Sep 13 01:57:28.774504 containerd[1514]: time="2025-09-13T01:57:28.774390323Z" level=info msg="StopPodSandbox for \"56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95\""
Sep 13 01:57:28.774850 containerd[1514]: time="2025-09-13T01:57:28.774636656Z" level=info msg="Ensure that sandbox 56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95 in task-service has been cleanup successfully"
Sep 13 01:57:28.816229 containerd[1514]: time="2025-09-13T01:57:28.814396862Z" level=error msg="StopPodSandbox for \"56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95\" failed" error="failed to destroy network for sandbox \"56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:57:28.817177 kubelet[2683]: E0913 01:57:28.817111 2683 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95"
Sep 13 01:57:28.817341 kubelet[2683]: E0913 01:57:28.817187 2683 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95"}
Sep 13 01:57:28.817341 kubelet[2683]: E0913 01:57:28.817258 2683 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0c8637e6-080f-42e2-bed2-e192627db354\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 13 01:57:28.817566 kubelet[2683]: E0913 01:57:28.817336 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0c8637e6-080f-42e2-bed2-e192627db354\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-wjgs4" podUID="0c8637e6-080f-42e2-bed2-e192627db354"
Sep 13 01:57:28.831592 containerd[1514]: time="2025-09-13T01:57:28.831558342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-845b7845d-f9ljs,Uid:beece634-0084-4df9-841c-840e1e607f03,Namespace:calico-apiserver,Attempt:0,}"
Sep 13 01:57:28.880337 containerd[1514]: time="2025-09-13T01:57:28.880289498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-845b7845d-jhdbm,Uid:aeca7b13-a3ef-47c3-b05a-b2c1fcfb4e19,Namespace:calico-apiserver,Attempt:0,}"
Sep 13 01:57:28.946456 containerd[1514]: time="2025-09-13T01:57:28.946388906Z" level=error msg="Failed to destroy network for sandbox \"16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:57:28.947249 containerd[1514]: time="2025-09-13T01:57:28.947135738Z" level=error msg="encountered an error cleaning up failed sandbox \"16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:57:28.947489 containerd[1514]: time="2025-09-13T01:57:28.947445846Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-845b7845d-f9ljs,Uid:beece634-0084-4df9-841c-840e1e607f03,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:57:28.947929 kubelet[2683]: E0913 01:57:28.947858 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:57:28.948059 kubelet[2683]: E0913 01:57:28.947966 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-845b7845d-f9ljs"
Sep 13 01:57:28.948059 kubelet[2683]: E0913 01:57:28.947996 2683 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-845b7845d-f9ljs"
Sep 13 01:57:28.948437 kubelet[2683]: E0913 01:57:28.948066 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-845b7845d-f9ljs_calico-apiserver(beece634-0084-4df9-841c-840e1e607f03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-845b7845d-f9ljs_calico-apiserver(beece634-0084-4df9-841c-840e1e607f03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-845b7845d-f9ljs" podUID="beece634-0084-4df9-841c-840e1e607f03"
Sep 13 01:57:29.030857 containerd[1514]: time="2025-09-13T01:57:29.030604715Z" level=error msg="Failed to destroy network for sandbox \"d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:57:29.032680 containerd[1514]: time="2025-09-13T01:57:29.032482050Z" level=error msg="encountered an error cleaning up failed sandbox \"d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:57:29.032680 containerd[1514]: time="2025-09-13T01:57:29.032553050Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-845b7845d-jhdbm,Uid:aeca7b13-a3ef-47c3-b05a-b2c1fcfb4e19,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:57:29.033110 kubelet[2683]: E0913 01:57:29.033043 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:57:29.033267 kubelet[2683]: E0913 01:57:29.033169 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-845b7845d-jhdbm"
Sep 13 01:57:29.033332 kubelet[2683]: E0913 01:57:29.033256 2683 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-845b7845d-jhdbm"
Sep 13 01:57:29.033419 kubelet[2683]: E0913 01:57:29.033382 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-845b7845d-jhdbm_calico-apiserver(aeca7b13-a3ef-47c3-b05a-b2c1fcfb4e19)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-845b7845d-jhdbm_calico-apiserver(aeca7b13-a3ef-47c3-b05a-b2c1fcfb4e19)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-845b7845d-jhdbm" podUID="aeca7b13-a3ef-47c3-b05a-b2c1fcfb4e19"
Sep 13 01:57:29.493685 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95-shm.mount: Deactivated successfully.
Sep 13 01:57:29.780094 kubelet[2683]: I0913 01:57:29.779263 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa"
Sep 13 01:57:29.781662 containerd[1514]: time="2025-09-13T01:57:29.781003929Z" level=info msg="StopPodSandbox for \"16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa\""
Sep 13 01:57:29.781662 containerd[1514]: time="2025-09-13T01:57:29.781285716Z" level=info msg="Ensure that sandbox 16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa in task-service has been cleanup successfully"
Sep 13 01:57:29.786810 kubelet[2683]: I0913 01:57:29.786527 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226"
Sep 13 01:57:29.788387 containerd[1514]: time="2025-09-13T01:57:29.788119506Z" level=info msg="StopPodSandbox for \"d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226\""
Sep 13 01:57:29.788930 containerd[1514]: time="2025-09-13T01:57:29.788868628Z" level=info msg="Ensure that sandbox d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226 in task-service has been cleanup successfully"
Sep 13 01:57:29.845568 containerd[1514]: time="2025-09-13T01:57:29.845499770Z" level=error msg="StopPodSandbox for \"16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa\" failed" error="failed to destroy network for sandbox \"16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:57:29.847245 kubelet[2683]: E0913 01:57:29.846249 2683 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa"
Sep 13 01:57:29.847245 kubelet[2683]: E0913 01:57:29.846323 2683 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa"}
Sep 13 01:57:29.847245 kubelet[2683]: E0913 01:57:29.846389 2683 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"beece634-0084-4df9-841c-840e1e607f03\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 13 01:57:29.847245 kubelet[2683]: E0913 01:57:29.846475 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"beece634-0084-4df9-841c-840e1e607f03\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-845b7845d-f9ljs" podUID="beece634-0084-4df9-841c-840e1e607f03"
Sep 13 01:57:29.862836 containerd[1514]: time="2025-09-13T01:57:29.862775331Z" level=error msg="StopPodSandbox for \"d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226\" failed" error="failed to destroy network for sandbox \"d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 01:57:29.863419 kubelet[2683]: E0913 01:57:29.863211 2683 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226"
Sep 13 01:57:29.863419 kubelet[2683]: E0913 01:57:29.863286 2683 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226"}
Sep 13 01:57:29.863419 kubelet[2683]: E0913 01:57:29.863328 2683 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aeca7b13-a3ef-47c3-b05a-b2c1fcfb4e19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 13 01:57:29.863419 kubelet[2683]: E0913 01:57:29.863372 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aeca7b13-a3ef-47c3-b05a-b2c1fcfb4e19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-845b7845d-jhdbm" podUID="aeca7b13-a3ef-47c3-b05a-b2c1fcfb4e19"
Sep 13 01:57:36.673278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2817097501.mount: Deactivated successfully.
Sep 13 01:57:36.771130 containerd[1514]: time="2025-09-13T01:57:36.768402588Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 01:57:36.771130 containerd[1514]: time="2025-09-13T01:57:36.749350934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339"
Sep 13 01:57:36.812858 containerd[1514]: time="2025-09-13T01:57:36.812428629Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 10.099404118s"
Sep 13 01:57:36.812858 containerd[1514]: time="2025-09-13T01:57:36.812653721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\""
Sep 13 01:57:36.814911 containerd[1514]: time="2025-09-13T01:57:36.814860031Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 01:57:36.818432 containerd[1514]: time="2025-09-13T01:57:36.817663618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 01:57:36.849566 containerd[1514]: time="2025-09-13T01:57:36.849509165Z" level=info msg="CreateContainer within sandbox \"8feebb1306fb2defc8c201a5fb67fa35c8a9d960684e8a0096a6b0d5d8816498\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Sep 13 01:57:36.895968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2751227898.mount: Deactivated successfully.
Sep 13 01:57:36.911473 containerd[1514]: time="2025-09-13T01:57:36.911404359Z" level=info msg="CreateContainer within sandbox \"8feebb1306fb2defc8c201a5fb67fa35c8a9d960684e8a0096a6b0d5d8816498\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d99df2c38b41a1bc94d79ecf4d7296b727cda229eba3d9d1a9d999daba0736cf\""
Sep 13 01:57:36.913042 containerd[1514]: time="2025-09-13T01:57:36.912937310Z" level=info msg="StartContainer for \"d99df2c38b41a1bc94d79ecf4d7296b727cda229eba3d9d1a9d999daba0736cf\""
Sep 13 01:57:37.133440 systemd[1]: Started cri-containerd-d99df2c38b41a1bc94d79ecf4d7296b727cda229eba3d9d1a9d999daba0736cf.scope - libcontainer container d99df2c38b41a1bc94d79ecf4d7296b727cda229eba3d9d1a9d999daba0736cf.
Sep 13 01:57:37.213366 containerd[1514]: time="2025-09-13T01:57:37.213154314Z" level=info msg="StartContainer for \"d99df2c38b41a1bc94d79ecf4d7296b727cda229eba3d9d1a9d999daba0736cf\" returns successfully"
Sep 13 01:57:37.407462 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Sep 13 01:57:37.408489 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Sep 13 01:57:37.767104 containerd[1514]: time="2025-09-13T01:57:37.766902763Z" level=info msg="StopPodSandbox for \"efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449\""
Sep 13 01:57:38.098662 kubelet[2683]: I0913 01:57:38.082303 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ftbch" podStartSLOduration=2.478159661 podStartE2EDuration="25.065957398s" podCreationTimestamp="2025-09-13 01:57:13 +0000 UTC" firstStartedPulling="2025-09-13 01:57:14.227644366 +0000 UTC m=+20.913055399" lastFinishedPulling="2025-09-13 01:57:36.815442098 +0000 UTC m=+43.500853136" observedRunningTime="2025-09-13 01:57:37.963353689 +0000 UTC m=+44.648764755" watchObservedRunningTime="2025-09-13 01:57:38.065957398 +0000 UTC m=+44.751368441"
Sep 13 01:57:38.444407 containerd[1514]: 2025-09-13 01:57:38.068 [INFO][3848] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449"
Sep 13 01:57:38.444407 containerd[1514]: 2025-09-13 01:57:38.071 [INFO][3848] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" iface="eth0" netns="/var/run/netns/cni-445ae0d3-2cea-ce29-7db2-10314b698c7a"
Sep 13 01:57:38.444407 containerd[1514]: 2025-09-13 01:57:38.072 [INFO][3848] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" iface="eth0" netns="/var/run/netns/cni-445ae0d3-2cea-ce29-7db2-10314b698c7a"
Sep 13 01:57:38.444407 containerd[1514]: 2025-09-13 01:57:38.074 [INFO][3848] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" iface="eth0" netns="/var/run/netns/cni-445ae0d3-2cea-ce29-7db2-10314b698c7a"
Sep 13 01:57:38.444407 containerd[1514]: 2025-09-13 01:57:38.074 [INFO][3848] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449"
Sep 13 01:57:38.444407 containerd[1514]: 2025-09-13 01:57:38.074 [INFO][3848] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449"
Sep 13 01:57:38.444407 containerd[1514]: 2025-09-13 01:57:38.419 [INFO][3855] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" HandleID="k8s-pod-network.efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" Workload="srv--vx6h6.gb1.brightbox.com-k8s-whisker--5dd9c997dd--pv5mw-eth0"
Sep 13 01:57:38.444407 containerd[1514]: 2025-09-13 01:57:38.421 [INFO][3855] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 01:57:38.444407 containerd[1514]: 2025-09-13 01:57:38.421 [INFO][3855] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 01:57:38.444407 containerd[1514]: 2025-09-13 01:57:38.434 [WARNING][3855] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" HandleID="k8s-pod-network.efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" Workload="srv--vx6h6.gb1.brightbox.com-k8s-whisker--5dd9c997dd--pv5mw-eth0"
Sep 13 01:57:38.444407 containerd[1514]: 2025-09-13 01:57:38.434 [INFO][3855] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" HandleID="k8s-pod-network.efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" Workload="srv--vx6h6.gb1.brightbox.com-k8s-whisker--5dd9c997dd--pv5mw-eth0"
Sep 13 01:57:38.444407 containerd[1514]: 2025-09-13 01:57:38.437 [INFO][3855] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 01:57:38.444407 containerd[1514]: 2025-09-13 01:57:38.439 [INFO][3848] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449"
Sep 13 01:57:38.447009 systemd[1]: run-netns-cni\x2d445ae0d3\x2d2cea\x2dce29\x2d7db2\x2d10314b698c7a.mount: Deactivated successfully.
Sep 13 01:57:38.452121 containerd[1514]: time="2025-09-13T01:57:38.452023190Z" level=info msg="TearDown network for sandbox \"efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449\" successfully"
Sep 13 01:57:38.452237 containerd[1514]: time="2025-09-13T01:57:38.452123480Z" level=info msg="StopPodSandbox for \"efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449\" returns successfully"
Sep 13 01:57:38.521553 containerd[1514]: time="2025-09-13T01:57:38.521503028Z" level=info msg="StopPodSandbox for \"67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f\""
Sep 13 01:57:38.609367 kubelet[2683]: I0913 01:57:38.609216 2683 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a261ccdc-2511-4035-b432-81033de1ca03-whisker-backend-key-pair\") pod \"a261ccdc-2511-4035-b432-81033de1ca03\" (UID: \"a261ccdc-2511-4035-b432-81033de1ca03\") "
Sep 13 01:57:38.613241 kubelet[2683]: I0913 01:57:38.613154 2683 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a261ccdc-2511-4035-b432-81033de1ca03-whisker-ca-bundle\") pod \"a261ccdc-2511-4035-b432-81033de1ca03\" (UID: \"a261ccdc-2511-4035-b432-81033de1ca03\") "
Sep 13 01:57:38.613400 kubelet[2683]: I0913 01:57:38.613352 2683 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkrsc\" (UniqueName: \"kubernetes.io/projected/a261ccdc-2511-4035-b432-81033de1ca03-kube-api-access-zkrsc\") pod \"a261ccdc-2511-4035-b432-81033de1ca03\" (UID: \"a261ccdc-2511-4035-b432-81033de1ca03\") "
Sep 13 01:57:38.629243 kubelet[2683]: I0913 01:57:38.627652 2683 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a261ccdc-2511-4035-b432-81033de1ca03-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a261ccdc-2511-4035-b432-81033de1ca03" (UID: "a261ccdc-2511-4035-b432-81033de1ca03"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 13 01:57:38.632418 kubelet[2683]: I0913 01:57:38.632373 2683 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a261ccdc-2511-4035-b432-81033de1ca03-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a261ccdc-2511-4035-b432-81033de1ca03" (UID: "a261ccdc-2511-4035-b432-81033de1ca03"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 13 01:57:38.634536 systemd[1]: var-lib-kubelet-pods-a261ccdc\x2d2511\x2d4035\x2db432\x2d81033de1ca03-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Sep 13 01:57:38.634895 kubelet[2683]: I0913 01:57:38.634688 2683 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a261ccdc-2511-4035-b432-81033de1ca03-kube-api-access-zkrsc" (OuterVolumeSpecName: "kube-api-access-zkrsc") pod "a261ccdc-2511-4035-b432-81033de1ca03" (UID: "a261ccdc-2511-4035-b432-81033de1ca03"). InnerVolumeSpecName "kube-api-access-zkrsc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 13 01:57:38.640315 systemd[1]: var-lib-kubelet-pods-a261ccdc\x2d2511\x2d4035\x2db432\x2d81033de1ca03-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzkrsc.mount: Deactivated successfully.
Sep 13 01:57:38.670604 containerd[1514]: 2025-09-13 01:57:38.598 [INFO][3903] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f"
Sep 13 01:57:38.670604 containerd[1514]: 2025-09-13 01:57:38.598 [INFO][3903] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" iface="eth0" netns="/var/run/netns/cni-54514348-d497-5b7c-8c9e-9131197cedee"
Sep 13 01:57:38.670604 containerd[1514]: 2025-09-13 01:57:38.600 [INFO][3903] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" iface="eth0" netns="/var/run/netns/cni-54514348-d497-5b7c-8c9e-9131197cedee"
Sep 13 01:57:38.670604 containerd[1514]: 2025-09-13 01:57:38.602 [INFO][3903] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" iface="eth0" netns="/var/run/netns/cni-54514348-d497-5b7c-8c9e-9131197cedee"
Sep 13 01:57:38.670604 containerd[1514]: 2025-09-13 01:57:38.602 [INFO][3903] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f"
Sep 13 01:57:38.670604 containerd[1514]: 2025-09-13 01:57:38.602 [INFO][3903] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f"
Sep 13 01:57:38.670604 containerd[1514]: 2025-09-13 01:57:38.653 [INFO][3910] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" HandleID="k8s-pod-network.67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" Workload="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--68x4n-eth0"
Sep 13 01:57:38.670604 containerd[1514]: 2025-09-13 01:57:38.654 [INFO][3910] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 01:57:38.670604 containerd[1514]: 2025-09-13 01:57:38.654 [INFO][3910] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 01:57:38.670604 containerd[1514]: 2025-09-13 01:57:38.663 [WARNING][3910] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" HandleID="k8s-pod-network.67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" Workload="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--68x4n-eth0"
Sep 13 01:57:38.670604 containerd[1514]: 2025-09-13 01:57:38.663 [INFO][3910] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" HandleID="k8s-pod-network.67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" Workload="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--68x4n-eth0"
Sep 13 01:57:38.670604 containerd[1514]: 2025-09-13 01:57:38.665 [INFO][3910] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 01:57:38.670604 containerd[1514]: 2025-09-13 01:57:38.668 [INFO][3903] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f"
Sep 13 01:57:38.673976 containerd[1514]: time="2025-09-13T01:57:38.671299867Z" level=info msg="TearDown network for sandbox \"67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f\" successfully"
Sep 13 01:57:38.673976 containerd[1514]: time="2025-09-13T01:57:38.671345007Z" level=info msg="StopPodSandbox for \"67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f\" returns successfully"
Sep 13 01:57:38.676223 containerd[1514]: time="2025-09-13T01:57:38.675699839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-68x4n,Uid:559e1cf7-31cd-4748-a7af-9a9c681ae085,Namespace:kube-system,Attempt:1,}"
Sep 13 01:57:38.676184 systemd[1]: run-netns-cni\x2d54514348\x2dd497\x2d5b7c\x2d8c9e\x2d9131197cedee.mount: Deactivated successfully.
Sep 13 01:57:38.715056 kubelet[2683]: I0913 01:57:38.713951 2683 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a261ccdc-2511-4035-b432-81033de1ca03-whisker-ca-bundle\") on node \"srv-vx6h6.gb1.brightbox.com\" DevicePath \"\"" Sep 13 01:57:38.715056 kubelet[2683]: I0913 01:57:38.714002 2683 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkrsc\" (UniqueName: \"kubernetes.io/projected/a261ccdc-2511-4035-b432-81033de1ca03-kube-api-access-zkrsc\") on node \"srv-vx6h6.gb1.brightbox.com\" DevicePath \"\"" Sep 13 01:57:38.715056 kubelet[2683]: I0913 01:57:38.714022 2683 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a261ccdc-2511-4035-b432-81033de1ca03-whisker-backend-key-pair\") on node \"srv-vx6h6.gb1.brightbox.com\" DevicePath \"\"" Sep 13 01:57:38.853689 systemd[1]: Removed slice kubepods-besteffort-poda261ccdc_2511_4035_b432_81033de1ca03.slice - libcontainer container kubepods-besteffort-poda261ccdc_2511_4035_b432_81033de1ca03.slice. Sep 13 01:57:38.894899 systemd[1]: run-containerd-runc-k8s.io-d99df2c38b41a1bc94d79ecf4d7296b727cda229eba3d9d1a9d999daba0736cf-runc.8P6R24.mount: Deactivated successfully. 
Sep 13 01:57:38.980786 systemd-networkd[1436]: calib083811551e: Link UP Sep 13 01:57:38.982847 systemd-networkd[1436]: calib083811551e: Gained carrier Sep 13 01:57:39.033067 containerd[1514]: 2025-09-13 01:57:38.752 [INFO][3920] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 01:57:39.033067 containerd[1514]: 2025-09-13 01:57:38.768 [INFO][3920] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--68x4n-eth0 coredns-7c65d6cfc9- kube-system 559e1cf7-31cd-4748-a7af-9a9c681ae085 919 0 2025-09-13 01:56:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-vx6h6.gb1.brightbox.com coredns-7c65d6cfc9-68x4n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib083811551e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="736a96acefaef31604679d1651e773d6f8bbf426d851a2cba67c14dc9dbb1934" Namespace="kube-system" Pod="coredns-7c65d6cfc9-68x4n" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--68x4n-" Sep 13 01:57:39.033067 containerd[1514]: 2025-09-13 01:57:38.768 [INFO][3920] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="736a96acefaef31604679d1651e773d6f8bbf426d851a2cba67c14dc9dbb1934" Namespace="kube-system" Pod="coredns-7c65d6cfc9-68x4n" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--68x4n-eth0" Sep 13 01:57:39.033067 containerd[1514]: 2025-09-13 01:57:38.817 [INFO][3931] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="736a96acefaef31604679d1651e773d6f8bbf426d851a2cba67c14dc9dbb1934" HandleID="k8s-pod-network.736a96acefaef31604679d1651e773d6f8bbf426d851a2cba67c14dc9dbb1934" Workload="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--68x4n-eth0" Sep 13 
01:57:39.033067 containerd[1514]: 2025-09-13 01:57:38.817 [INFO][3931] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="736a96acefaef31604679d1651e773d6f8bbf426d851a2cba67c14dc9dbb1934" HandleID="k8s-pod-network.736a96acefaef31604679d1651e773d6f8bbf426d851a2cba67c14dc9dbb1934" Workload="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--68x4n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd6b0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-vx6h6.gb1.brightbox.com", "pod":"coredns-7c65d6cfc9-68x4n", "timestamp":"2025-09-13 01:57:38.817176714 +0000 UTC"}, Hostname:"srv-vx6h6.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 01:57:39.033067 containerd[1514]: 2025-09-13 01:57:38.817 [INFO][3931] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:57:39.033067 containerd[1514]: 2025-09-13 01:57:38.817 [INFO][3931] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 01:57:39.033067 containerd[1514]: 2025-09-13 01:57:38.817 [INFO][3931] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-vx6h6.gb1.brightbox.com' Sep 13 01:57:39.033067 containerd[1514]: 2025-09-13 01:57:38.828 [INFO][3931] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.736a96acefaef31604679d1651e773d6f8bbf426d851a2cba67c14dc9dbb1934" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:39.033067 containerd[1514]: 2025-09-13 01:57:38.841 [INFO][3931] ipam/ipam.go 394: Looking up existing affinities for host host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:39.033067 containerd[1514]: 2025-09-13 01:57:38.858 [INFO][3931] ipam/ipam.go 511: Trying affinity for 192.168.100.192/26 host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:39.033067 containerd[1514]: 2025-09-13 01:57:38.863 [INFO][3931] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.192/26 host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:39.033067 containerd[1514]: 2025-09-13 01:57:38.869 [INFO][3931] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:39.033067 containerd[1514]: 2025-09-13 01:57:38.869 [INFO][3931] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.736a96acefaef31604679d1651e773d6f8bbf426d851a2cba67c14dc9dbb1934" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:39.033067 containerd[1514]: 2025-09-13 01:57:38.875 [INFO][3931] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.736a96acefaef31604679d1651e773d6f8bbf426d851a2cba67c14dc9dbb1934 Sep 13 01:57:39.033067 containerd[1514]: 2025-09-13 01:57:38.899 [INFO][3931] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.736a96acefaef31604679d1651e773d6f8bbf426d851a2cba67c14dc9dbb1934" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:39.033067 containerd[1514]: 2025-09-13 01:57:38.926 
[INFO][3931] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.100.193/26] block=192.168.100.192/26 handle="k8s-pod-network.736a96acefaef31604679d1651e773d6f8bbf426d851a2cba67c14dc9dbb1934" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:39.033067 containerd[1514]: 2025-09-13 01:57:38.926 [INFO][3931] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.193/26] handle="k8s-pod-network.736a96acefaef31604679d1651e773d6f8bbf426d851a2cba67c14dc9dbb1934" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:39.033067 containerd[1514]: 2025-09-13 01:57:38.926 [INFO][3931] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:57:39.033067 containerd[1514]: 2025-09-13 01:57:38.926 [INFO][3931] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.193/26] IPv6=[] ContainerID="736a96acefaef31604679d1651e773d6f8bbf426d851a2cba67c14dc9dbb1934" HandleID="k8s-pod-network.736a96acefaef31604679d1651e773d6f8bbf426d851a2cba67c14dc9dbb1934" Workload="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--68x4n-eth0" Sep 13 01:57:39.036861 containerd[1514]: 2025-09-13 01:57:38.933 [INFO][3920] cni-plugin/k8s.go 418: Populated endpoint ContainerID="736a96acefaef31604679d1651e773d6f8bbf426d851a2cba67c14dc9dbb1934" Namespace="kube-system" Pod="coredns-7c65d6cfc9-68x4n" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--68x4n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--68x4n-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"559e1cf7-31cd-4748-a7af-9a9c681ae085", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 56, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vx6h6.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7c65d6cfc9-68x4n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib083811551e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:57:39.036861 containerd[1514]: 2025-09-13 01:57:38.933 [INFO][3920] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.193/32] ContainerID="736a96acefaef31604679d1651e773d6f8bbf426d851a2cba67c14dc9dbb1934" Namespace="kube-system" Pod="coredns-7c65d6cfc9-68x4n" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--68x4n-eth0" Sep 13 01:57:39.036861 containerd[1514]: 2025-09-13 01:57:38.933 [INFO][3920] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib083811551e ContainerID="736a96acefaef31604679d1651e773d6f8bbf426d851a2cba67c14dc9dbb1934" Namespace="kube-system" Pod="coredns-7c65d6cfc9-68x4n" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--68x4n-eth0" Sep 13 01:57:39.036861 containerd[1514]: 
2025-09-13 01:57:38.968 [INFO][3920] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="736a96acefaef31604679d1651e773d6f8bbf426d851a2cba67c14dc9dbb1934" Namespace="kube-system" Pod="coredns-7c65d6cfc9-68x4n" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--68x4n-eth0" Sep 13 01:57:39.036861 containerd[1514]: 2025-09-13 01:57:38.969 [INFO][3920] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="736a96acefaef31604679d1651e773d6f8bbf426d851a2cba67c14dc9dbb1934" Namespace="kube-system" Pod="coredns-7c65d6cfc9-68x4n" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--68x4n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--68x4n-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"559e1cf7-31cd-4748-a7af-9a9c681ae085", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 56, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vx6h6.gb1.brightbox.com", ContainerID:"736a96acefaef31604679d1651e773d6f8bbf426d851a2cba67c14dc9dbb1934", Pod:"coredns-7c65d6cfc9-68x4n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calib083811551e", MAC:"3a:0d:b3:19:6b:5b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:57:39.036861 containerd[1514]: 2025-09-13 01:57:39.027 [INFO][3920] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="736a96acefaef31604679d1651e773d6f8bbf426d851a2cba67c14dc9dbb1934" Namespace="kube-system" Pod="coredns-7c65d6cfc9-68x4n" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--68x4n-eth0" Sep 13 01:57:39.106855 systemd[1]: Created slice kubepods-besteffort-poda83f473c_8cc6_4e14_a06a_2b9faf3fe5ea.slice - libcontainer container kubepods-besteffort-poda83f473c_8cc6_4e14_a06a_2b9faf3fe5ea.slice. 
Sep 13 01:57:39.118562 kubelet[2683]: I0913 01:57:39.118475 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrggx\" (UniqueName: \"kubernetes.io/projected/a83f473c-8cc6-4e14-a06a-2b9faf3fe5ea-kube-api-access-vrggx\") pod \"whisker-55489fb99-r5qhw\" (UID: \"a83f473c-8cc6-4e14-a06a-2b9faf3fe5ea\") " pod="calico-system/whisker-55489fb99-r5qhw" Sep 13 01:57:39.118562 kubelet[2683]: I0913 01:57:39.118534 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a83f473c-8cc6-4e14-a06a-2b9faf3fe5ea-whisker-backend-key-pair\") pod \"whisker-55489fb99-r5qhw\" (UID: \"a83f473c-8cc6-4e14-a06a-2b9faf3fe5ea\") " pod="calico-system/whisker-55489fb99-r5qhw" Sep 13 01:57:39.119144 kubelet[2683]: I0913 01:57:39.118577 2683 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a83f473c-8cc6-4e14-a06a-2b9faf3fe5ea-whisker-ca-bundle\") pod \"whisker-55489fb99-r5qhw\" (UID: \"a83f473c-8cc6-4e14-a06a-2b9faf3fe5ea\") " pod="calico-system/whisker-55489fb99-r5qhw" Sep 13 01:57:39.120280 containerd[1514]: time="2025-09-13T01:57:39.120088036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:57:39.121993 containerd[1514]: time="2025-09-13T01:57:39.121731634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:57:39.122272 containerd[1514]: time="2025-09-13T01:57:39.121980670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:57:39.122969 containerd[1514]: time="2025-09-13T01:57:39.122606761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:57:39.169720 systemd[1]: Started cri-containerd-736a96acefaef31604679d1651e773d6f8bbf426d851a2cba67c14dc9dbb1934.scope - libcontainer container 736a96acefaef31604679d1651e773d6f8bbf426d851a2cba67c14dc9dbb1934. Sep 13 01:57:39.280265 containerd[1514]: time="2025-09-13T01:57:39.276689361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-68x4n,Uid:559e1cf7-31cd-4748-a7af-9a9c681ae085,Namespace:kube-system,Attempt:1,} returns sandbox id \"736a96acefaef31604679d1651e773d6f8bbf426d851a2cba67c14dc9dbb1934\"" Sep 13 01:57:39.287990 containerd[1514]: time="2025-09-13T01:57:39.287950017Z" level=info msg="CreateContainer within sandbox \"736a96acefaef31604679d1651e773d6f8bbf426d851a2cba67c14dc9dbb1934\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 01:57:39.314169 containerd[1514]: time="2025-09-13T01:57:39.314105838Z" level=info msg="CreateContainer within sandbox \"736a96acefaef31604679d1651e773d6f8bbf426d851a2cba67c14dc9dbb1934\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7e4916fa94296ba1428f11c506675b1dfba91d227dbec2687b64b84097f69b5f\"" Sep 13 01:57:39.315532 containerd[1514]: time="2025-09-13T01:57:39.315481318Z" level=info msg="StartContainer for \"7e4916fa94296ba1428f11c506675b1dfba91d227dbec2687b64b84097f69b5f\"" Sep 13 01:57:39.356472 systemd[1]: Started cri-containerd-7e4916fa94296ba1428f11c506675b1dfba91d227dbec2687b64b84097f69b5f.scope - libcontainer container 7e4916fa94296ba1428f11c506675b1dfba91d227dbec2687b64b84097f69b5f. 
Sep 13 01:57:39.418104 containerd[1514]: time="2025-09-13T01:57:39.417521953Z" level=info msg="StartContainer for \"7e4916fa94296ba1428f11c506675b1dfba91d227dbec2687b64b84097f69b5f\" returns successfully" Sep 13 01:57:39.423074 containerd[1514]: time="2025-09-13T01:57:39.422372657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55489fb99-r5qhw,Uid:a83f473c-8cc6-4e14-a06a-2b9faf3fe5ea,Namespace:calico-system,Attempt:0,}" Sep 13 01:57:39.519609 containerd[1514]: time="2025-09-13T01:57:39.519552317Z" level=info msg="StopPodSandbox for \"8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814\"" Sep 13 01:57:39.523717 containerd[1514]: time="2025-09-13T01:57:39.523673232Z" level=info msg="StopPodSandbox for \"56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95\"" Sep 13 01:57:39.541669 kubelet[2683]: I0913 01:57:39.541095 2683 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a261ccdc-2511-4035-b432-81033de1ca03" path="/var/lib/kubelet/pods/a261ccdc-2511-4035-b432-81033de1ca03/volumes" Sep 13 01:57:39.891280 kubelet[2683]: I0913 01:57:39.890352 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-68x4n" podStartSLOduration=41.890303927 podStartE2EDuration="41.890303927s" podCreationTimestamp="2025-09-13 01:56:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:57:39.885404377 +0000 UTC m=+46.570815440" watchObservedRunningTime="2025-09-13 01:57:39.890303927 +0000 UTC m=+46.575714965" Sep 13 01:57:39.890727 systemd-networkd[1436]: cali91b94c25c80: Link UP Sep 13 01:57:39.893790 systemd-networkd[1436]: cali91b94c25c80: Gained carrier Sep 13 01:57:39.950017 containerd[1514]: 2025-09-13 01:57:39.507 [INFO][4071] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 01:57:39.950017 containerd[1514]: 2025-09-13 01:57:39.554 [INFO][4071] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--vx6h6.gb1.brightbox.com-k8s-whisker--55489fb99--r5qhw-eth0 whisker-55489fb99- calico-system a83f473c-8cc6-4e14-a06a-2b9faf3fe5ea 939 0 2025-09-13 01:57:38 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:55489fb99 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-vx6h6.gb1.brightbox.com whisker-55489fb99-r5qhw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali91b94c25c80 [] [] }} ContainerID="2518bfe11287a4caed7cc9bf6608e982aef6087151f62867b2e5839f93658573" Namespace="calico-system" Pod="whisker-55489fb99-r5qhw" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-whisker--55489fb99--r5qhw-" Sep 13 01:57:39.950017 containerd[1514]: 2025-09-13 01:57:39.554 [INFO][4071] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2518bfe11287a4caed7cc9bf6608e982aef6087151f62867b2e5839f93658573" Namespace="calico-system" Pod="whisker-55489fb99-r5qhw" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-whisker--55489fb99--r5qhw-eth0" Sep 13 01:57:39.950017 containerd[1514]: 2025-09-13 01:57:39.748 [INFO][4134] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2518bfe11287a4caed7cc9bf6608e982aef6087151f62867b2e5839f93658573" HandleID="k8s-pod-network.2518bfe11287a4caed7cc9bf6608e982aef6087151f62867b2e5839f93658573" Workload="srv--vx6h6.gb1.brightbox.com-k8s-whisker--55489fb99--r5qhw-eth0" Sep 13 01:57:39.950017 containerd[1514]: 2025-09-13 01:57:39.751 [INFO][4134] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2518bfe11287a4caed7cc9bf6608e982aef6087151f62867b2e5839f93658573" HandleID="k8s-pod-network.2518bfe11287a4caed7cc9bf6608e982aef6087151f62867b2e5839f93658573" Workload="srv--vx6h6.gb1.brightbox.com-k8s-whisker--55489fb99--r5qhw-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d59c0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-vx6h6.gb1.brightbox.com", "pod":"whisker-55489fb99-r5qhw", "timestamp":"2025-09-13 01:57:39.748163538 +0000 UTC"}, Hostname:"srv-vx6h6.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 01:57:39.950017 containerd[1514]: 2025-09-13 01:57:39.751 [INFO][4134] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:57:39.950017 containerd[1514]: 2025-09-13 01:57:39.751 [INFO][4134] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:57:39.950017 containerd[1514]: 2025-09-13 01:57:39.751 [INFO][4134] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-vx6h6.gb1.brightbox.com' Sep 13 01:57:39.950017 containerd[1514]: 2025-09-13 01:57:39.794 [INFO][4134] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2518bfe11287a4caed7cc9bf6608e982aef6087151f62867b2e5839f93658573" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:39.950017 containerd[1514]: 2025-09-13 01:57:39.806 [INFO][4134] ipam/ipam.go 394: Looking up existing affinities for host host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:39.950017 containerd[1514]: 2025-09-13 01:57:39.815 [INFO][4134] ipam/ipam.go 511: Trying affinity for 192.168.100.192/26 host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:39.950017 containerd[1514]: 2025-09-13 01:57:39.824 [INFO][4134] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.192/26 host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:39.950017 containerd[1514]: 2025-09-13 01:57:39.829 [INFO][4134] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:39.950017 containerd[1514]: 2025-09-13 01:57:39.829 
[INFO][4134] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.2518bfe11287a4caed7cc9bf6608e982aef6087151f62867b2e5839f93658573" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:39.950017 containerd[1514]: 2025-09-13 01:57:39.833 [INFO][4134] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2518bfe11287a4caed7cc9bf6608e982aef6087151f62867b2e5839f93658573 Sep 13 01:57:39.950017 containerd[1514]: 2025-09-13 01:57:39.841 [INFO][4134] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.2518bfe11287a4caed7cc9bf6608e982aef6087151f62867b2e5839f93658573" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:39.950017 containerd[1514]: 2025-09-13 01:57:39.859 [INFO][4134] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.100.194/26] block=192.168.100.192/26 handle="k8s-pod-network.2518bfe11287a4caed7cc9bf6608e982aef6087151f62867b2e5839f93658573" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:39.950017 containerd[1514]: 2025-09-13 01:57:39.860 [INFO][4134] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.194/26] handle="k8s-pod-network.2518bfe11287a4caed7cc9bf6608e982aef6087151f62867b2e5839f93658573" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:39.950017 containerd[1514]: 2025-09-13 01:57:39.860 [INFO][4134] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 01:57:39.950017 containerd[1514]: 2025-09-13 01:57:39.860 [INFO][4134] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.194/26] IPv6=[] ContainerID="2518bfe11287a4caed7cc9bf6608e982aef6087151f62867b2e5839f93658573" HandleID="k8s-pod-network.2518bfe11287a4caed7cc9bf6608e982aef6087151f62867b2e5839f93658573" Workload="srv--vx6h6.gb1.brightbox.com-k8s-whisker--55489fb99--r5qhw-eth0" Sep 13 01:57:39.953688 containerd[1514]: 2025-09-13 01:57:39.879 [INFO][4071] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2518bfe11287a4caed7cc9bf6608e982aef6087151f62867b2e5839f93658573" Namespace="calico-system" Pod="whisker-55489fb99-r5qhw" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-whisker--55489fb99--r5qhw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vx6h6.gb1.brightbox.com-k8s-whisker--55489fb99--r5qhw-eth0", GenerateName:"whisker-55489fb99-", Namespace:"calico-system", SelfLink:"", UID:"a83f473c-8cc6-4e14-a06a-2b9faf3fe5ea", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 57, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55489fb99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vx6h6.gb1.brightbox.com", ContainerID:"", Pod:"whisker-55489fb99-r5qhw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.100.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali91b94c25c80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:57:39.953688 containerd[1514]: 2025-09-13 01:57:39.879 [INFO][4071] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.194/32] ContainerID="2518bfe11287a4caed7cc9bf6608e982aef6087151f62867b2e5839f93658573" Namespace="calico-system" Pod="whisker-55489fb99-r5qhw" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-whisker--55489fb99--r5qhw-eth0" Sep 13 01:57:39.953688 containerd[1514]: 2025-09-13 01:57:39.879 [INFO][4071] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali91b94c25c80 ContainerID="2518bfe11287a4caed7cc9bf6608e982aef6087151f62867b2e5839f93658573" Namespace="calico-system" Pod="whisker-55489fb99-r5qhw" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-whisker--55489fb99--r5qhw-eth0" Sep 13 01:57:39.953688 containerd[1514]: 2025-09-13 01:57:39.896 [INFO][4071] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2518bfe11287a4caed7cc9bf6608e982aef6087151f62867b2e5839f93658573" Namespace="calico-system" Pod="whisker-55489fb99-r5qhw" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-whisker--55489fb99--r5qhw-eth0" Sep 13 01:57:39.953688 containerd[1514]: 2025-09-13 01:57:39.899 [INFO][4071] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2518bfe11287a4caed7cc9bf6608e982aef6087151f62867b2e5839f93658573" Namespace="calico-system" Pod="whisker-55489fb99-r5qhw" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-whisker--55489fb99--r5qhw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vx6h6.gb1.brightbox.com-k8s-whisker--55489fb99--r5qhw-eth0", GenerateName:"whisker-55489fb99-", Namespace:"calico-system", SelfLink:"", UID:"a83f473c-8cc6-4e14-a06a-2b9faf3fe5ea", 
ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 57, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55489fb99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vx6h6.gb1.brightbox.com", ContainerID:"2518bfe11287a4caed7cc9bf6608e982aef6087151f62867b2e5839f93658573", Pod:"whisker-55489fb99-r5qhw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.100.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali91b94c25c80", MAC:"a2:6c:89:0f:7e:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:57:39.953688 containerd[1514]: 2025-09-13 01:57:39.940 [INFO][4071] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2518bfe11287a4caed7cc9bf6608e982aef6087151f62867b2e5839f93658573" Namespace="calico-system" Pod="whisker-55489fb99-r5qhw" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-whisker--55489fb99--r5qhw-eth0" Sep 13 01:57:39.999019 containerd[1514]: 2025-09-13 01:57:39.747 [INFO][4117] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" Sep 13 01:57:39.999019 containerd[1514]: 2025-09-13 01:57:39.750 [INFO][4117] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" iface="eth0" netns="/var/run/netns/cni-4983bc6a-8655-c19f-3a92-99abcbd2e9b5" Sep 13 01:57:39.999019 containerd[1514]: 2025-09-13 01:57:39.751 [INFO][4117] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" iface="eth0" netns="/var/run/netns/cni-4983bc6a-8655-c19f-3a92-99abcbd2e9b5" Sep 13 01:57:39.999019 containerd[1514]: 2025-09-13 01:57:39.753 [INFO][4117] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" iface="eth0" netns="/var/run/netns/cni-4983bc6a-8655-c19f-3a92-99abcbd2e9b5" Sep 13 01:57:39.999019 containerd[1514]: 2025-09-13 01:57:39.753 [INFO][4117] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" Sep 13 01:57:39.999019 containerd[1514]: 2025-09-13 01:57:39.753 [INFO][4117] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" Sep 13 01:57:39.999019 containerd[1514]: 2025-09-13 01:57:39.931 [INFO][4158] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" HandleID="k8s-pod-network.8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--kube--controllers--5c587bb5c5--cqcsm-eth0" Sep 13 01:57:39.999019 containerd[1514]: 2025-09-13 01:57:39.932 [INFO][4158] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:57:39.999019 containerd[1514]: 2025-09-13 01:57:39.932 [INFO][4158] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 01:57:39.999019 containerd[1514]: 2025-09-13 01:57:39.970 [WARNING][4158] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" HandleID="k8s-pod-network.8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--kube--controllers--5c587bb5c5--cqcsm-eth0" Sep 13 01:57:39.999019 containerd[1514]: 2025-09-13 01:57:39.971 [INFO][4158] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" HandleID="k8s-pod-network.8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--kube--controllers--5c587bb5c5--cqcsm-eth0" Sep 13 01:57:39.999019 containerd[1514]: 2025-09-13 01:57:39.987 [INFO][4158] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:57:39.999019 containerd[1514]: 2025-09-13 01:57:39.996 [INFO][4117] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" Sep 13 01:57:40.003377 containerd[1514]: time="2025-09-13T01:57:40.003316128Z" level=info msg="TearDown network for sandbox \"8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814\" successfully" Sep 13 01:57:40.003377 containerd[1514]: time="2025-09-13T01:57:40.003372736Z" level=info msg="StopPodSandbox for \"8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814\" returns successfully" Sep 13 01:57:40.005861 systemd[1]: run-netns-cni\x2d4983bc6a\x2d8655\x2dc19f\x2d3a92\x2d99abcbd2e9b5.mount: Deactivated successfully. 
Sep 13 01:57:40.008840 containerd[1514]: time="2025-09-13T01:57:40.008796661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c587bb5c5-cqcsm,Uid:cc69bc79-4f97-4def-9cfb-3dbc4bb33d44,Namespace:calico-system,Attempt:1,}" Sep 13 01:57:40.076376 containerd[1514]: 2025-09-13 01:57:39.778 [INFO][4128] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" Sep 13 01:57:40.076376 containerd[1514]: 2025-09-13 01:57:39.780 [INFO][4128] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" iface="eth0" netns="/var/run/netns/cni-c8725f61-3e58-b881-35e1-b739e2d955b3" Sep 13 01:57:40.076376 containerd[1514]: 2025-09-13 01:57:39.782 [INFO][4128] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" iface="eth0" netns="/var/run/netns/cni-c8725f61-3e58-b881-35e1-b739e2d955b3" Sep 13 01:57:40.076376 containerd[1514]: 2025-09-13 01:57:39.782 [INFO][4128] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" iface="eth0" netns="/var/run/netns/cni-c8725f61-3e58-b881-35e1-b739e2d955b3" Sep 13 01:57:40.076376 containerd[1514]: 2025-09-13 01:57:39.783 [INFO][4128] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" Sep 13 01:57:40.076376 containerd[1514]: 2025-09-13 01:57:39.784 [INFO][4128] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" Sep 13 01:57:40.076376 containerd[1514]: 2025-09-13 01:57:39.991 [INFO][4165] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" HandleID="k8s-pod-network.56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" Workload="srv--vx6h6.gb1.brightbox.com-k8s-goldmane--7988f88666--wjgs4-eth0" Sep 13 01:57:40.076376 containerd[1514]: 2025-09-13 01:57:39.992 [INFO][4165] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:57:40.076376 containerd[1514]: 2025-09-13 01:57:39.992 [INFO][4165] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:57:40.076376 containerd[1514]: 2025-09-13 01:57:40.042 [WARNING][4165] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" HandleID="k8s-pod-network.56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" Workload="srv--vx6h6.gb1.brightbox.com-k8s-goldmane--7988f88666--wjgs4-eth0" Sep 13 01:57:40.076376 containerd[1514]: 2025-09-13 01:57:40.042 [INFO][4165] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" HandleID="k8s-pod-network.56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" Workload="srv--vx6h6.gb1.brightbox.com-k8s-goldmane--7988f88666--wjgs4-eth0" Sep 13 01:57:40.076376 containerd[1514]: 2025-09-13 01:57:40.058 [INFO][4165] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:57:40.076376 containerd[1514]: 2025-09-13 01:57:40.071 [INFO][4128] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" Sep 13 01:57:40.078942 containerd[1514]: time="2025-09-13T01:57:40.077571807Z" level=info msg="TearDown network for sandbox \"56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95\" successfully" Sep 13 01:57:40.078942 containerd[1514]: time="2025-09-13T01:57:40.077606534Z" level=info msg="StopPodSandbox for \"56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95\" returns successfully" Sep 13 01:57:40.081291 containerd[1514]: time="2025-09-13T01:57:40.079475680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-wjgs4,Uid:0c8637e6-080f-42e2-bed2-e192627db354,Namespace:calico-system,Attempt:1,}" Sep 13 01:57:40.108767 containerd[1514]: time="2025-09-13T01:57:40.107058199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:57:40.108767 containerd[1514]: time="2025-09-13T01:57:40.108648975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:57:40.108767 containerd[1514]: time="2025-09-13T01:57:40.108672433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:57:40.110488 containerd[1514]: time="2025-09-13T01:57:40.109171918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:57:40.168399 systemd[1]: Started cri-containerd-2518bfe11287a4caed7cc9bf6608e982aef6087151f62867b2e5839f93658573.scope - libcontainer container 2518bfe11287a4caed7cc9bf6608e982aef6087151f62867b2e5839f93658573. Sep 13 01:57:40.473503 systemd-networkd[1436]: caliecebe858f31: Link UP Sep 13 01:57:40.478977 systemd-networkd[1436]: caliecebe858f31: Gained carrier Sep 13 01:57:40.516054 containerd[1514]: 2025-09-13 01:57:40.193 [INFO][4229] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 01:57:40.516054 containerd[1514]: 2025-09-13 01:57:40.221 [INFO][4229] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--vx6h6.gb1.brightbox.com-k8s-calico--kube--controllers--5c587bb5c5--cqcsm-eth0 calico-kube-controllers-5c587bb5c5- calico-system cc69bc79-4f97-4def-9cfb-3dbc4bb33d44 946 0 2025-09-13 01:57:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5c587bb5c5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-vx6h6.gb1.brightbox.com calico-kube-controllers-5c587bb5c5-cqcsm eth0 calico-kube-controllers [] [] [kns.calico-system 
ksa.calico-system.calico-kube-controllers] caliecebe858f31 [] [] }} ContainerID="c98c17f10e9a7f40ed2d6ef0e8647d6cc9f8ec7b7d65ae698634f41474901405" Namespace="calico-system" Pod="calico-kube-controllers-5c587bb5c5-cqcsm" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-calico--kube--controllers--5c587bb5c5--cqcsm-" Sep 13 01:57:40.516054 containerd[1514]: 2025-09-13 01:57:40.222 [INFO][4229] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c98c17f10e9a7f40ed2d6ef0e8647d6cc9f8ec7b7d65ae698634f41474901405" Namespace="calico-system" Pod="calico-kube-controllers-5c587bb5c5-cqcsm" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-calico--kube--controllers--5c587bb5c5--cqcsm-eth0" Sep 13 01:57:40.516054 containerd[1514]: 2025-09-13 01:57:40.379 [INFO][4277] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c98c17f10e9a7f40ed2d6ef0e8647d6cc9f8ec7b7d65ae698634f41474901405" HandleID="k8s-pod-network.c98c17f10e9a7f40ed2d6ef0e8647d6cc9f8ec7b7d65ae698634f41474901405" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--kube--controllers--5c587bb5c5--cqcsm-eth0" Sep 13 01:57:40.516054 containerd[1514]: 2025-09-13 01:57:40.379 [INFO][4277] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c98c17f10e9a7f40ed2d6ef0e8647d6cc9f8ec7b7d65ae698634f41474901405" HandleID="k8s-pod-network.c98c17f10e9a7f40ed2d6ef0e8647d6cc9f8ec7b7d65ae698634f41474901405" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--kube--controllers--5c587bb5c5--cqcsm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000349530), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-vx6h6.gb1.brightbox.com", "pod":"calico-kube-controllers-5c587bb5c5-cqcsm", "timestamp":"2025-09-13 01:57:40.379608822 +0000 UTC"}, Hostname:"srv-vx6h6.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Sep 13 01:57:40.516054 containerd[1514]: 2025-09-13 01:57:40.379 [INFO][4277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:57:40.516054 containerd[1514]: 2025-09-13 01:57:40.380 [INFO][4277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:57:40.516054 containerd[1514]: 2025-09-13 01:57:40.380 [INFO][4277] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-vx6h6.gb1.brightbox.com' Sep 13 01:57:40.516054 containerd[1514]: 2025-09-13 01:57:40.398 [INFO][4277] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c98c17f10e9a7f40ed2d6ef0e8647d6cc9f8ec7b7d65ae698634f41474901405" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:40.516054 containerd[1514]: 2025-09-13 01:57:40.406 [INFO][4277] ipam/ipam.go 394: Looking up existing affinities for host host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:40.516054 containerd[1514]: 2025-09-13 01:57:40.415 [INFO][4277] ipam/ipam.go 511: Trying affinity for 192.168.100.192/26 host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:40.516054 containerd[1514]: 2025-09-13 01:57:40.419 [INFO][4277] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.192/26 host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:40.516054 containerd[1514]: 2025-09-13 01:57:40.423 [INFO][4277] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:40.516054 containerd[1514]: 2025-09-13 01:57:40.423 [INFO][4277] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.c98c17f10e9a7f40ed2d6ef0e8647d6cc9f8ec7b7d65ae698634f41474901405" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:40.516054 containerd[1514]: 2025-09-13 01:57:40.425 [INFO][4277] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c98c17f10e9a7f40ed2d6ef0e8647d6cc9f8ec7b7d65ae698634f41474901405 Sep 13 01:57:40.516054 containerd[1514]: 2025-09-13 
01:57:40.432 [INFO][4277] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.c98c17f10e9a7f40ed2d6ef0e8647d6cc9f8ec7b7d65ae698634f41474901405" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:40.516054 containerd[1514]: 2025-09-13 01:57:40.441 [INFO][4277] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.100.195/26] block=192.168.100.192/26 handle="k8s-pod-network.c98c17f10e9a7f40ed2d6ef0e8647d6cc9f8ec7b7d65ae698634f41474901405" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:40.516054 containerd[1514]: 2025-09-13 01:57:40.441 [INFO][4277] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.195/26] handle="k8s-pod-network.c98c17f10e9a7f40ed2d6ef0e8647d6cc9f8ec7b7d65ae698634f41474901405" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:40.516054 containerd[1514]: 2025-09-13 01:57:40.441 [INFO][4277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:57:40.516054 containerd[1514]: 2025-09-13 01:57:40.442 [INFO][4277] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.195/26] IPv6=[] ContainerID="c98c17f10e9a7f40ed2d6ef0e8647d6cc9f8ec7b7d65ae698634f41474901405" HandleID="k8s-pod-network.c98c17f10e9a7f40ed2d6ef0e8647d6cc9f8ec7b7d65ae698634f41474901405" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--kube--controllers--5c587bb5c5--cqcsm-eth0" Sep 13 01:57:40.518668 containerd[1514]: 2025-09-13 01:57:40.449 [INFO][4229] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c98c17f10e9a7f40ed2d6ef0e8647d6cc9f8ec7b7d65ae698634f41474901405" Namespace="calico-system" Pod="calico-kube-controllers-5c587bb5c5-cqcsm" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-calico--kube--controllers--5c587bb5c5--cqcsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vx6h6.gb1.brightbox.com-k8s-calico--kube--controllers--5c587bb5c5--cqcsm-eth0", 
GenerateName:"calico-kube-controllers-5c587bb5c5-", Namespace:"calico-system", SelfLink:"", UID:"cc69bc79-4f97-4def-9cfb-3dbc4bb33d44", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 57, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c587bb5c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vx6h6.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-5c587bb5c5-cqcsm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.100.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliecebe858f31", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:57:40.518668 containerd[1514]: 2025-09-13 01:57:40.450 [INFO][4229] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.195/32] ContainerID="c98c17f10e9a7f40ed2d6ef0e8647d6cc9f8ec7b7d65ae698634f41474901405" Namespace="calico-system" Pod="calico-kube-controllers-5c587bb5c5-cqcsm" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-calico--kube--controllers--5c587bb5c5--cqcsm-eth0" Sep 13 01:57:40.518668 containerd[1514]: 2025-09-13 01:57:40.450 [INFO][4229] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliecebe858f31 ContainerID="c98c17f10e9a7f40ed2d6ef0e8647d6cc9f8ec7b7d65ae698634f41474901405" Namespace="calico-system" 
Pod="calico-kube-controllers-5c587bb5c5-cqcsm" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-calico--kube--controllers--5c587bb5c5--cqcsm-eth0" Sep 13 01:57:40.518668 containerd[1514]: 2025-09-13 01:57:40.481 [INFO][4229] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c98c17f10e9a7f40ed2d6ef0e8647d6cc9f8ec7b7d65ae698634f41474901405" Namespace="calico-system" Pod="calico-kube-controllers-5c587bb5c5-cqcsm" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-calico--kube--controllers--5c587bb5c5--cqcsm-eth0" Sep 13 01:57:40.518668 containerd[1514]: 2025-09-13 01:57:40.482 [INFO][4229] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c98c17f10e9a7f40ed2d6ef0e8647d6cc9f8ec7b7d65ae698634f41474901405" Namespace="calico-system" Pod="calico-kube-controllers-5c587bb5c5-cqcsm" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-calico--kube--controllers--5c587bb5c5--cqcsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vx6h6.gb1.brightbox.com-k8s-calico--kube--controllers--5c587bb5c5--cqcsm-eth0", GenerateName:"calico-kube-controllers-5c587bb5c5-", Namespace:"calico-system", SelfLink:"", UID:"cc69bc79-4f97-4def-9cfb-3dbc4bb33d44", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 57, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c587bb5c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"srv-vx6h6.gb1.brightbox.com", ContainerID:"c98c17f10e9a7f40ed2d6ef0e8647d6cc9f8ec7b7d65ae698634f41474901405", Pod:"calico-kube-controllers-5c587bb5c5-cqcsm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.100.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliecebe858f31", MAC:"ce:1f:31:97:af:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:57:40.518668 containerd[1514]: 2025-09-13 01:57:40.510 [INFO][4229] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c98c17f10e9a7f40ed2d6ef0e8647d6cc9f8ec7b7d65ae698634f41474901405" Namespace="calico-system" Pod="calico-kube-controllers-5c587bb5c5-cqcsm" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-calico--kube--controllers--5c587bb5c5--cqcsm-eth0" Sep 13 01:57:40.520472 containerd[1514]: time="2025-09-13T01:57:40.519501064Z" level=info msg="StopPodSandbox for \"cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00\"" Sep 13 01:57:40.648408 containerd[1514]: time="2025-09-13T01:57:40.646979507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:57:40.648408 containerd[1514]: time="2025-09-13T01:57:40.647073357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:57:40.648408 containerd[1514]: time="2025-09-13T01:57:40.647095563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:57:40.652939 containerd[1514]: time="2025-09-13T01:57:40.650468224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:57:40.686040 systemd-networkd[1436]: cali854fbd33f88: Link UP Sep 13 01:57:40.686949 systemd-networkd[1436]: cali854fbd33f88: Gained carrier Sep 13 01:57:40.688741 systemd[1]: run-netns-cni\x2dc8725f61\x2d3e58\x2db881\x2d35e1\x2db739e2d955b3.mount: Deactivated successfully. Sep 13 01:57:40.723961 containerd[1514]: time="2025-09-13T01:57:40.723815638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55489fb99-r5qhw,Uid:a83f473c-8cc6-4e14-a06a-2b9faf3fe5ea,Namespace:calico-system,Attempt:0,} returns sandbox id \"2518bfe11287a4caed7cc9bf6608e982aef6087151f62867b2e5839f93658573\"" Sep 13 01:57:40.729466 containerd[1514]: time="2025-09-13T01:57:40.729367211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 01:57:40.746415 systemd[1]: Started cri-containerd-c98c17f10e9a7f40ed2d6ef0e8647d6cc9f8ec7b7d65ae698634f41474901405.scope - libcontainer container c98c17f10e9a7f40ed2d6ef0e8647d6cc9f8ec7b7d65ae698634f41474901405. 
Sep 13 01:57:40.764400 containerd[1514]: 2025-09-13 01:57:40.243 [INFO][4252] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 01:57:40.764400 containerd[1514]: 2025-09-13 01:57:40.282 [INFO][4252] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--vx6h6.gb1.brightbox.com-k8s-goldmane--7988f88666--wjgs4-eth0 goldmane-7988f88666- calico-system 0c8637e6-080f-42e2-bed2-e192627db354 947 0 2025-09-13 01:57:13 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-vx6h6.gb1.brightbox.com goldmane-7988f88666-wjgs4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali854fbd33f88 [] [] }} ContainerID="88c8e705b1a96435266af27d56e7f642876e8b030ed3826c55c38dc76995fb52" Namespace="calico-system" Pod="goldmane-7988f88666-wjgs4" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-goldmane--7988f88666--wjgs4-" Sep 13 01:57:40.764400 containerd[1514]: 2025-09-13 01:57:40.285 [INFO][4252] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="88c8e705b1a96435266af27d56e7f642876e8b030ed3826c55c38dc76995fb52" Namespace="calico-system" Pod="goldmane-7988f88666-wjgs4" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-goldmane--7988f88666--wjgs4-eth0" Sep 13 01:57:40.764400 containerd[1514]: 2025-09-13 01:57:40.409 [INFO][4282] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="88c8e705b1a96435266af27d56e7f642876e8b030ed3826c55c38dc76995fb52" HandleID="k8s-pod-network.88c8e705b1a96435266af27d56e7f642876e8b030ed3826c55c38dc76995fb52" Workload="srv--vx6h6.gb1.brightbox.com-k8s-goldmane--7988f88666--wjgs4-eth0" Sep 13 01:57:40.764400 containerd[1514]: 2025-09-13 01:57:40.409 [INFO][4282] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="88c8e705b1a96435266af27d56e7f642876e8b030ed3826c55c38dc76995fb52" HandleID="k8s-pod-network.88c8e705b1a96435266af27d56e7f642876e8b030ed3826c55c38dc76995fb52" Workload="srv--vx6h6.gb1.brightbox.com-k8s-goldmane--7988f88666--wjgs4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000102030), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-vx6h6.gb1.brightbox.com", "pod":"goldmane-7988f88666-wjgs4", "timestamp":"2025-09-13 01:57:40.409246999 +0000 UTC"}, Hostname:"srv-vx6h6.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 01:57:40.764400 containerd[1514]: 2025-09-13 01:57:40.410 [INFO][4282] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:57:40.764400 containerd[1514]: 2025-09-13 01:57:40.441 [INFO][4282] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 01:57:40.764400 containerd[1514]: 2025-09-13 01:57:40.443 [INFO][4282] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-vx6h6.gb1.brightbox.com' Sep 13 01:57:40.764400 containerd[1514]: 2025-09-13 01:57:40.497 [INFO][4282] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.88c8e705b1a96435266af27d56e7f642876e8b030ed3826c55c38dc76995fb52" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:40.764400 containerd[1514]: 2025-09-13 01:57:40.530 [INFO][4282] ipam/ipam.go 394: Looking up existing affinities for host host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:40.764400 containerd[1514]: 2025-09-13 01:57:40.550 [INFO][4282] ipam/ipam.go 511: Trying affinity for 192.168.100.192/26 host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:40.764400 containerd[1514]: 2025-09-13 01:57:40.561 [INFO][4282] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.192/26 host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:40.764400 containerd[1514]: 2025-09-13 01:57:40.579 [INFO][4282] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:40.764400 containerd[1514]: 2025-09-13 01:57:40.580 [INFO][4282] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.88c8e705b1a96435266af27d56e7f642876e8b030ed3826c55c38dc76995fb52" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:40.764400 containerd[1514]: 2025-09-13 01:57:40.587 [INFO][4282] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.88c8e705b1a96435266af27d56e7f642876e8b030ed3826c55c38dc76995fb52 Sep 13 01:57:40.764400 containerd[1514]: 2025-09-13 01:57:40.609 [INFO][4282] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.88c8e705b1a96435266af27d56e7f642876e8b030ed3826c55c38dc76995fb52" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:40.764400 containerd[1514]: 2025-09-13 01:57:40.641 
[INFO][4282] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.100.196/26] block=192.168.100.192/26 handle="k8s-pod-network.88c8e705b1a96435266af27d56e7f642876e8b030ed3826c55c38dc76995fb52" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:40.764400 containerd[1514]: 2025-09-13 01:57:40.641 [INFO][4282] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.196/26] handle="k8s-pod-network.88c8e705b1a96435266af27d56e7f642876e8b030ed3826c55c38dc76995fb52" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:40.764400 containerd[1514]: 2025-09-13 01:57:40.641 [INFO][4282] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:57:40.764400 containerd[1514]: 2025-09-13 01:57:40.641 [INFO][4282] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.196/26] IPv6=[] ContainerID="88c8e705b1a96435266af27d56e7f642876e8b030ed3826c55c38dc76995fb52" HandleID="k8s-pod-network.88c8e705b1a96435266af27d56e7f642876e8b030ed3826c55c38dc76995fb52" Workload="srv--vx6h6.gb1.brightbox.com-k8s-goldmane--7988f88666--wjgs4-eth0" Sep 13 01:57:40.766073 containerd[1514]: 2025-09-13 01:57:40.667 [INFO][4252] cni-plugin/k8s.go 418: Populated endpoint ContainerID="88c8e705b1a96435266af27d56e7f642876e8b030ed3826c55c38dc76995fb52" Namespace="calico-system" Pod="goldmane-7988f88666-wjgs4" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-goldmane--7988f88666--wjgs4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vx6h6.gb1.brightbox.com-k8s-goldmane--7988f88666--wjgs4-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"0c8637e6-080f-42e2-bed2-e192627db354", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 57, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", 
"pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vx6h6.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-7988f88666-wjgs4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.100.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali854fbd33f88", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:57:40.766073 containerd[1514]: 2025-09-13 01:57:40.668 [INFO][4252] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.196/32] ContainerID="88c8e705b1a96435266af27d56e7f642876e8b030ed3826c55c38dc76995fb52" Namespace="calico-system" Pod="goldmane-7988f88666-wjgs4" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-goldmane--7988f88666--wjgs4-eth0" Sep 13 01:57:40.766073 containerd[1514]: 2025-09-13 01:57:40.669 [INFO][4252] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali854fbd33f88 ContainerID="88c8e705b1a96435266af27d56e7f642876e8b030ed3826c55c38dc76995fb52" Namespace="calico-system" Pod="goldmane-7988f88666-wjgs4" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-goldmane--7988f88666--wjgs4-eth0" Sep 13 01:57:40.766073 containerd[1514]: 2025-09-13 01:57:40.690 [INFO][4252] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="88c8e705b1a96435266af27d56e7f642876e8b030ed3826c55c38dc76995fb52" Namespace="calico-system" Pod="goldmane-7988f88666-wjgs4" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-goldmane--7988f88666--wjgs4-eth0" Sep 13 01:57:40.766073 containerd[1514]: 
2025-09-13 01:57:40.712 [INFO][4252] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="88c8e705b1a96435266af27d56e7f642876e8b030ed3826c55c38dc76995fb52" Namespace="calico-system" Pod="goldmane-7988f88666-wjgs4" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-goldmane--7988f88666--wjgs4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vx6h6.gb1.brightbox.com-k8s-goldmane--7988f88666--wjgs4-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"0c8637e6-080f-42e2-bed2-e192627db354", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 57, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vx6h6.gb1.brightbox.com", ContainerID:"88c8e705b1a96435266af27d56e7f642876e8b030ed3826c55c38dc76995fb52", Pod:"goldmane-7988f88666-wjgs4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.100.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali854fbd33f88", MAC:"2e:6e:8a:53:23:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:57:40.766073 containerd[1514]: 2025-09-13 01:57:40.755 [INFO][4252] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="88c8e705b1a96435266af27d56e7f642876e8b030ed3826c55c38dc76995fb52" Namespace="calico-system" Pod="goldmane-7988f88666-wjgs4" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-goldmane--7988f88666--wjgs4-eth0" Sep 13 01:57:40.828511 containerd[1514]: time="2025-09-13T01:57:40.827238862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:57:40.828511 containerd[1514]: time="2025-09-13T01:57:40.828233382Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:57:40.828511 containerd[1514]: time="2025-09-13T01:57:40.828253192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:57:40.828511 containerd[1514]: time="2025-09-13T01:57:40.828374295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:57:40.900414 systemd[1]: Started cri-containerd-88c8e705b1a96435266af27d56e7f642876e8b030ed3826c55c38dc76995fb52.scope - libcontainer container 88c8e705b1a96435266af27d56e7f642876e8b030ed3826c55c38dc76995fb52. Sep 13 01:57:40.967254 containerd[1514]: 2025-09-13 01:57:40.838 [INFO][4310] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" Sep 13 01:57:40.967254 containerd[1514]: 2025-09-13 01:57:40.843 [INFO][4310] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" iface="eth0" netns="/var/run/netns/cni-bbd96789-d0a9-927c-5e4e-d470dfbf25ab" Sep 13 01:57:40.967254 containerd[1514]: 2025-09-13 01:57:40.843 [INFO][4310] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" iface="eth0" netns="/var/run/netns/cni-bbd96789-d0a9-927c-5e4e-d470dfbf25ab" Sep 13 01:57:40.967254 containerd[1514]: 2025-09-13 01:57:40.844 [INFO][4310] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" iface="eth0" netns="/var/run/netns/cni-bbd96789-d0a9-927c-5e4e-d470dfbf25ab" Sep 13 01:57:40.967254 containerd[1514]: 2025-09-13 01:57:40.844 [INFO][4310] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" Sep 13 01:57:40.967254 containerd[1514]: 2025-09-13 01:57:40.844 [INFO][4310] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" Sep 13 01:57:40.967254 containerd[1514]: 2025-09-13 01:57:40.940 [INFO][4382] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" HandleID="k8s-pod-network.cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" Workload="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--pmxvt-eth0" Sep 13 01:57:40.967254 containerd[1514]: 2025-09-13 01:57:40.940 [INFO][4382] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:57:40.967254 containerd[1514]: 2025-09-13 01:57:40.941 [INFO][4382] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:57:40.967254 containerd[1514]: 2025-09-13 01:57:40.955 [WARNING][4382] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" HandleID="k8s-pod-network.cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" Workload="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--pmxvt-eth0" Sep 13 01:57:40.967254 containerd[1514]: 2025-09-13 01:57:40.956 [INFO][4382] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" HandleID="k8s-pod-network.cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" Workload="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--pmxvt-eth0" Sep 13 01:57:40.967254 containerd[1514]: 2025-09-13 01:57:40.958 [INFO][4382] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:57:40.967254 containerd[1514]: 2025-09-13 01:57:40.964 [INFO][4310] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" Sep 13 01:57:40.971998 containerd[1514]: time="2025-09-13T01:57:40.971959110Z" level=info msg="TearDown network for sandbox \"cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00\" successfully" Sep 13 01:57:40.972138 containerd[1514]: time="2025-09-13T01:57:40.972110163Z" level=info msg="StopPodSandbox for \"cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00\" returns successfully" Sep 13 01:57:40.972986 systemd[1]: run-netns-cni\x2dbbd96789\x2dd0a9\x2d927c\x2d5e4e\x2dd470dfbf25ab.mount: Deactivated successfully. 
Sep 13 01:57:40.976675 containerd[1514]: time="2025-09-13T01:57:40.976389377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pmxvt,Uid:4497aabf-19b8-4111-a199-50d6361d00e3,Namespace:kube-system,Attempt:1,}" Sep 13 01:57:41.004781 systemd-networkd[1436]: calib083811551e: Gained IPv6LL Sep 13 01:57:41.290790 containerd[1514]: time="2025-09-13T01:57:41.290494024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c587bb5c5-cqcsm,Uid:cc69bc79-4f97-4def-9cfb-3dbc4bb33d44,Namespace:calico-system,Attempt:1,} returns sandbox id \"c98c17f10e9a7f40ed2d6ef0e8647d6cc9f8ec7b7d65ae698634f41474901405\"" Sep 13 01:57:41.300065 containerd[1514]: time="2025-09-13T01:57:41.299420102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-wjgs4,Uid:0c8637e6-080f-42e2-bed2-e192627db354,Namespace:calico-system,Attempt:1,} returns sandbox id \"88c8e705b1a96435266af27d56e7f642876e8b030ed3826c55c38dc76995fb52\"" Sep 13 01:57:41.403716 systemd-networkd[1436]: cali10c7e668088: Link UP Sep 13 01:57:41.406110 systemd-networkd[1436]: cali10c7e668088: Gained carrier Sep 13 01:57:41.430694 containerd[1514]: 2025-09-13 01:57:41.100 [INFO][4407] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 01:57:41.430694 containerd[1514]: 2025-09-13 01:57:41.152 [INFO][4407] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--pmxvt-eth0 coredns-7c65d6cfc9- kube-system 4497aabf-19b8-4111-a199-50d6361d00e3 965 0 2025-09-13 01:56:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-vx6h6.gb1.brightbox.com coredns-7c65d6cfc9-pmxvt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali10c7e668088 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } 
{metrics TCP 9153 0 }] [] }} ContainerID="f9d693cc2b998867307db5c1d1ea6937a69cde0c611029fd18129070deb8cc84" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pmxvt" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--pmxvt-" Sep 13 01:57:41.430694 containerd[1514]: 2025-09-13 01:57:41.152 [INFO][4407] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f9d693cc2b998867307db5c1d1ea6937a69cde0c611029fd18129070deb8cc84" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pmxvt" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--pmxvt-eth0" Sep 13 01:57:41.430694 containerd[1514]: 2025-09-13 01:57:41.333 [INFO][4422] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f9d693cc2b998867307db5c1d1ea6937a69cde0c611029fd18129070deb8cc84" HandleID="k8s-pod-network.f9d693cc2b998867307db5c1d1ea6937a69cde0c611029fd18129070deb8cc84" Workload="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--pmxvt-eth0" Sep 13 01:57:41.430694 containerd[1514]: 2025-09-13 01:57:41.333 [INFO][4422] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f9d693cc2b998867307db5c1d1ea6937a69cde0c611029fd18129070deb8cc84" HandleID="k8s-pod-network.f9d693cc2b998867307db5c1d1ea6937a69cde0c611029fd18129070deb8cc84" Workload="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--pmxvt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038fe70), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-vx6h6.gb1.brightbox.com", "pod":"coredns-7c65d6cfc9-pmxvt", "timestamp":"2025-09-13 01:57:41.333014804 +0000 UTC"}, Hostname:"srv-vx6h6.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 01:57:41.430694 containerd[1514]: 2025-09-13 01:57:41.333 [INFO][4422] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 01:57:41.430694 containerd[1514]: 2025-09-13 01:57:41.333 [INFO][4422] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:57:41.430694 containerd[1514]: 2025-09-13 01:57:41.333 [INFO][4422] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-vx6h6.gb1.brightbox.com' Sep 13 01:57:41.430694 containerd[1514]: 2025-09-13 01:57:41.346 [INFO][4422] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f9d693cc2b998867307db5c1d1ea6937a69cde0c611029fd18129070deb8cc84" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:41.430694 containerd[1514]: 2025-09-13 01:57:41.354 [INFO][4422] ipam/ipam.go 394: Looking up existing affinities for host host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:41.430694 containerd[1514]: 2025-09-13 01:57:41.361 [INFO][4422] ipam/ipam.go 511: Trying affinity for 192.168.100.192/26 host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:41.430694 containerd[1514]: 2025-09-13 01:57:41.365 [INFO][4422] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.192/26 host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:41.430694 containerd[1514]: 2025-09-13 01:57:41.370 [INFO][4422] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:41.430694 containerd[1514]: 2025-09-13 01:57:41.370 [INFO][4422] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.f9d693cc2b998867307db5c1d1ea6937a69cde0c611029fd18129070deb8cc84" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:41.430694 containerd[1514]: 2025-09-13 01:57:41.372 [INFO][4422] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f9d693cc2b998867307db5c1d1ea6937a69cde0c611029fd18129070deb8cc84 Sep 13 01:57:41.430694 containerd[1514]: 2025-09-13 01:57:41.378 [INFO][4422] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.100.192/26 
handle="k8s-pod-network.f9d693cc2b998867307db5c1d1ea6937a69cde0c611029fd18129070deb8cc84" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:41.430694 containerd[1514]: 2025-09-13 01:57:41.390 [INFO][4422] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.100.197/26] block=192.168.100.192/26 handle="k8s-pod-network.f9d693cc2b998867307db5c1d1ea6937a69cde0c611029fd18129070deb8cc84" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:41.430694 containerd[1514]: 2025-09-13 01:57:41.390 [INFO][4422] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.197/26] handle="k8s-pod-network.f9d693cc2b998867307db5c1d1ea6937a69cde0c611029fd18129070deb8cc84" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:41.430694 containerd[1514]: 2025-09-13 01:57:41.390 [INFO][4422] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:57:41.430694 containerd[1514]: 2025-09-13 01:57:41.390 [INFO][4422] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.197/26] IPv6=[] ContainerID="f9d693cc2b998867307db5c1d1ea6937a69cde0c611029fd18129070deb8cc84" HandleID="k8s-pod-network.f9d693cc2b998867307db5c1d1ea6937a69cde0c611029fd18129070deb8cc84" Workload="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--pmxvt-eth0" Sep 13 01:57:41.435479 containerd[1514]: 2025-09-13 01:57:41.393 [INFO][4407] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f9d693cc2b998867307db5c1d1ea6937a69cde0c611029fd18129070deb8cc84" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pmxvt" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--pmxvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--pmxvt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4497aabf-19b8-4111-a199-50d6361d00e3", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 
13, 1, 56, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vx6h6.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7c65d6cfc9-pmxvt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10c7e668088", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:57:41.435479 containerd[1514]: 2025-09-13 01:57:41.394 [INFO][4407] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.197/32] ContainerID="f9d693cc2b998867307db5c1d1ea6937a69cde0c611029fd18129070deb8cc84" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pmxvt" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--pmxvt-eth0" Sep 13 01:57:41.435479 containerd[1514]: 2025-09-13 01:57:41.395 [INFO][4407] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali10c7e668088 ContainerID="f9d693cc2b998867307db5c1d1ea6937a69cde0c611029fd18129070deb8cc84" 
Namespace="kube-system" Pod="coredns-7c65d6cfc9-pmxvt" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--pmxvt-eth0" Sep 13 01:57:41.435479 containerd[1514]: 2025-09-13 01:57:41.408 [INFO][4407] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f9d693cc2b998867307db5c1d1ea6937a69cde0c611029fd18129070deb8cc84" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pmxvt" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--pmxvt-eth0" Sep 13 01:57:41.435479 containerd[1514]: 2025-09-13 01:57:41.409 [INFO][4407] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f9d693cc2b998867307db5c1d1ea6937a69cde0c611029fd18129070deb8cc84" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pmxvt" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--pmxvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--pmxvt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4497aabf-19b8-4111-a199-50d6361d00e3", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 56, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vx6h6.gb1.brightbox.com", ContainerID:"f9d693cc2b998867307db5c1d1ea6937a69cde0c611029fd18129070deb8cc84", Pod:"coredns-7c65d6cfc9-pmxvt", Endpoint:"eth0", 
ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10c7e668088", MAC:"4e:2d:f0:c5:3d:48", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:57:41.435479 containerd[1514]: 2025-09-13 01:57:41.426 [INFO][4407] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f9d693cc2b998867307db5c1d1ea6937a69cde0c611029fd18129070deb8cc84" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pmxvt" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--pmxvt-eth0" Sep 13 01:57:41.478745 containerd[1514]: time="2025-09-13T01:57:41.477355985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:57:41.478745 containerd[1514]: time="2025-09-13T01:57:41.477487108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:57:41.478745 containerd[1514]: time="2025-09-13T01:57:41.477523300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:57:41.478745 containerd[1514]: time="2025-09-13T01:57:41.477729910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:57:41.518805 systemd[1]: Started cri-containerd-f9d693cc2b998867307db5c1d1ea6937a69cde0c611029fd18129070deb8cc84.scope - libcontainer container f9d693cc2b998867307db5c1d1ea6937a69cde0c611029fd18129070deb8cc84. Sep 13 01:57:41.526462 containerd[1514]: time="2025-09-13T01:57:41.525862222Z" level=info msg="StopPodSandbox for \"ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2\"" Sep 13 01:57:41.684998 containerd[1514]: time="2025-09-13T01:57:41.684899230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pmxvt,Uid:4497aabf-19b8-4111-a199-50d6361d00e3,Namespace:kube-system,Attempt:1,} returns sandbox id \"f9d693cc2b998867307db5c1d1ea6937a69cde0c611029fd18129070deb8cc84\"" Sep 13 01:57:41.692740 containerd[1514]: time="2025-09-13T01:57:41.692617050Z" level=info msg="CreateContainer within sandbox \"f9d693cc2b998867307db5c1d1ea6937a69cde0c611029fd18129070deb8cc84\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 01:57:41.723293 containerd[1514]: time="2025-09-13T01:57:41.723246135Z" level=info msg="CreateContainer within sandbox \"f9d693cc2b998867307db5c1d1ea6937a69cde0c611029fd18129070deb8cc84\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e9221cdd3bf77ace324c770104cb24586b98913fb8fedb0f662a26bab18c95c4\"" Sep 13 01:57:41.724941 containerd[1514]: time="2025-09-13T01:57:41.724690753Z" level=info msg="StartContainer for \"e9221cdd3bf77ace324c770104cb24586b98913fb8fedb0f662a26bab18c95c4\"" Sep 13 01:57:41.764447 systemd-networkd[1436]: cali91b94c25c80: Gained IPv6LL Sep 13 01:57:41.800598 systemd[1]: Started cri-containerd-e9221cdd3bf77ace324c770104cb24586b98913fb8fedb0f662a26bab18c95c4.scope - libcontainer container e9221cdd3bf77ace324c770104cb24586b98913fb8fedb0f662a26bab18c95c4. 
Sep 13 01:57:41.857596 containerd[1514]: 2025-09-13 01:57:41.704 [INFO][4489] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" Sep 13 01:57:41.857596 containerd[1514]: 2025-09-13 01:57:41.706 [INFO][4489] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" iface="eth0" netns="/var/run/netns/cni-61af4b3a-7ed6-2968-6c78-158bc62aa57a" Sep 13 01:57:41.857596 containerd[1514]: 2025-09-13 01:57:41.708 [INFO][4489] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" iface="eth0" netns="/var/run/netns/cni-61af4b3a-7ed6-2968-6c78-158bc62aa57a" Sep 13 01:57:41.857596 containerd[1514]: 2025-09-13 01:57:41.709 [INFO][4489] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" iface="eth0" netns="/var/run/netns/cni-61af4b3a-7ed6-2968-6c78-158bc62aa57a" Sep 13 01:57:41.857596 containerd[1514]: 2025-09-13 01:57:41.711 [INFO][4489] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" Sep 13 01:57:41.857596 containerd[1514]: 2025-09-13 01:57:41.711 [INFO][4489] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" Sep 13 01:57:41.857596 containerd[1514]: 2025-09-13 01:57:41.830 [INFO][4508] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" HandleID="k8s-pod-network.ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" Workload="srv--vx6h6.gb1.brightbox.com-k8s-csi--node--driver--z8zdz-eth0" Sep 13 01:57:41.857596 containerd[1514]: 2025-09-13 01:57:41.831 [INFO][4508] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:57:41.857596 containerd[1514]: 2025-09-13 01:57:41.831 [INFO][4508] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:57:41.857596 containerd[1514]: 2025-09-13 01:57:41.846 [WARNING][4508] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" HandleID="k8s-pod-network.ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" Workload="srv--vx6h6.gb1.brightbox.com-k8s-csi--node--driver--z8zdz-eth0" Sep 13 01:57:41.857596 containerd[1514]: 2025-09-13 01:57:41.846 [INFO][4508] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" HandleID="k8s-pod-network.ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" Workload="srv--vx6h6.gb1.brightbox.com-k8s-csi--node--driver--z8zdz-eth0" Sep 13 01:57:41.857596 containerd[1514]: 2025-09-13 01:57:41.852 [INFO][4508] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:57:41.857596 containerd[1514]: 2025-09-13 01:57:41.854 [INFO][4489] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" Sep 13 01:57:41.862884 containerd[1514]: time="2025-09-13T01:57:41.857545549Z" level=info msg="TearDown network for sandbox \"ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2\" successfully" Sep 13 01:57:41.862884 containerd[1514]: time="2025-09-13T01:57:41.858154166Z" level=info msg="StopPodSandbox for \"ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2\" returns successfully" Sep 13 01:57:41.862884 containerd[1514]: time="2025-09-13T01:57:41.860029199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z8zdz,Uid:96f62ef8-50ce-46be-8601-56da0c0ae5a1,Namespace:calico-system,Attempt:1,}" Sep 13 01:57:41.924371 containerd[1514]: time="2025-09-13T01:57:41.924021424Z" level=info msg="StartContainer for \"e9221cdd3bf77ace324c770104cb24586b98913fb8fedb0f662a26bab18c95c4\" returns successfully" Sep 13 01:57:42.084377 systemd-networkd[1436]: cali854fbd33f88: Gained IPv6LL Sep 13 01:57:42.160211 systemd-networkd[1436]: cali6a015f84ed4: Link UP Sep 13 01:57:42.161167 systemd-networkd[1436]: cali6a015f84ed4: Gained carrier Sep 13 01:57:42.193600 containerd[1514]: 2025-09-13 01:57:41.955 [INFO][4545] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 13 01:57:42.193600 containerd[1514]: 2025-09-13 01:57:41.996 [INFO][4545] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--vx6h6.gb1.brightbox.com-k8s-csi--node--driver--z8zdz-eth0 csi-node-driver- calico-system 96f62ef8-50ce-46be-8601-56da0c0ae5a1 980 0 2025-09-13 01:57:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-vx6h6.gb1.brightbox.com 
csi-node-driver-z8zdz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6a015f84ed4 [] [] }} ContainerID="7ebe4886e4a95fcc914dea9ce84c3c3bf651874f82803b76ba98b0fc78bc6739" Namespace="calico-system" Pod="csi-node-driver-z8zdz" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-csi--node--driver--z8zdz-" Sep 13 01:57:42.193600 containerd[1514]: 2025-09-13 01:57:41.997 [INFO][4545] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7ebe4886e4a95fcc914dea9ce84c3c3bf651874f82803b76ba98b0fc78bc6739" Namespace="calico-system" Pod="csi-node-driver-z8zdz" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-csi--node--driver--z8zdz-eth0" Sep 13 01:57:42.193600 containerd[1514]: 2025-09-13 01:57:42.075 [INFO][4563] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ebe4886e4a95fcc914dea9ce84c3c3bf651874f82803b76ba98b0fc78bc6739" HandleID="k8s-pod-network.7ebe4886e4a95fcc914dea9ce84c3c3bf651874f82803b76ba98b0fc78bc6739" Workload="srv--vx6h6.gb1.brightbox.com-k8s-csi--node--driver--z8zdz-eth0" Sep 13 01:57:42.193600 containerd[1514]: 2025-09-13 01:57:42.076 [INFO][4563] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7ebe4886e4a95fcc914dea9ce84c3c3bf651874f82803b76ba98b0fc78bc6739" HandleID="k8s-pod-network.7ebe4886e4a95fcc914dea9ce84c3c3bf651874f82803b76ba98b0fc78bc6739" Workload="srv--vx6h6.gb1.brightbox.com-k8s-csi--node--driver--z8zdz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fe40), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-vx6h6.gb1.brightbox.com", "pod":"csi-node-driver-z8zdz", "timestamp":"2025-09-13 01:57:42.075925494 +0000 UTC"}, Hostname:"srv-vx6h6.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 01:57:42.193600 containerd[1514]: 2025-09-13 
01:57:42.076 [INFO][4563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:57:42.193600 containerd[1514]: 2025-09-13 01:57:42.076 [INFO][4563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:57:42.193600 containerd[1514]: 2025-09-13 01:57:42.076 [INFO][4563] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-vx6h6.gb1.brightbox.com' Sep 13 01:57:42.193600 containerd[1514]: 2025-09-13 01:57:42.105 [INFO][4563] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7ebe4886e4a95fcc914dea9ce84c3c3bf651874f82803b76ba98b0fc78bc6739" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:42.193600 containerd[1514]: 2025-09-13 01:57:42.111 [INFO][4563] ipam/ipam.go 394: Looking up existing affinities for host host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:42.193600 containerd[1514]: 2025-09-13 01:57:42.118 [INFO][4563] ipam/ipam.go 511: Trying affinity for 192.168.100.192/26 host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:42.193600 containerd[1514]: 2025-09-13 01:57:42.120 [INFO][4563] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.192/26 host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:42.193600 containerd[1514]: 2025-09-13 01:57:42.123 [INFO][4563] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:42.193600 containerd[1514]: 2025-09-13 01:57:42.123 [INFO][4563] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.7ebe4886e4a95fcc914dea9ce84c3c3bf651874f82803b76ba98b0fc78bc6739" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:42.193600 containerd[1514]: 2025-09-13 01:57:42.126 [INFO][4563] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7ebe4886e4a95fcc914dea9ce84c3c3bf651874f82803b76ba98b0fc78bc6739 Sep 13 01:57:42.193600 containerd[1514]: 2025-09-13 01:57:42.133 [INFO][4563] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.100.192/26 handle="k8s-pod-network.7ebe4886e4a95fcc914dea9ce84c3c3bf651874f82803b76ba98b0fc78bc6739" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:42.193600 containerd[1514]: 2025-09-13 01:57:42.147 [INFO][4563] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.100.198/26] block=192.168.100.192/26 handle="k8s-pod-network.7ebe4886e4a95fcc914dea9ce84c3c3bf651874f82803b76ba98b0fc78bc6739" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:42.193600 containerd[1514]: 2025-09-13 01:57:42.148 [INFO][4563] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.198/26] handle="k8s-pod-network.7ebe4886e4a95fcc914dea9ce84c3c3bf651874f82803b76ba98b0fc78bc6739" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:42.193600 containerd[1514]: 2025-09-13 01:57:42.148 [INFO][4563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:57:42.193600 containerd[1514]: 2025-09-13 01:57:42.148 [INFO][4563] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.198/26] IPv6=[] ContainerID="7ebe4886e4a95fcc914dea9ce84c3c3bf651874f82803b76ba98b0fc78bc6739" HandleID="k8s-pod-network.7ebe4886e4a95fcc914dea9ce84c3c3bf651874f82803b76ba98b0fc78bc6739" Workload="srv--vx6h6.gb1.brightbox.com-k8s-csi--node--driver--z8zdz-eth0" Sep 13 01:57:42.197210 containerd[1514]: 2025-09-13 01:57:42.151 [INFO][4545] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7ebe4886e4a95fcc914dea9ce84c3c3bf651874f82803b76ba98b0fc78bc6739" Namespace="calico-system" Pod="csi-node-driver-z8zdz" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-csi--node--driver--z8zdz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vx6h6.gb1.brightbox.com-k8s-csi--node--driver--z8zdz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"96f62ef8-50ce-46be-8601-56da0c0ae5a1", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, 
time.September, 13, 1, 57, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vx6h6.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-z8zdz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.100.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6a015f84ed4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:57:42.197210 containerd[1514]: 2025-09-13 01:57:42.152 [INFO][4545] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.198/32] ContainerID="7ebe4886e4a95fcc914dea9ce84c3c3bf651874f82803b76ba98b0fc78bc6739" Namespace="calico-system" Pod="csi-node-driver-z8zdz" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-csi--node--driver--z8zdz-eth0" Sep 13 01:57:42.197210 containerd[1514]: 2025-09-13 01:57:42.152 [INFO][4545] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a015f84ed4 ContainerID="7ebe4886e4a95fcc914dea9ce84c3c3bf651874f82803b76ba98b0fc78bc6739" Namespace="calico-system" Pod="csi-node-driver-z8zdz" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-csi--node--driver--z8zdz-eth0" Sep 13 01:57:42.197210 containerd[1514]: 2025-09-13 01:57:42.162 [INFO][4545] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="7ebe4886e4a95fcc914dea9ce84c3c3bf651874f82803b76ba98b0fc78bc6739" Namespace="calico-system" Pod="csi-node-driver-z8zdz" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-csi--node--driver--z8zdz-eth0" Sep 13 01:57:42.197210 containerd[1514]: 2025-09-13 01:57:42.163 [INFO][4545] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7ebe4886e4a95fcc914dea9ce84c3c3bf651874f82803b76ba98b0fc78bc6739" Namespace="calico-system" Pod="csi-node-driver-z8zdz" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-csi--node--driver--z8zdz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vx6h6.gb1.brightbox.com-k8s-csi--node--driver--z8zdz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"96f62ef8-50ce-46be-8601-56da0c0ae5a1", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 57, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vx6h6.gb1.brightbox.com", ContainerID:"7ebe4886e4a95fcc914dea9ce84c3c3bf651874f82803b76ba98b0fc78bc6739", Pod:"csi-node-driver-z8zdz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.100.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6a015f84ed4", MAC:"c6:86:36:0b:90:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:57:42.197210 containerd[1514]: 2025-09-13 01:57:42.183 [INFO][4545] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7ebe4886e4a95fcc914dea9ce84c3c3bf651874f82803b76ba98b0fc78bc6739" Namespace="calico-system" Pod="csi-node-driver-z8zdz" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-csi--node--driver--z8zdz-eth0" Sep 13 01:57:42.212998 systemd-networkd[1436]: caliecebe858f31: Gained IPv6LL Sep 13 01:57:42.236570 containerd[1514]: time="2025-09-13T01:57:42.236240189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:57:42.236570 containerd[1514]: time="2025-09-13T01:57:42.236320756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:57:42.236570 containerd[1514]: time="2025-09-13T01:57:42.236337446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:57:42.236570 containerd[1514]: time="2025-09-13T01:57:42.236467768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:57:42.271414 systemd[1]: Started cri-containerd-7ebe4886e4a95fcc914dea9ce84c3c3bf651874f82803b76ba98b0fc78bc6739.scope - libcontainer container 7ebe4886e4a95fcc914dea9ce84c3c3bf651874f82803b76ba98b0fc78bc6739. 
Sep 13 01:57:42.342559 containerd[1514]: time="2025-09-13T01:57:42.341779749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z8zdz,Uid:96f62ef8-50ce-46be-8601-56da0c0ae5a1,Namespace:calico-system,Attempt:1,} returns sandbox id \"7ebe4886e4a95fcc914dea9ce84c3c3bf651874f82803b76ba98b0fc78bc6739\"" Sep 13 01:57:42.532438 systemd-networkd[1436]: cali10c7e668088: Gained IPv6LL Sep 13 01:57:42.556285 kernel: bpftool[4655]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 13 01:57:42.683019 systemd[1]: run-containerd-runc-k8s.io-e9221cdd3bf77ace324c770104cb24586b98913fb8fedb0f662a26bab18c95c4-runc.j9aBbq.mount: Deactivated successfully. Sep 13 01:57:42.683350 systemd[1]: run-netns-cni\x2d61af4b3a\x2d7ed6\x2d2968\x2d6c78\x2d158bc62aa57a.mount: Deactivated successfully. Sep 13 01:57:42.938957 kubelet[2683]: I0913 01:57:42.935207 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-pmxvt" podStartSLOduration=44.935165052 podStartE2EDuration="44.935165052s" podCreationTimestamp="2025-09-13 01:56:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:57:42.934744855 +0000 UTC m=+49.620155915" watchObservedRunningTime="2025-09-13 01:57:42.935165052 +0000 UTC m=+49.620576099" Sep 13 01:57:42.955048 containerd[1514]: time="2025-09-13T01:57:42.954806450Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:57:42.957371 containerd[1514]: time="2025-09-13T01:57:42.955956616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 13 01:57:42.958875 containerd[1514]: time="2025-09-13T01:57:42.958144112Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:57:42.965385 containerd[1514]: time="2025-09-13T01:57:42.964953804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:57:42.967958 containerd[1514]: time="2025-09-13T01:57:42.967465156Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 2.237819442s" Sep 13 01:57:42.967958 containerd[1514]: time="2025-09-13T01:57:42.967520911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 13 01:57:42.970071 containerd[1514]: time="2025-09-13T01:57:42.969632313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 01:57:42.972906 containerd[1514]: time="2025-09-13T01:57:42.972866947Z" level=info msg="CreateContainer within sandbox \"2518bfe11287a4caed7cc9bf6608e982aef6087151f62867b2e5839f93658573\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 13 01:57:42.995398 containerd[1514]: time="2025-09-13T01:57:42.995335144Z" level=info msg="CreateContainer within sandbox \"2518bfe11287a4caed7cc9bf6608e982aef6087151f62867b2e5839f93658573\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"e20e94363748332d29c095e414125fc785e7aa4ef7f6f0ec98b8b74f198d3acd\"" Sep 13 01:57:43.000984 containerd[1514]: time="2025-09-13T01:57:42.998498783Z" level=info msg="StartContainer for \"e20e94363748332d29c095e414125fc785e7aa4ef7f6f0ec98b8b74f198d3acd\"" Sep 13 01:57:43.096406 
systemd[1]: Started cri-containerd-e20e94363748332d29c095e414125fc785e7aa4ef7f6f0ec98b8b74f198d3acd.scope - libcontainer container e20e94363748332d29c095e414125fc785e7aa4ef7f6f0ec98b8b74f198d3acd. Sep 13 01:57:43.282240 containerd[1514]: time="2025-09-13T01:57:43.282084579Z" level=info msg="StartContainer for \"e20e94363748332d29c095e414125fc785e7aa4ef7f6f0ec98b8b74f198d3acd\" returns successfully" Sep 13 01:57:43.426509 systemd-networkd[1436]: vxlan.calico: Link UP Sep 13 01:57:43.429307 systemd-networkd[1436]: vxlan.calico: Gained carrier Sep 13 01:57:43.492623 systemd-networkd[1436]: cali6a015f84ed4: Gained IPv6LL Sep 13 01:57:43.532536 containerd[1514]: time="2025-09-13T01:57:43.532400409Z" level=info msg="StopPodSandbox for \"d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226\"" Sep 13 01:57:43.715313 containerd[1514]: 2025-09-13 01:57:43.624 [INFO][4752] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" Sep 13 01:57:43.715313 containerd[1514]: 2025-09-13 01:57:43.625 [INFO][4752] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" iface="eth0" netns="/var/run/netns/cni-c4d69d38-e5ed-7714-ac66-09a5c764dc25" Sep 13 01:57:43.715313 containerd[1514]: 2025-09-13 01:57:43.630 [INFO][4752] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" iface="eth0" netns="/var/run/netns/cni-c4d69d38-e5ed-7714-ac66-09a5c764dc25" Sep 13 01:57:43.715313 containerd[1514]: 2025-09-13 01:57:43.630 [INFO][4752] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" iface="eth0" netns="/var/run/netns/cni-c4d69d38-e5ed-7714-ac66-09a5c764dc25" Sep 13 01:57:43.715313 containerd[1514]: 2025-09-13 01:57:43.630 [INFO][4752] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" Sep 13 01:57:43.715313 containerd[1514]: 2025-09-13 01:57:43.630 [INFO][4752] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" Sep 13 01:57:43.715313 containerd[1514]: 2025-09-13 01:57:43.693 [INFO][4760] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" HandleID="k8s-pod-network.d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--jhdbm-eth0" Sep 13 01:57:43.715313 containerd[1514]: 2025-09-13 01:57:43.694 [INFO][4760] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:57:43.715313 containerd[1514]: 2025-09-13 01:57:43.694 [INFO][4760] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:57:43.715313 containerd[1514]: 2025-09-13 01:57:43.704 [WARNING][4760] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" HandleID="k8s-pod-network.d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--jhdbm-eth0" Sep 13 01:57:43.715313 containerd[1514]: 2025-09-13 01:57:43.704 [INFO][4760] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" HandleID="k8s-pod-network.d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--jhdbm-eth0" Sep 13 01:57:43.715313 containerd[1514]: 2025-09-13 01:57:43.707 [INFO][4760] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:57:43.715313 containerd[1514]: 2025-09-13 01:57:43.709 [INFO][4752] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" Sep 13 01:57:43.715313 containerd[1514]: time="2025-09-13T01:57:43.712014676Z" level=info msg="TearDown network for sandbox \"d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226\" successfully" Sep 13 01:57:43.715313 containerd[1514]: time="2025-09-13T01:57:43.712048166Z" level=info msg="StopPodSandbox for \"d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226\" returns successfully" Sep 13 01:57:43.715313 containerd[1514]: time="2025-09-13T01:57:43.712887258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-845b7845d-jhdbm,Uid:aeca7b13-a3ef-47c3-b05a-b2c1fcfb4e19,Namespace:calico-apiserver,Attempt:1,}" Sep 13 01:57:43.716894 systemd[1]: run-netns-cni\x2dc4d69d38\x2de5ed\x2d7714\x2dac66\x2d09a5c764dc25.mount: Deactivated successfully. 
Sep 13 01:57:44.010712 systemd-networkd[1436]: cali07b7c3edc3f: Link UP Sep 13 01:57:44.011049 systemd-networkd[1436]: cali07b7c3edc3f: Gained carrier Sep 13 01:57:44.058510 containerd[1514]: 2025-09-13 01:57:43.855 [INFO][4767] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--jhdbm-eth0 calico-apiserver-845b7845d- calico-apiserver aeca7b13-a3ef-47c3-b05a-b2c1fcfb4e19 1006 0 2025-09-13 01:57:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:845b7845d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-vx6h6.gb1.brightbox.com calico-apiserver-845b7845d-jhdbm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali07b7c3edc3f [] [] }} ContainerID="b613ef11c9174d24460967e1c6ba22d5635a584f7e5560fa98e1c09ca6e058a9" Namespace="calico-apiserver" Pod="calico-apiserver-845b7845d-jhdbm" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--jhdbm-" Sep 13 01:57:44.058510 containerd[1514]: 2025-09-13 01:57:43.857 [INFO][4767] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b613ef11c9174d24460967e1c6ba22d5635a584f7e5560fa98e1c09ca6e058a9" Namespace="calico-apiserver" Pod="calico-apiserver-845b7845d-jhdbm" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--jhdbm-eth0" Sep 13 01:57:44.058510 containerd[1514]: 2025-09-13 01:57:43.924 [INFO][4781] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b613ef11c9174d24460967e1c6ba22d5635a584f7e5560fa98e1c09ca6e058a9" HandleID="k8s-pod-network.b613ef11c9174d24460967e1c6ba22d5635a584f7e5560fa98e1c09ca6e058a9" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--jhdbm-eth0" Sep 13 
01:57:44.058510 containerd[1514]: 2025-09-13 01:57:43.925 [INFO][4781] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b613ef11c9174d24460967e1c6ba22d5635a584f7e5560fa98e1c09ca6e058a9" HandleID="k8s-pod-network.b613ef11c9174d24460967e1c6ba22d5635a584f7e5560fa98e1c09ca6e058a9" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--jhdbm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fe20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-vx6h6.gb1.brightbox.com", "pod":"calico-apiserver-845b7845d-jhdbm", "timestamp":"2025-09-13 01:57:43.924622619 +0000 UTC"}, Hostname:"srv-vx6h6.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 01:57:44.058510 containerd[1514]: 2025-09-13 01:57:43.925 [INFO][4781] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:57:44.058510 containerd[1514]: 2025-09-13 01:57:43.925 [INFO][4781] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 01:57:44.058510 containerd[1514]: 2025-09-13 01:57:43.925 [INFO][4781] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-vx6h6.gb1.brightbox.com' Sep 13 01:57:44.058510 containerd[1514]: 2025-09-13 01:57:43.936 [INFO][4781] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b613ef11c9174d24460967e1c6ba22d5635a584f7e5560fa98e1c09ca6e058a9" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:44.058510 containerd[1514]: 2025-09-13 01:57:43.947 [INFO][4781] ipam/ipam.go 394: Looking up existing affinities for host host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:44.058510 containerd[1514]: 2025-09-13 01:57:43.954 [INFO][4781] ipam/ipam.go 511: Trying affinity for 192.168.100.192/26 host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:44.058510 containerd[1514]: 2025-09-13 01:57:43.958 [INFO][4781] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.192/26 host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:44.058510 containerd[1514]: 2025-09-13 01:57:43.962 [INFO][4781] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:44.058510 containerd[1514]: 2025-09-13 01:57:43.964 [INFO][4781] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.b613ef11c9174d24460967e1c6ba22d5635a584f7e5560fa98e1c09ca6e058a9" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:44.058510 containerd[1514]: 2025-09-13 01:57:43.970 [INFO][4781] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b613ef11c9174d24460967e1c6ba22d5635a584f7e5560fa98e1c09ca6e058a9 Sep 13 01:57:44.058510 containerd[1514]: 2025-09-13 01:57:43.980 [INFO][4781] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.b613ef11c9174d24460967e1c6ba22d5635a584f7e5560fa98e1c09ca6e058a9" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:44.058510 containerd[1514]: 2025-09-13 01:57:43.995 
[INFO][4781] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.100.199/26] block=192.168.100.192/26 handle="k8s-pod-network.b613ef11c9174d24460967e1c6ba22d5635a584f7e5560fa98e1c09ca6e058a9" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:44.058510 containerd[1514]: 2025-09-13 01:57:43.996 [INFO][4781] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.199/26] handle="k8s-pod-network.b613ef11c9174d24460967e1c6ba22d5635a584f7e5560fa98e1c09ca6e058a9" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:44.058510 containerd[1514]: 2025-09-13 01:57:43.996 [INFO][4781] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:57:44.058510 containerd[1514]: 2025-09-13 01:57:43.996 [INFO][4781] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.199/26] IPv6=[] ContainerID="b613ef11c9174d24460967e1c6ba22d5635a584f7e5560fa98e1c09ca6e058a9" HandleID="k8s-pod-network.b613ef11c9174d24460967e1c6ba22d5635a584f7e5560fa98e1c09ca6e058a9" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--jhdbm-eth0" Sep 13 01:57:44.061349 containerd[1514]: 2025-09-13 01:57:44.001 [INFO][4767] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b613ef11c9174d24460967e1c6ba22d5635a584f7e5560fa98e1c09ca6e058a9" Namespace="calico-apiserver" Pod="calico-apiserver-845b7845d-jhdbm" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--jhdbm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--jhdbm-eth0", GenerateName:"calico-apiserver-845b7845d-", Namespace:"calico-apiserver", SelfLink:"", UID:"aeca7b13-a3ef-47c3-b05a-b2c1fcfb4e19", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 57, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"845b7845d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vx6h6.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-845b7845d-jhdbm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07b7c3edc3f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:57:44.061349 containerd[1514]: 2025-09-13 01:57:44.002 [INFO][4767] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.199/32] ContainerID="b613ef11c9174d24460967e1c6ba22d5635a584f7e5560fa98e1c09ca6e058a9" Namespace="calico-apiserver" Pod="calico-apiserver-845b7845d-jhdbm" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--jhdbm-eth0" Sep 13 01:57:44.061349 containerd[1514]: 2025-09-13 01:57:44.002 [INFO][4767] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali07b7c3edc3f ContainerID="b613ef11c9174d24460967e1c6ba22d5635a584f7e5560fa98e1c09ca6e058a9" Namespace="calico-apiserver" Pod="calico-apiserver-845b7845d-jhdbm" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--jhdbm-eth0" Sep 13 01:57:44.061349 containerd[1514]: 2025-09-13 01:57:44.007 [INFO][4767] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b613ef11c9174d24460967e1c6ba22d5635a584f7e5560fa98e1c09ca6e058a9" Namespace="calico-apiserver" 
Pod="calico-apiserver-845b7845d-jhdbm" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--jhdbm-eth0" Sep 13 01:57:44.061349 containerd[1514]: 2025-09-13 01:57:44.013 [INFO][4767] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b613ef11c9174d24460967e1c6ba22d5635a584f7e5560fa98e1c09ca6e058a9" Namespace="calico-apiserver" Pod="calico-apiserver-845b7845d-jhdbm" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--jhdbm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--jhdbm-eth0", GenerateName:"calico-apiserver-845b7845d-", Namespace:"calico-apiserver", SelfLink:"", UID:"aeca7b13-a3ef-47c3-b05a-b2c1fcfb4e19", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 57, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"845b7845d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vx6h6.gb1.brightbox.com", ContainerID:"b613ef11c9174d24460967e1c6ba22d5635a584f7e5560fa98e1c09ca6e058a9", Pod:"calico-apiserver-845b7845d-jhdbm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali07b7c3edc3f", MAC:"8a:2b:9d:d3:77:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:57:44.061349 containerd[1514]: 2025-09-13 01:57:44.051 [INFO][4767] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b613ef11c9174d24460967e1c6ba22d5635a584f7e5560fa98e1c09ca6e058a9" Namespace="calico-apiserver" Pod="calico-apiserver-845b7845d-jhdbm" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--jhdbm-eth0" Sep 13 01:57:44.124434 containerd[1514]: time="2025-09-13T01:57:44.124244217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:57:44.124599 containerd[1514]: time="2025-09-13T01:57:44.124471703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:57:44.124679 containerd[1514]: time="2025-09-13T01:57:44.124551849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:57:44.125732 containerd[1514]: time="2025-09-13T01:57:44.124940707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:57:44.202399 systemd[1]: Started cri-containerd-b613ef11c9174d24460967e1c6ba22d5635a584f7e5560fa98e1c09ca6e058a9.scope - libcontainer container b613ef11c9174d24460967e1c6ba22d5635a584f7e5560fa98e1c09ca6e058a9. 
Sep 13 01:57:44.280364 containerd[1514]: time="2025-09-13T01:57:44.280031603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-845b7845d-jhdbm,Uid:aeca7b13-a3ef-47c3-b05a-b2c1fcfb4e19,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b613ef11c9174d24460967e1c6ba22d5635a584f7e5560fa98e1c09ca6e058a9\"" Sep 13 01:57:44.521046 containerd[1514]: time="2025-09-13T01:57:44.520663135Z" level=info msg="StopPodSandbox for \"16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa\"" Sep 13 01:57:44.712963 containerd[1514]: 2025-09-13 01:57:44.621 [INFO][4887] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" Sep 13 01:57:44.712963 containerd[1514]: 2025-09-13 01:57:44.622 [INFO][4887] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" iface="eth0" netns="/var/run/netns/cni-58997184-7ac6-ab05-e503-115cd6d99e24" Sep 13 01:57:44.712963 containerd[1514]: 2025-09-13 01:57:44.623 [INFO][4887] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" iface="eth0" netns="/var/run/netns/cni-58997184-7ac6-ab05-e503-115cd6d99e24" Sep 13 01:57:44.712963 containerd[1514]: 2025-09-13 01:57:44.623 [INFO][4887] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" iface="eth0" netns="/var/run/netns/cni-58997184-7ac6-ab05-e503-115cd6d99e24" Sep 13 01:57:44.712963 containerd[1514]: 2025-09-13 01:57:44.623 [INFO][4887] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" Sep 13 01:57:44.712963 containerd[1514]: 2025-09-13 01:57:44.624 [INFO][4887] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" Sep 13 01:57:44.712963 containerd[1514]: 2025-09-13 01:57:44.688 [INFO][4895] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" HandleID="k8s-pod-network.16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--f9ljs-eth0" Sep 13 01:57:44.712963 containerd[1514]: 2025-09-13 01:57:44.688 [INFO][4895] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:57:44.712963 containerd[1514]: 2025-09-13 01:57:44.688 [INFO][4895] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:57:44.712963 containerd[1514]: 2025-09-13 01:57:44.706 [WARNING][4895] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" HandleID="k8s-pod-network.16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--f9ljs-eth0" Sep 13 01:57:44.712963 containerd[1514]: 2025-09-13 01:57:44.706 [INFO][4895] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" HandleID="k8s-pod-network.16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--f9ljs-eth0" Sep 13 01:57:44.712963 containerd[1514]: 2025-09-13 01:57:44.707 [INFO][4895] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:57:44.712963 containerd[1514]: 2025-09-13 01:57:44.709 [INFO][4887] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" Sep 13 01:57:44.717251 containerd[1514]: time="2025-09-13T01:57:44.716280154Z" level=info msg="TearDown network for sandbox \"16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa\" successfully" Sep 13 01:57:44.717251 containerd[1514]: time="2025-09-13T01:57:44.716331921Z" level=info msg="StopPodSandbox for \"16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa\" returns successfully" Sep 13 01:57:44.719365 systemd[1]: run-netns-cni\x2d58997184\x2d7ac6\x2dab05\x2de503\x2d115cd6d99e24.mount: Deactivated successfully. 
Sep 13 01:57:44.720168 containerd[1514]: time="2025-09-13T01:57:44.719390087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-845b7845d-f9ljs,Uid:beece634-0084-4df9-841c-840e1e607f03,Namespace:calico-apiserver,Attempt:1,}" Sep 13 01:57:44.974271 systemd-networkd[1436]: cali615b196e54f: Link UP Sep 13 01:57:44.976367 systemd-networkd[1436]: cali615b196e54f: Gained carrier Sep 13 01:57:45.012709 containerd[1514]: 2025-09-13 01:57:44.802 [INFO][4902] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--f9ljs-eth0 calico-apiserver-845b7845d- calico-apiserver beece634-0084-4df9-841c-840e1e607f03 1014 0 2025-09-13 01:57:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:845b7845d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-vx6h6.gb1.brightbox.com calico-apiserver-845b7845d-f9ljs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali615b196e54f [] [] }} ContainerID="11d8a6fdfed05a98d6336ca630e385a458e8f645d43a4cd6cfc7f910faf5dfd6" Namespace="calico-apiserver" Pod="calico-apiserver-845b7845d-f9ljs" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--f9ljs-" Sep 13 01:57:45.012709 containerd[1514]: 2025-09-13 01:57:44.803 [INFO][4902] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="11d8a6fdfed05a98d6336ca630e385a458e8f645d43a4cd6cfc7f910faf5dfd6" Namespace="calico-apiserver" Pod="calico-apiserver-845b7845d-f9ljs" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--f9ljs-eth0" Sep 13 01:57:45.012709 containerd[1514]: 2025-09-13 01:57:44.877 [INFO][4915] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="11d8a6fdfed05a98d6336ca630e385a458e8f645d43a4cd6cfc7f910faf5dfd6" HandleID="k8s-pod-network.11d8a6fdfed05a98d6336ca630e385a458e8f645d43a4cd6cfc7f910faf5dfd6" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--f9ljs-eth0" Sep 13 01:57:45.012709 containerd[1514]: 2025-09-13 01:57:44.881 [INFO][4915] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="11d8a6fdfed05a98d6336ca630e385a458e8f645d43a4cd6cfc7f910faf5dfd6" HandleID="k8s-pod-network.11d8a6fdfed05a98d6336ca630e385a458e8f645d43a4cd6cfc7f910faf5dfd6" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--f9ljs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-vx6h6.gb1.brightbox.com", "pod":"calico-apiserver-845b7845d-f9ljs", "timestamp":"2025-09-13 01:57:44.87730272 +0000 UTC"}, Hostname:"srv-vx6h6.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 01:57:45.012709 containerd[1514]: 2025-09-13 01:57:44.888 [INFO][4915] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:57:45.012709 containerd[1514]: 2025-09-13 01:57:44.888 [INFO][4915] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 01:57:45.012709 containerd[1514]: 2025-09-13 01:57:44.888 [INFO][4915] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-vx6h6.gb1.brightbox.com' Sep 13 01:57:45.012709 containerd[1514]: 2025-09-13 01:57:44.901 [INFO][4915] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.11d8a6fdfed05a98d6336ca630e385a458e8f645d43a4cd6cfc7f910faf5dfd6" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:45.012709 containerd[1514]: 2025-09-13 01:57:44.911 [INFO][4915] ipam/ipam.go 394: Looking up existing affinities for host host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:45.012709 containerd[1514]: 2025-09-13 01:57:44.921 [INFO][4915] ipam/ipam.go 511: Trying affinity for 192.168.100.192/26 host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:45.012709 containerd[1514]: 2025-09-13 01:57:44.926 [INFO][4915] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.192/26 host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:45.012709 containerd[1514]: 2025-09-13 01:57:44.934 [INFO][4915] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:45.012709 containerd[1514]: 2025-09-13 01:57:44.935 [INFO][4915] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.11d8a6fdfed05a98d6336ca630e385a458e8f645d43a4cd6cfc7f910faf5dfd6" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:45.012709 containerd[1514]: 2025-09-13 01:57:44.939 [INFO][4915] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.11d8a6fdfed05a98d6336ca630e385a458e8f645d43a4cd6cfc7f910faf5dfd6 Sep 13 01:57:45.012709 containerd[1514]: 2025-09-13 01:57:44.952 [INFO][4915] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.11d8a6fdfed05a98d6336ca630e385a458e8f645d43a4cd6cfc7f910faf5dfd6" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:45.012709 containerd[1514]: 2025-09-13 01:57:44.961 
[INFO][4915] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.100.200/26] block=192.168.100.192/26 handle="k8s-pod-network.11d8a6fdfed05a98d6336ca630e385a458e8f645d43a4cd6cfc7f910faf5dfd6" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:45.012709 containerd[1514]: 2025-09-13 01:57:44.962 [INFO][4915] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.200/26] handle="k8s-pod-network.11d8a6fdfed05a98d6336ca630e385a458e8f645d43a4cd6cfc7f910faf5dfd6" host="srv-vx6h6.gb1.brightbox.com" Sep 13 01:57:45.012709 containerd[1514]: 2025-09-13 01:57:44.962 [INFO][4915] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:57:45.012709 containerd[1514]: 2025-09-13 01:57:44.962 [INFO][4915] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.200/26] IPv6=[] ContainerID="11d8a6fdfed05a98d6336ca630e385a458e8f645d43a4cd6cfc7f910faf5dfd6" HandleID="k8s-pod-network.11d8a6fdfed05a98d6336ca630e385a458e8f645d43a4cd6cfc7f910faf5dfd6" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--f9ljs-eth0" Sep 13 01:57:45.014530 containerd[1514]: 2025-09-13 01:57:44.968 [INFO][4902] cni-plugin/k8s.go 418: Populated endpoint ContainerID="11d8a6fdfed05a98d6336ca630e385a458e8f645d43a4cd6cfc7f910faf5dfd6" Namespace="calico-apiserver" Pod="calico-apiserver-845b7845d-f9ljs" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--f9ljs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--f9ljs-eth0", GenerateName:"calico-apiserver-845b7845d-", Namespace:"calico-apiserver", SelfLink:"", UID:"beece634-0084-4df9-841c-840e1e607f03", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 57, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"845b7845d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vx6h6.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-845b7845d-f9ljs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali615b196e54f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:57:45.014530 containerd[1514]: 2025-09-13 01:57:44.968 [INFO][4902] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.200/32] ContainerID="11d8a6fdfed05a98d6336ca630e385a458e8f645d43a4cd6cfc7f910faf5dfd6" Namespace="calico-apiserver" Pod="calico-apiserver-845b7845d-f9ljs" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--f9ljs-eth0" Sep 13 01:57:45.014530 containerd[1514]: 2025-09-13 01:57:44.968 [INFO][4902] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali615b196e54f ContainerID="11d8a6fdfed05a98d6336ca630e385a458e8f645d43a4cd6cfc7f910faf5dfd6" Namespace="calico-apiserver" Pod="calico-apiserver-845b7845d-f9ljs" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--f9ljs-eth0" Sep 13 01:57:45.014530 containerd[1514]: 2025-09-13 01:57:44.980 [INFO][4902] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="11d8a6fdfed05a98d6336ca630e385a458e8f645d43a4cd6cfc7f910faf5dfd6" Namespace="calico-apiserver" 
Pod="calico-apiserver-845b7845d-f9ljs" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--f9ljs-eth0" Sep 13 01:57:45.014530 containerd[1514]: 2025-09-13 01:57:44.980 [INFO][4902] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="11d8a6fdfed05a98d6336ca630e385a458e8f645d43a4cd6cfc7f910faf5dfd6" Namespace="calico-apiserver" Pod="calico-apiserver-845b7845d-f9ljs" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--f9ljs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--f9ljs-eth0", GenerateName:"calico-apiserver-845b7845d-", Namespace:"calico-apiserver", SelfLink:"", UID:"beece634-0084-4df9-841c-840e1e607f03", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 57, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"845b7845d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vx6h6.gb1.brightbox.com", ContainerID:"11d8a6fdfed05a98d6336ca630e385a458e8f645d43a4cd6cfc7f910faf5dfd6", Pod:"calico-apiserver-845b7845d-f9ljs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali615b196e54f", MAC:"b2:6f:19:6a:d5:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:57:45.014530 containerd[1514]: 2025-09-13 01:57:45.003 [INFO][4902] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="11d8a6fdfed05a98d6336ca630e385a458e8f645d43a4cd6cfc7f910faf5dfd6" Namespace="calico-apiserver" Pod="calico-apiserver-845b7845d-f9ljs" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--f9ljs-eth0" Sep 13 01:57:45.064235 containerd[1514]: time="2025-09-13T01:57:45.062651895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:57:45.064235 containerd[1514]: time="2025-09-13T01:57:45.062772452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:57:45.064235 containerd[1514]: time="2025-09-13T01:57:45.062791815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:57:45.064235 containerd[1514]: time="2025-09-13T01:57:45.062962851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:57:45.092510 systemd-networkd[1436]: vxlan.calico: Gained IPv6LL Sep 13 01:57:45.119530 systemd[1]: Started cri-containerd-11d8a6fdfed05a98d6336ca630e385a458e8f645d43a4cd6cfc7f910faf5dfd6.scope - libcontainer container 11d8a6fdfed05a98d6336ca630e385a458e8f645d43a4cd6cfc7f910faf5dfd6. 
Sep 13 01:57:45.194160 containerd[1514]: time="2025-09-13T01:57:45.194106284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-845b7845d-f9ljs,Uid:beece634-0084-4df9-841c-840e1e607f03,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"11d8a6fdfed05a98d6336ca630e385a458e8f645d43a4cd6cfc7f910faf5dfd6\"" Sep 13 01:57:45.605802 systemd-networkd[1436]: cali07b7c3edc3f: Gained IPv6LL Sep 13 01:57:46.948568 systemd-networkd[1436]: cali615b196e54f: Gained IPv6LL Sep 13 01:57:47.075236 containerd[1514]: time="2025-09-13T01:57:47.074642142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:57:47.077807 containerd[1514]: time="2025-09-13T01:57:47.077626388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 13 01:57:47.079071 containerd[1514]: time="2025-09-13T01:57:47.079008455Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:57:47.082413 containerd[1514]: time="2025-09-13T01:57:47.082336250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:57:47.084678 containerd[1514]: time="2025-09-13T01:57:47.084634068Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.114953577s" Sep 13 01:57:47.084903 containerd[1514]: 
time="2025-09-13T01:57:47.084778012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 01:57:47.086850 containerd[1514]: time="2025-09-13T01:57:47.086577664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 13 01:57:47.110627 containerd[1514]: time="2025-09-13T01:57:47.109491765Z" level=info msg="CreateContainer within sandbox \"c98c17f10e9a7f40ed2d6ef0e8647d6cc9f8ec7b7d65ae698634f41474901405\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 01:57:47.139601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2243804073.mount: Deactivated successfully. Sep 13 01:57:47.141562 containerd[1514]: time="2025-09-13T01:57:47.141415985Z" level=info msg="CreateContainer within sandbox \"c98c17f10e9a7f40ed2d6ef0e8647d6cc9f8ec7b7d65ae698634f41474901405\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"88a5aaa7e35645988c2cccb8d9cf61d034e2cf4cb8badc6345f1b45c4d47bf0e\"" Sep 13 01:57:47.143930 containerd[1514]: time="2025-09-13T01:57:47.142379921Z" level=info msg="StartContainer for \"88a5aaa7e35645988c2cccb8d9cf61d034e2cf4cb8badc6345f1b45c4d47bf0e\"" Sep 13 01:57:47.193427 systemd[1]: Started cri-containerd-88a5aaa7e35645988c2cccb8d9cf61d034e2cf4cb8badc6345f1b45c4d47bf0e.scope - libcontainer container 88a5aaa7e35645988c2cccb8d9cf61d034e2cf4cb8badc6345f1b45c4d47bf0e. 
Sep 13 01:57:47.259885 containerd[1514]: time="2025-09-13T01:57:47.259749945Z" level=info msg="StartContainer for \"88a5aaa7e35645988c2cccb8d9cf61d034e2cf4cb8badc6345f1b45c4d47bf0e\" returns successfully" Sep 13 01:57:47.995210 kubelet[2683]: I0913 01:57:47.993250 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5c587bb5c5-cqcsm" podStartSLOduration=29.208999834 podStartE2EDuration="34.993158385s" podCreationTimestamp="2025-09-13 01:57:13 +0000 UTC" firstStartedPulling="2025-09-13 01:57:41.301689337 +0000 UTC m=+47.987100378" lastFinishedPulling="2025-09-13 01:57:47.085847882 +0000 UTC m=+53.771258929" observedRunningTime="2025-09-13 01:57:47.989448001 +0000 UTC m=+54.674859050" watchObservedRunningTime="2025-09-13 01:57:47.993158385 +0000 UTC m=+54.678569432" Sep 13 01:57:51.243453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2664917068.mount: Deactivated successfully. Sep 13 01:57:52.216447 containerd[1514]: time="2025-09-13T01:57:52.216316733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:57:52.218357 containerd[1514]: time="2025-09-13T01:57:52.218107113Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 13 01:57:52.221095 containerd[1514]: time="2025-09-13T01:57:52.219507091Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:57:52.222683 containerd[1514]: time="2025-09-13T01:57:52.222639521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:57:52.245319 containerd[1514]: 
time="2025-09-13T01:57:52.245240118Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 5.158584025s" Sep 13 01:57:52.245319 containerd[1514]: time="2025-09-13T01:57:52.245307324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 13 01:57:52.248089 containerd[1514]: time="2025-09-13T01:57:52.248012697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 01:57:52.254688 containerd[1514]: time="2025-09-13T01:57:52.254494591Z" level=info msg="CreateContainer within sandbox \"88c8e705b1a96435266af27d56e7f642876e8b030ed3826c55c38dc76995fb52\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 13 01:57:52.280697 containerd[1514]: time="2025-09-13T01:57:52.280644330Z" level=info msg="CreateContainer within sandbox \"88c8e705b1a96435266af27d56e7f642876e8b030ed3826c55c38dc76995fb52\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"e6d95baddabea77ad3786f47e36febc70c1cdf2f28b915d30b918423f62a1c69\"" Sep 13 01:57:52.282745 containerd[1514]: time="2025-09-13T01:57:52.282479268Z" level=info msg="StartContainer for \"e6d95baddabea77ad3786f47e36febc70c1cdf2f28b915d30b918423f62a1c69\"" Sep 13 01:57:52.388415 systemd[1]: Started cri-containerd-e6d95baddabea77ad3786f47e36febc70c1cdf2f28b915d30b918423f62a1c69.scope - libcontainer container e6d95baddabea77ad3786f47e36febc70c1cdf2f28b915d30b918423f62a1c69. 
Sep 13 01:57:52.471915 containerd[1514]: time="2025-09-13T01:57:52.471364552Z" level=info msg="StartContainer for \"e6d95baddabea77ad3786f47e36febc70c1cdf2f28b915d30b918423f62a1c69\" returns successfully" Sep 13 01:57:53.022233 kubelet[2683]: I0913 01:57:53.021203 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-wjgs4" podStartSLOduration=29.079926523 podStartE2EDuration="40.021098119s" podCreationTimestamp="2025-09-13 01:57:13 +0000 UTC" firstStartedPulling="2025-09-13 01:57:41.305671482 +0000 UTC m=+47.991082517" lastFinishedPulling="2025-09-13 01:57:52.246843078 +0000 UTC m=+58.932254113" observedRunningTime="2025-09-13 01:57:53.020548937 +0000 UTC m=+59.705959992" watchObservedRunningTime="2025-09-13 01:57:53.021098119 +0000 UTC m=+59.706509158" Sep 13 01:57:53.735565 containerd[1514]: time="2025-09-13T01:57:53.733046944Z" level=info msg="StopPodSandbox for \"8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814\"" Sep 13 01:57:54.152151 systemd[1]: run-containerd-runc-k8s.io-e6d95baddabea77ad3786f47e36febc70c1cdf2f28b915d30b918423f62a1c69-runc.3k1iRM.mount: Deactivated successfully. Sep 13 01:57:54.474676 containerd[1514]: 2025-09-13 01:57:53.975 [WARNING][5156] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vx6h6.gb1.brightbox.com-k8s-calico--kube--controllers--5c587bb5c5--cqcsm-eth0", GenerateName:"calico-kube-controllers-5c587bb5c5-", Namespace:"calico-system", SelfLink:"", UID:"cc69bc79-4f97-4def-9cfb-3dbc4bb33d44", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 57, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c587bb5c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vx6h6.gb1.brightbox.com", ContainerID:"c98c17f10e9a7f40ed2d6ef0e8647d6cc9f8ec7b7d65ae698634f41474901405", Pod:"calico-kube-controllers-5c587bb5c5-cqcsm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.100.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliecebe858f31", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:57:54.474676 containerd[1514]: 2025-09-13 01:57:53.978 [INFO][5156] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" Sep 13 01:57:54.474676 containerd[1514]: 2025-09-13 01:57:53.978 [INFO][5156] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" iface="eth0" netns="" Sep 13 01:57:54.474676 containerd[1514]: 2025-09-13 01:57:53.978 [INFO][5156] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" Sep 13 01:57:54.474676 containerd[1514]: 2025-09-13 01:57:53.978 [INFO][5156] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" Sep 13 01:57:54.474676 containerd[1514]: 2025-09-13 01:57:54.364 [INFO][5167] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" HandleID="k8s-pod-network.8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--kube--controllers--5c587bb5c5--cqcsm-eth0" Sep 13 01:57:54.474676 containerd[1514]: 2025-09-13 01:57:54.377 [INFO][5167] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:57:54.474676 containerd[1514]: 2025-09-13 01:57:54.378 [INFO][5167] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:57:54.474676 containerd[1514]: 2025-09-13 01:57:54.443 [WARNING][5167] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" HandleID="k8s-pod-network.8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--kube--controllers--5c587bb5c5--cqcsm-eth0" Sep 13 01:57:54.474676 containerd[1514]: 2025-09-13 01:57:54.443 [INFO][5167] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" HandleID="k8s-pod-network.8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--kube--controllers--5c587bb5c5--cqcsm-eth0" Sep 13 01:57:54.474676 containerd[1514]: 2025-09-13 01:57:54.454 [INFO][5167] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:57:54.474676 containerd[1514]: 2025-09-13 01:57:54.465 [INFO][5156] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" Sep 13 01:57:54.474676 containerd[1514]: time="2025-09-13T01:57:54.474501091Z" level=info msg="TearDown network for sandbox \"8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814\" successfully" Sep 13 01:57:54.474676 containerd[1514]: time="2025-09-13T01:57:54.474538960Z" level=info msg="StopPodSandbox for \"8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814\" returns successfully" Sep 13 01:57:54.524739 containerd[1514]: time="2025-09-13T01:57:54.523605904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:57:54.527950 containerd[1514]: time="2025-09-13T01:57:54.527895561Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 13 01:57:54.531622 containerd[1514]: time="2025-09-13T01:57:54.529273380Z" level=info msg="ImageCreate event 
name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:57:54.542272 containerd[1514]: time="2025-09-13T01:57:54.540315921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:57:54.542272 containerd[1514]: time="2025-09-13T01:57:54.540806641Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.292747402s" Sep 13 01:57:54.542272 containerd[1514]: time="2025-09-13T01:57:54.540841079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 01:57:54.556877 containerd[1514]: time="2025-09-13T01:57:54.556826510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 13 01:57:54.612641 containerd[1514]: time="2025-09-13T01:57:54.612560670Z" level=info msg="RemovePodSandbox for \"8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814\"" Sep 13 01:57:54.618703 containerd[1514]: time="2025-09-13T01:57:54.618528869Z" level=info msg="CreateContainer within sandbox \"7ebe4886e4a95fcc914dea9ce84c3c3bf651874f82803b76ba98b0fc78bc6739\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 01:57:54.625227 containerd[1514]: time="2025-09-13T01:57:54.625123106Z" level=info msg="Forcibly stopping sandbox \"8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814\"" Sep 13 01:57:54.678878 containerd[1514]: time="2025-09-13T01:57:54.678777204Z" level=info 
msg="CreateContainer within sandbox \"7ebe4886e4a95fcc914dea9ce84c3c3bf651874f82803b76ba98b0fc78bc6739\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"193c23b8a1179ccc2612a10b35c88a67ad60155406c3fe60c01aad004a3cf958\"" Sep 13 01:57:54.684253 containerd[1514]: time="2025-09-13T01:57:54.683185020Z" level=info msg="StartContainer for \"193c23b8a1179ccc2612a10b35c88a67ad60155406c3fe60c01aad004a3cf958\"" Sep 13 01:57:54.840447 systemd[1]: Started cri-containerd-193c23b8a1179ccc2612a10b35c88a67ad60155406c3fe60c01aad004a3cf958.scope - libcontainer container 193c23b8a1179ccc2612a10b35c88a67ad60155406c3fe60c01aad004a3cf958. Sep 13 01:57:55.080831 containerd[1514]: time="2025-09-13T01:57:55.080604792Z" level=info msg="StartContainer for \"193c23b8a1179ccc2612a10b35c88a67ad60155406c3fe60c01aad004a3cf958\" returns successfully" Sep 13 01:57:55.095820 containerd[1514]: 2025-09-13 01:57:54.873 [WARNING][5207] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vx6h6.gb1.brightbox.com-k8s-calico--kube--controllers--5c587bb5c5--cqcsm-eth0", GenerateName:"calico-kube-controllers-5c587bb5c5-", Namespace:"calico-system", SelfLink:"", UID:"cc69bc79-4f97-4def-9cfb-3dbc4bb33d44", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 57, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c587bb5c5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vx6h6.gb1.brightbox.com", ContainerID:"c98c17f10e9a7f40ed2d6ef0e8647d6cc9f8ec7b7d65ae698634f41474901405", Pod:"calico-kube-controllers-5c587bb5c5-cqcsm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.100.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliecebe858f31", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:57:55.095820 containerd[1514]: 2025-09-13 01:57:54.873 [INFO][5207] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" Sep 13 01:57:55.095820 containerd[1514]: 2025-09-13 01:57:54.873 [INFO][5207] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" iface="eth0" netns="" Sep 13 01:57:55.095820 containerd[1514]: 2025-09-13 01:57:54.874 [INFO][5207] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" Sep 13 01:57:55.095820 containerd[1514]: 2025-09-13 01:57:54.874 [INFO][5207] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" Sep 13 01:57:55.095820 containerd[1514]: 2025-09-13 01:57:55.029 [INFO][5238] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" HandleID="k8s-pod-network.8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--kube--controllers--5c587bb5c5--cqcsm-eth0" Sep 13 01:57:55.095820 containerd[1514]: 2025-09-13 01:57:55.030 [INFO][5238] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:57:55.095820 containerd[1514]: 2025-09-13 01:57:55.030 [INFO][5238] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:57:55.095820 containerd[1514]: 2025-09-13 01:57:55.071 [WARNING][5238] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" HandleID="k8s-pod-network.8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--kube--controllers--5c587bb5c5--cqcsm-eth0" Sep 13 01:57:55.095820 containerd[1514]: 2025-09-13 01:57:55.071 [INFO][5238] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" HandleID="k8s-pod-network.8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--kube--controllers--5c587bb5c5--cqcsm-eth0" Sep 13 01:57:55.095820 containerd[1514]: 2025-09-13 01:57:55.074 [INFO][5238] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:57:55.095820 containerd[1514]: 2025-09-13 01:57:55.081 [INFO][5207] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814" Sep 13 01:57:55.095820 containerd[1514]: time="2025-09-13T01:57:55.095642363Z" level=info msg="TearDown network for sandbox \"8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814\" successfully" Sep 13 01:57:55.196250 containerd[1514]: time="2025-09-13T01:57:55.191339714Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 01:57:55.208733 systemd[1]: run-containerd-runc-k8s.io-e6d95baddabea77ad3786f47e36febc70c1cdf2f28b915d30b918423f62a1c69-runc.GsGWtw.mount: Deactivated successfully. 
Sep 13 01:57:55.267007 containerd[1514]: time="2025-09-13T01:57:55.266942057Z" level=info msg="RemovePodSandbox \"8ec191c4ccc8ae8782c38467fcc76750c0050c3f013f19c021f566a363fb9814\" returns successfully" Sep 13 01:57:55.282214 containerd[1514]: time="2025-09-13T01:57:55.281709423Z" level=info msg="StopPodSandbox for \"efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449\"" Sep 13 01:57:55.539318 containerd[1514]: 2025-09-13 01:57:55.424 [WARNING][5280] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-whisker--5dd9c997dd--pv5mw-eth0" Sep 13 01:57:55.539318 containerd[1514]: 2025-09-13 01:57:55.424 [INFO][5280] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" Sep 13 01:57:55.539318 containerd[1514]: 2025-09-13 01:57:55.424 [INFO][5280] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" iface="eth0" netns="" Sep 13 01:57:55.539318 containerd[1514]: 2025-09-13 01:57:55.424 [INFO][5280] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" Sep 13 01:57:55.539318 containerd[1514]: 2025-09-13 01:57:55.424 [INFO][5280] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" Sep 13 01:57:55.539318 containerd[1514]: 2025-09-13 01:57:55.507 [INFO][5288] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" HandleID="k8s-pod-network.efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" Workload="srv--vx6h6.gb1.brightbox.com-k8s-whisker--5dd9c997dd--pv5mw-eth0" Sep 13 01:57:55.539318 containerd[1514]: 2025-09-13 01:57:55.508 [INFO][5288] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:57:55.539318 containerd[1514]: 2025-09-13 01:57:55.508 [INFO][5288] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:57:55.539318 containerd[1514]: 2025-09-13 01:57:55.525 [WARNING][5288] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" HandleID="k8s-pod-network.efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" Workload="srv--vx6h6.gb1.brightbox.com-k8s-whisker--5dd9c997dd--pv5mw-eth0" Sep 13 01:57:55.539318 containerd[1514]: 2025-09-13 01:57:55.525 [INFO][5288] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" HandleID="k8s-pod-network.efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" Workload="srv--vx6h6.gb1.brightbox.com-k8s-whisker--5dd9c997dd--pv5mw-eth0" Sep 13 01:57:55.539318 containerd[1514]: 2025-09-13 01:57:55.530 [INFO][5288] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:57:55.539318 containerd[1514]: 2025-09-13 01:57:55.532 [INFO][5280] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" Sep 13 01:57:55.543530 containerd[1514]: time="2025-09-13T01:57:55.539335594Z" level=info msg="TearDown network for sandbox \"efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449\" successfully" Sep 13 01:57:55.543530 containerd[1514]: time="2025-09-13T01:57:55.539386070Z" level=info msg="StopPodSandbox for \"efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449\" returns successfully" Sep 13 01:57:55.543530 containerd[1514]: time="2025-09-13T01:57:55.540127476Z" level=info msg="RemovePodSandbox for \"efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449\"" Sep 13 01:57:55.543530 containerd[1514]: time="2025-09-13T01:57:55.540164487Z" level=info msg="Forcibly stopping sandbox \"efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449\"" Sep 13 01:57:55.699372 containerd[1514]: 2025-09-13 01:57:55.627 [WARNING][5306] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" WorkloadEndpoint="srv--vx6h6.gb1.brightbox.com-k8s-whisker--5dd9c997dd--pv5mw-eth0" Sep 13 01:57:55.699372 containerd[1514]: 2025-09-13 01:57:55.627 [INFO][5306] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" Sep 13 01:57:55.699372 containerd[1514]: 2025-09-13 01:57:55.627 [INFO][5306] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" iface="eth0" netns="" Sep 13 01:57:55.699372 containerd[1514]: 2025-09-13 01:57:55.627 [INFO][5306] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" Sep 13 01:57:55.699372 containerd[1514]: 2025-09-13 01:57:55.627 [INFO][5306] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" Sep 13 01:57:55.699372 containerd[1514]: 2025-09-13 01:57:55.675 [INFO][5313] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" HandleID="k8s-pod-network.efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" Workload="srv--vx6h6.gb1.brightbox.com-k8s-whisker--5dd9c997dd--pv5mw-eth0" Sep 13 01:57:55.699372 containerd[1514]: 2025-09-13 01:57:55.675 [INFO][5313] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:57:55.699372 containerd[1514]: 2025-09-13 01:57:55.676 [INFO][5313] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:57:55.699372 containerd[1514]: 2025-09-13 01:57:55.692 [WARNING][5313] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" HandleID="k8s-pod-network.efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" Workload="srv--vx6h6.gb1.brightbox.com-k8s-whisker--5dd9c997dd--pv5mw-eth0" Sep 13 01:57:55.699372 containerd[1514]: 2025-09-13 01:57:55.692 [INFO][5313] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" HandleID="k8s-pod-network.efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" Workload="srv--vx6h6.gb1.brightbox.com-k8s-whisker--5dd9c997dd--pv5mw-eth0" Sep 13 01:57:55.699372 containerd[1514]: 2025-09-13 01:57:55.694 [INFO][5313] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:57:55.699372 containerd[1514]: 2025-09-13 01:57:55.696 [INFO][5306] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449" Sep 13 01:57:55.700635 containerd[1514]: time="2025-09-13T01:57:55.699441543Z" level=info msg="TearDown network for sandbox \"efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449\" successfully" Sep 13 01:57:55.704213 containerd[1514]: time="2025-09-13T01:57:55.703620281Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 01:57:55.705335 containerd[1514]: time="2025-09-13T01:57:55.704012143Z" level=info msg="RemovePodSandbox \"efa508a29eee869ae946d1fac461e70245509b4cdc7f2c529e1f5243b213f449\" returns successfully" Sep 13 01:57:55.706926 containerd[1514]: time="2025-09-13T01:57:55.706877758Z" level=info msg="StopPodSandbox for \"d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226\"" Sep 13 01:57:55.869652 containerd[1514]: 2025-09-13 01:57:55.777 [WARNING][5328] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--jhdbm-eth0", GenerateName:"calico-apiserver-845b7845d-", Namespace:"calico-apiserver", SelfLink:"", UID:"aeca7b13-a3ef-47c3-b05a-b2c1fcfb4e19", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 57, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"845b7845d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vx6h6.gb1.brightbox.com", ContainerID:"b613ef11c9174d24460967e1c6ba22d5635a584f7e5560fa98e1c09ca6e058a9", Pod:"calico-apiserver-845b7845d-jhdbm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07b7c3edc3f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:57:55.869652 containerd[1514]: 2025-09-13 01:57:55.778 [INFO][5328] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" Sep 13 01:57:55.869652 containerd[1514]: 2025-09-13 01:57:55.778 [INFO][5328] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" iface="eth0" netns="" Sep 13 01:57:55.869652 containerd[1514]: 2025-09-13 01:57:55.778 [INFO][5328] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" Sep 13 01:57:55.869652 containerd[1514]: 2025-09-13 01:57:55.779 [INFO][5328] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" Sep 13 01:57:55.869652 containerd[1514]: 2025-09-13 01:57:55.850 [INFO][5335] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" HandleID="k8s-pod-network.d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--jhdbm-eth0" Sep 13 01:57:55.869652 containerd[1514]: 2025-09-13 01:57:55.850 [INFO][5335] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:57:55.869652 containerd[1514]: 2025-09-13 01:57:55.850 [INFO][5335] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:57:55.869652 containerd[1514]: 2025-09-13 01:57:55.859 [WARNING][5335] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" HandleID="k8s-pod-network.d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--jhdbm-eth0" Sep 13 01:57:55.869652 containerd[1514]: 2025-09-13 01:57:55.859 [INFO][5335] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" HandleID="k8s-pod-network.d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--jhdbm-eth0" Sep 13 01:57:55.869652 containerd[1514]: 2025-09-13 01:57:55.863 [INFO][5335] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:57:55.869652 containerd[1514]: 2025-09-13 01:57:55.865 [INFO][5328] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" Sep 13 01:57:55.872704 containerd[1514]: time="2025-09-13T01:57:55.869613712Z" level=info msg="TearDown network for sandbox \"d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226\" successfully" Sep 13 01:57:55.872704 containerd[1514]: time="2025-09-13T01:57:55.869675337Z" level=info msg="StopPodSandbox for \"d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226\" returns successfully" Sep 13 01:57:55.901693 containerd[1514]: time="2025-09-13T01:57:55.900404831Z" level=info msg="RemovePodSandbox for \"d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226\"" Sep 13 01:57:55.901693 containerd[1514]: time="2025-09-13T01:57:55.900478936Z" level=info msg="Forcibly stopping sandbox \"d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226\"" Sep 13 01:57:56.021140 containerd[1514]: 2025-09-13 01:57:55.966 [WARNING][5349] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--jhdbm-eth0", GenerateName:"calico-apiserver-845b7845d-", Namespace:"calico-apiserver", SelfLink:"", UID:"aeca7b13-a3ef-47c3-b05a-b2c1fcfb4e19", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 57, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"845b7845d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vx6h6.gb1.brightbox.com", ContainerID:"b613ef11c9174d24460967e1c6ba22d5635a584f7e5560fa98e1c09ca6e058a9", Pod:"calico-apiserver-845b7845d-jhdbm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali07b7c3edc3f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:57:56.021140 containerd[1514]: 2025-09-13 01:57:55.967 [INFO][5349] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" Sep 13 01:57:56.021140 containerd[1514]: 2025-09-13 01:57:55.967 [INFO][5349] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" iface="eth0" netns="" Sep 13 01:57:56.021140 containerd[1514]: 2025-09-13 01:57:55.967 [INFO][5349] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" Sep 13 01:57:56.021140 containerd[1514]: 2025-09-13 01:57:55.967 [INFO][5349] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" Sep 13 01:57:56.021140 containerd[1514]: 2025-09-13 01:57:56.004 [INFO][5356] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" HandleID="k8s-pod-network.d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--jhdbm-eth0" Sep 13 01:57:56.021140 containerd[1514]: 2025-09-13 01:57:56.005 [INFO][5356] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:57:56.021140 containerd[1514]: 2025-09-13 01:57:56.005 [INFO][5356] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:57:56.021140 containerd[1514]: 2025-09-13 01:57:56.013 [WARNING][5356] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" HandleID="k8s-pod-network.d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--jhdbm-eth0" Sep 13 01:57:56.021140 containerd[1514]: 2025-09-13 01:57:56.014 [INFO][5356] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" HandleID="k8s-pod-network.d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--jhdbm-eth0" Sep 13 01:57:56.021140 containerd[1514]: 2025-09-13 01:57:56.016 [INFO][5356] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:57:56.021140 containerd[1514]: 2025-09-13 01:57:56.018 [INFO][5349] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226" Sep 13 01:57:56.022726 containerd[1514]: time="2025-09-13T01:57:56.021296530Z" level=info msg="TearDown network for sandbox \"d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226\" successfully" Sep 13 01:57:56.029116 containerd[1514]: time="2025-09-13T01:57:56.029064534Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 01:57:56.029231 containerd[1514]: time="2025-09-13T01:57:56.029152425Z" level=info msg="RemovePodSandbox \"d516acdb5dd48a00da8282b729d96b6d069f5d6ca3fd600ac281f2cf62d63226\" returns successfully" Sep 13 01:57:56.029880 containerd[1514]: time="2025-09-13T01:57:56.029846574Z" level=info msg="StopPodSandbox for \"67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f\"" Sep 13 01:57:56.186828 containerd[1514]: 2025-09-13 01:57:56.118 [WARNING][5371] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--68x4n-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"559e1cf7-31cd-4748-a7af-9a9c681ae085", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 56, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vx6h6.gb1.brightbox.com", ContainerID:"736a96acefaef31604679d1651e773d6f8bbf426d851a2cba67c14dc9dbb1934", Pod:"coredns-7c65d6cfc9-68x4n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib083811551e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:57:56.186828 containerd[1514]: 2025-09-13 01:57:56.118 [INFO][5371] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" Sep 13 01:57:56.186828 containerd[1514]: 2025-09-13 01:57:56.118 [INFO][5371] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" iface="eth0" netns="" Sep 13 01:57:56.186828 containerd[1514]: 2025-09-13 01:57:56.118 [INFO][5371] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" Sep 13 01:57:56.186828 containerd[1514]: 2025-09-13 01:57:56.118 [INFO][5371] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" Sep 13 01:57:56.186828 containerd[1514]: 2025-09-13 01:57:56.165 [INFO][5379] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" HandleID="k8s-pod-network.67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" Workload="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--68x4n-eth0" Sep 13 01:57:56.186828 containerd[1514]: 2025-09-13 01:57:56.166 [INFO][5379] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 01:57:56.186828 containerd[1514]: 2025-09-13 01:57:56.166 [INFO][5379] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:57:56.186828 containerd[1514]: 2025-09-13 01:57:56.179 [WARNING][5379] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" HandleID="k8s-pod-network.67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" Workload="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--68x4n-eth0" Sep 13 01:57:56.186828 containerd[1514]: 2025-09-13 01:57:56.179 [INFO][5379] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" HandleID="k8s-pod-network.67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" Workload="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--68x4n-eth0" Sep 13 01:57:56.186828 containerd[1514]: 2025-09-13 01:57:56.181 [INFO][5379] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:57:56.186828 containerd[1514]: 2025-09-13 01:57:56.184 [INFO][5371] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" Sep 13 01:57:56.186828 containerd[1514]: time="2025-09-13T01:57:56.186292367Z" level=info msg="TearDown network for sandbox \"67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f\" successfully" Sep 13 01:57:56.186828 containerd[1514]: time="2025-09-13T01:57:56.186383795Z" level=info msg="StopPodSandbox for \"67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f\" returns successfully" Sep 13 01:57:56.194436 containerd[1514]: time="2025-09-13T01:57:56.194359292Z" level=info msg="RemovePodSandbox for \"67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f\"" Sep 13 01:57:56.194515 containerd[1514]: time="2025-09-13T01:57:56.194448109Z" level=info msg="Forcibly stopping sandbox \"67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f\"" Sep 13 01:57:56.379050 containerd[1514]: 2025-09-13 01:57:56.277 [WARNING][5393] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--68x4n-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"559e1cf7-31cd-4748-a7af-9a9c681ae085", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 56, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vx6h6.gb1.brightbox.com", ContainerID:"736a96acefaef31604679d1651e773d6f8bbf426d851a2cba67c14dc9dbb1934", Pod:"coredns-7c65d6cfc9-68x4n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib083811551e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:57:56.379050 containerd[1514]: 
2025-09-13 01:57:56.278 [INFO][5393] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" Sep 13 01:57:56.379050 containerd[1514]: 2025-09-13 01:57:56.278 [INFO][5393] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" iface="eth0" netns="" Sep 13 01:57:56.379050 containerd[1514]: 2025-09-13 01:57:56.278 [INFO][5393] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" Sep 13 01:57:56.379050 containerd[1514]: 2025-09-13 01:57:56.278 [INFO][5393] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" Sep 13 01:57:56.379050 containerd[1514]: 2025-09-13 01:57:56.326 [INFO][5401] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" HandleID="k8s-pod-network.67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" Workload="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--68x4n-eth0" Sep 13 01:57:56.379050 containerd[1514]: 2025-09-13 01:57:56.327 [INFO][5401] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:57:56.379050 containerd[1514]: 2025-09-13 01:57:56.327 [INFO][5401] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:57:56.379050 containerd[1514]: 2025-09-13 01:57:56.359 [WARNING][5401] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" HandleID="k8s-pod-network.67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" Workload="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--68x4n-eth0" Sep 13 01:57:56.379050 containerd[1514]: 2025-09-13 01:57:56.359 [INFO][5401] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" HandleID="k8s-pod-network.67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" Workload="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--68x4n-eth0" Sep 13 01:57:56.379050 containerd[1514]: 2025-09-13 01:57:56.367 [INFO][5401] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:57:56.379050 containerd[1514]: 2025-09-13 01:57:56.373 [INFO][5393] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f" Sep 13 01:57:56.380165 containerd[1514]: time="2025-09-13T01:57:56.379123278Z" level=info msg="TearDown network for sandbox \"67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f\" successfully" Sep 13 01:57:56.398378 containerd[1514]: time="2025-09-13T01:57:56.398316659Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 01:57:56.398588 containerd[1514]: time="2025-09-13T01:57:56.398421596Z" level=info msg="RemovePodSandbox \"67672e42747e9814f44026bfc67603fe9fa47d210fc6a2bfd024330ce5c6967f\" returns successfully" Sep 13 01:57:56.459780 containerd[1514]: time="2025-09-13T01:57:56.459512780Z" level=info msg="StopPodSandbox for \"cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00\"" Sep 13 01:57:56.865239 containerd[1514]: 2025-09-13 01:57:56.688 [WARNING][5415] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--pmxvt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4497aabf-19b8-4111-a199-50d6361d00e3", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 56, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vx6h6.gb1.brightbox.com", ContainerID:"f9d693cc2b998867307db5c1d1ea6937a69cde0c611029fd18129070deb8cc84", Pod:"coredns-7c65d6cfc9-pmxvt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10c7e668088", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:57:56.865239 containerd[1514]: 2025-09-13 01:57:56.689 [INFO][5415] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" Sep 13 01:57:56.865239 containerd[1514]: 2025-09-13 01:57:56.689 [INFO][5415] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" iface="eth0" netns="" Sep 13 01:57:56.865239 containerd[1514]: 2025-09-13 01:57:56.689 [INFO][5415] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" Sep 13 01:57:56.865239 containerd[1514]: 2025-09-13 01:57:56.689 [INFO][5415] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" Sep 13 01:57:56.865239 containerd[1514]: 2025-09-13 01:57:56.808 [INFO][5422] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" HandleID="k8s-pod-network.cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" Workload="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--pmxvt-eth0" Sep 13 01:57:56.865239 containerd[1514]: 2025-09-13 01:57:56.811 [INFO][5422] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 01:57:56.865239 containerd[1514]: 2025-09-13 01:57:56.811 [INFO][5422] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:57:56.865239 containerd[1514]: 2025-09-13 01:57:56.849 [WARNING][5422] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" HandleID="k8s-pod-network.cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" Workload="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--pmxvt-eth0" Sep 13 01:57:56.865239 containerd[1514]: 2025-09-13 01:57:56.849 [INFO][5422] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" HandleID="k8s-pod-network.cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" Workload="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--pmxvt-eth0" Sep 13 01:57:56.865239 containerd[1514]: 2025-09-13 01:57:56.859 [INFO][5422] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:57:56.865239 containerd[1514]: 2025-09-13 01:57:56.861 [INFO][5415] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" Sep 13 01:57:56.865239 containerd[1514]: time="2025-09-13T01:57:56.864942238Z" level=info msg="TearDown network for sandbox \"cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00\" successfully" Sep 13 01:57:56.865239 containerd[1514]: time="2025-09-13T01:57:56.865025100Z" level=info msg="StopPodSandbox for \"cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00\" returns successfully" Sep 13 01:57:56.909467 containerd[1514]: time="2025-09-13T01:57:56.909413653Z" level=info msg="RemovePodSandbox for \"cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00\"" Sep 13 01:57:56.910450 containerd[1514]: time="2025-09-13T01:57:56.910420462Z" level=info msg="Forcibly stopping sandbox \"cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00\"" Sep 13 01:57:57.280356 containerd[1514]: 2025-09-13 01:57:57.113 [WARNING][5436] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--pmxvt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"4497aabf-19b8-4111-a199-50d6361d00e3", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 56, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vx6h6.gb1.brightbox.com", ContainerID:"f9d693cc2b998867307db5c1d1ea6937a69cde0c611029fd18129070deb8cc84", Pod:"coredns-7c65d6cfc9-pmxvt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10c7e668088", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:57:57.280356 containerd[1514]: 
2025-09-13 01:57:57.115 [INFO][5436] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" Sep 13 01:57:57.280356 containerd[1514]: 2025-09-13 01:57:57.115 [INFO][5436] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" iface="eth0" netns="" Sep 13 01:57:57.280356 containerd[1514]: 2025-09-13 01:57:57.115 [INFO][5436] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" Sep 13 01:57:57.280356 containerd[1514]: 2025-09-13 01:57:57.115 [INFO][5436] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" Sep 13 01:57:57.280356 containerd[1514]: 2025-09-13 01:57:57.253 [INFO][5460] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" HandleID="k8s-pod-network.cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" Workload="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--pmxvt-eth0" Sep 13 01:57:57.280356 containerd[1514]: 2025-09-13 01:57:57.254 [INFO][5460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:57:57.280356 containerd[1514]: 2025-09-13 01:57:57.254 [INFO][5460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:57:57.280356 containerd[1514]: 2025-09-13 01:57:57.267 [WARNING][5460] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" HandleID="k8s-pod-network.cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" Workload="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--pmxvt-eth0" Sep 13 01:57:57.280356 containerd[1514]: 2025-09-13 01:57:57.267 [INFO][5460] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" HandleID="k8s-pod-network.cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" Workload="srv--vx6h6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--pmxvt-eth0" Sep 13 01:57:57.280356 containerd[1514]: 2025-09-13 01:57:57.269 [INFO][5460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:57:57.280356 containerd[1514]: 2025-09-13 01:57:57.275 [INFO][5436] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00" Sep 13 01:57:57.284383 containerd[1514]: time="2025-09-13T01:57:57.281002086Z" level=info msg="TearDown network for sandbox \"cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00\" successfully" Sep 13 01:57:57.306256 containerd[1514]: time="2025-09-13T01:57:57.306058309Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 01:57:57.307440 containerd[1514]: time="2025-09-13T01:57:57.306611901Z" level=info msg="RemovePodSandbox \"cedbc18f1ee78a52d145e74551ba520e79822c0b2d1a4863026a918bef145e00\" returns successfully" Sep 13 01:57:57.313849 containerd[1514]: time="2025-09-13T01:57:57.313719063Z" level=info msg="StopPodSandbox for \"56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95\"" Sep 13 01:57:57.584362 containerd[1514]: 2025-09-13 01:57:57.451 [WARNING][5482] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vx6h6.gb1.brightbox.com-k8s-goldmane--7988f88666--wjgs4-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"0c8637e6-080f-42e2-bed2-e192627db354", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 57, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vx6h6.gb1.brightbox.com", ContainerID:"88c8e705b1a96435266af27d56e7f642876e8b030ed3826c55c38dc76995fb52", Pod:"goldmane-7988f88666-wjgs4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.100.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali854fbd33f88", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:57:57.584362 containerd[1514]: 2025-09-13 01:57:57.452 [INFO][5482] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" Sep 13 01:57:57.584362 containerd[1514]: 2025-09-13 01:57:57.452 [INFO][5482] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" iface="eth0" netns="" Sep 13 01:57:57.584362 containerd[1514]: 2025-09-13 01:57:57.452 [INFO][5482] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" Sep 13 01:57:57.584362 containerd[1514]: 2025-09-13 01:57:57.452 [INFO][5482] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" Sep 13 01:57:57.584362 containerd[1514]: 2025-09-13 01:57:57.536 [INFO][5489] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" HandleID="k8s-pod-network.56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" Workload="srv--vx6h6.gb1.brightbox.com-k8s-goldmane--7988f88666--wjgs4-eth0" Sep 13 01:57:57.584362 containerd[1514]: 2025-09-13 01:57:57.536 [INFO][5489] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:57:57.584362 containerd[1514]: 2025-09-13 01:57:57.538 [INFO][5489] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:57:57.584362 containerd[1514]: 2025-09-13 01:57:57.566 [WARNING][5489] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" HandleID="k8s-pod-network.56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" Workload="srv--vx6h6.gb1.brightbox.com-k8s-goldmane--7988f88666--wjgs4-eth0" Sep 13 01:57:57.584362 containerd[1514]: 2025-09-13 01:57:57.566 [INFO][5489] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" HandleID="k8s-pod-network.56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" Workload="srv--vx6h6.gb1.brightbox.com-k8s-goldmane--7988f88666--wjgs4-eth0" Sep 13 01:57:57.584362 containerd[1514]: 2025-09-13 01:57:57.571 [INFO][5489] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:57:57.584362 containerd[1514]: 2025-09-13 01:57:57.577 [INFO][5482] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" Sep 13 01:57:57.585272 containerd[1514]: time="2025-09-13T01:57:57.584416373Z" level=info msg="TearDown network for sandbox \"56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95\" successfully" Sep 13 01:57:57.585272 containerd[1514]: time="2025-09-13T01:57:57.584449979Z" level=info msg="StopPodSandbox for \"56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95\" returns successfully" Sep 13 01:57:57.589815 containerd[1514]: time="2025-09-13T01:57:57.589738996Z" level=info msg="RemovePodSandbox for \"56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95\"" Sep 13 01:57:57.590210 containerd[1514]: time="2025-09-13T01:57:57.590079661Z" level=info msg="Forcibly stopping sandbox \"56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95\"" Sep 13 01:57:57.947590 containerd[1514]: 2025-09-13 01:57:57.760 [WARNING][5503] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vx6h6.gb1.brightbox.com-k8s-goldmane--7988f88666--wjgs4-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"0c8637e6-080f-42e2-bed2-e192627db354", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 57, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vx6h6.gb1.brightbox.com", ContainerID:"88c8e705b1a96435266af27d56e7f642876e8b030ed3826c55c38dc76995fb52", Pod:"goldmane-7988f88666-wjgs4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.100.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali854fbd33f88", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:57:57.947590 containerd[1514]: 2025-09-13 01:57:57.761 [INFO][5503] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" Sep 13 01:57:57.947590 containerd[1514]: 2025-09-13 01:57:57.761 [INFO][5503] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" iface="eth0" netns="" Sep 13 01:57:57.947590 containerd[1514]: 2025-09-13 01:57:57.761 [INFO][5503] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" Sep 13 01:57:57.947590 containerd[1514]: 2025-09-13 01:57:57.761 [INFO][5503] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" Sep 13 01:57:57.947590 containerd[1514]: 2025-09-13 01:57:57.914 [INFO][5511] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" HandleID="k8s-pod-network.56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" Workload="srv--vx6h6.gb1.brightbox.com-k8s-goldmane--7988f88666--wjgs4-eth0" Sep 13 01:57:57.947590 containerd[1514]: 2025-09-13 01:57:57.914 [INFO][5511] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:57:57.947590 containerd[1514]: 2025-09-13 01:57:57.914 [INFO][5511] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:57:57.947590 containerd[1514]: 2025-09-13 01:57:57.932 [WARNING][5511] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" HandleID="k8s-pod-network.56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" Workload="srv--vx6h6.gb1.brightbox.com-k8s-goldmane--7988f88666--wjgs4-eth0" Sep 13 01:57:57.947590 containerd[1514]: 2025-09-13 01:57:57.932 [INFO][5511] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" HandleID="k8s-pod-network.56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" Workload="srv--vx6h6.gb1.brightbox.com-k8s-goldmane--7988f88666--wjgs4-eth0" Sep 13 01:57:57.947590 containerd[1514]: 2025-09-13 01:57:57.937 [INFO][5511] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:57:57.947590 containerd[1514]: 2025-09-13 01:57:57.943 [INFO][5503] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95" Sep 13 01:57:57.947590 containerd[1514]: time="2025-09-13T01:57:57.946838200Z" level=info msg="TearDown network for sandbox \"56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95\" successfully" Sep 13 01:57:57.977733 containerd[1514]: time="2025-09-13T01:57:57.977659255Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 01:57:57.978414 containerd[1514]: time="2025-09-13T01:57:57.978378985Z" level=info msg="RemovePodSandbox \"56cb876f9e7a68822f2da158ce981861bc526040ff693741475ff0dff0a5ac95\" returns successfully" Sep 13 01:57:57.986581 containerd[1514]: time="2025-09-13T01:57:57.986375400Z" level=info msg="StopPodSandbox for \"ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2\"" Sep 13 01:57:58.441608 containerd[1514]: 2025-09-13 01:57:58.240 [WARNING][5525] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vx6h6.gb1.brightbox.com-k8s-csi--node--driver--z8zdz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"96f62ef8-50ce-46be-8601-56da0c0ae5a1", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 57, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vx6h6.gb1.brightbox.com", ContainerID:"7ebe4886e4a95fcc914dea9ce84c3c3bf651874f82803b76ba98b0fc78bc6739", Pod:"csi-node-driver-z8zdz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.100.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6a015f84ed4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:57:58.441608 containerd[1514]: 2025-09-13 01:57:58.249 [INFO][5525] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" Sep 13 01:57:58.441608 containerd[1514]: 2025-09-13 01:57:58.249 [INFO][5525] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" iface="eth0" netns="" Sep 13 01:57:58.441608 containerd[1514]: 2025-09-13 01:57:58.249 [INFO][5525] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" Sep 13 01:57:58.441608 containerd[1514]: 2025-09-13 01:57:58.250 [INFO][5525] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" Sep 13 01:57:58.441608 containerd[1514]: 2025-09-13 01:57:58.408 [INFO][5533] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" HandleID="k8s-pod-network.ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" Workload="srv--vx6h6.gb1.brightbox.com-k8s-csi--node--driver--z8zdz-eth0" Sep 13 01:57:58.441608 containerd[1514]: 2025-09-13 01:57:58.409 [INFO][5533] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:57:58.441608 containerd[1514]: 2025-09-13 01:57:58.410 [INFO][5533] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:57:58.441608 containerd[1514]: 2025-09-13 01:57:58.423 [WARNING][5533] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" HandleID="k8s-pod-network.ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" Workload="srv--vx6h6.gb1.brightbox.com-k8s-csi--node--driver--z8zdz-eth0" Sep 13 01:57:58.441608 containerd[1514]: 2025-09-13 01:57:58.423 [INFO][5533] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" HandleID="k8s-pod-network.ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" Workload="srv--vx6h6.gb1.brightbox.com-k8s-csi--node--driver--z8zdz-eth0" Sep 13 01:57:58.441608 containerd[1514]: 2025-09-13 01:57:58.430 [INFO][5533] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:57:58.441608 containerd[1514]: 2025-09-13 01:57:58.435 [INFO][5525] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" Sep 13 01:57:58.441608 containerd[1514]: time="2025-09-13T01:57:58.440719127Z" level=info msg="TearDown network for sandbox \"ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2\" successfully" Sep 13 01:57:58.441608 containerd[1514]: time="2025-09-13T01:57:58.440757626Z" level=info msg="StopPodSandbox for \"ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2\" returns successfully" Sep 13 01:57:58.444444 containerd[1514]: time="2025-09-13T01:57:58.442176855Z" level=info msg="RemovePodSandbox for \"ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2\"" Sep 13 01:57:58.444444 containerd[1514]: time="2025-09-13T01:57:58.442255618Z" level=info msg="Forcibly stopping sandbox \"ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2\"" Sep 13 01:57:58.625243 systemd[1]: run-containerd-runc-k8s.io-e6d95baddabea77ad3786f47e36febc70c1cdf2f28b915d30b918423f62a1c69-runc.aRORPa.mount: Deactivated successfully. 
Sep 13 01:57:58.828216 containerd[1514]: 2025-09-13 01:57:58.655 [WARNING][5547] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vx6h6.gb1.brightbox.com-k8s-csi--node--driver--z8zdz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"96f62ef8-50ce-46be-8601-56da0c0ae5a1", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 57, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vx6h6.gb1.brightbox.com", ContainerID:"7ebe4886e4a95fcc914dea9ce84c3c3bf651874f82803b76ba98b0fc78bc6739", Pod:"csi-node-driver-z8zdz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.100.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6a015f84ed4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:57:58.828216 containerd[1514]: 2025-09-13 01:57:58.655 [INFO][5547] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" Sep 13 01:57:58.828216 containerd[1514]: 2025-09-13 01:57:58.655 [INFO][5547] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" iface="eth0" netns="" Sep 13 01:57:58.828216 containerd[1514]: 2025-09-13 01:57:58.655 [INFO][5547] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" Sep 13 01:57:58.828216 containerd[1514]: 2025-09-13 01:57:58.656 [INFO][5547] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" Sep 13 01:57:58.828216 containerd[1514]: 2025-09-13 01:57:58.760 [INFO][5571] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" HandleID="k8s-pod-network.ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" Workload="srv--vx6h6.gb1.brightbox.com-k8s-csi--node--driver--z8zdz-eth0" Sep 13 01:57:58.828216 containerd[1514]: 2025-09-13 01:57:58.763 [INFO][5571] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:57:58.828216 containerd[1514]: 2025-09-13 01:57:58.766 [INFO][5571] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:57:58.828216 containerd[1514]: 2025-09-13 01:57:58.797 [WARNING][5571] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" HandleID="k8s-pod-network.ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" Workload="srv--vx6h6.gb1.brightbox.com-k8s-csi--node--driver--z8zdz-eth0" Sep 13 01:57:58.828216 containerd[1514]: 2025-09-13 01:57:58.797 [INFO][5571] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" HandleID="k8s-pod-network.ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" Workload="srv--vx6h6.gb1.brightbox.com-k8s-csi--node--driver--z8zdz-eth0" Sep 13 01:57:58.828216 containerd[1514]: 2025-09-13 01:57:58.801 [INFO][5571] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:57:58.828216 containerd[1514]: 2025-09-13 01:57:58.808 [INFO][5547] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2" Sep 13 01:57:58.828216 containerd[1514]: time="2025-09-13T01:57:58.825928864Z" level=info msg="TearDown network for sandbox \"ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2\" successfully" Sep 13 01:57:58.850300 containerd[1514]: time="2025-09-13T01:57:58.849783188Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 13 01:57:58.850300 containerd[1514]: time="2025-09-13T01:57:58.849875350Z" level=info msg="RemovePodSandbox \"ff35a152c288c0ca9bebbac894d02da3d5f0c5f716c0599cd14cc6a3a51c22c2\" returns successfully" Sep 13 01:57:58.904727 containerd[1514]: time="2025-09-13T01:57:58.904665658Z" level=info msg="StopPodSandbox for \"16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa\"" Sep 13 01:57:59.256006 containerd[1514]: 2025-09-13 01:57:59.098 [WARNING][5586] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--f9ljs-eth0", GenerateName:"calico-apiserver-845b7845d-", Namespace:"calico-apiserver", SelfLink:"", UID:"beece634-0084-4df9-841c-840e1e607f03", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 57, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"845b7845d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vx6h6.gb1.brightbox.com", ContainerID:"11d8a6fdfed05a98d6336ca630e385a458e8f645d43a4cd6cfc7f910faf5dfd6", Pod:"calico-apiserver-845b7845d-f9ljs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali615b196e54f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:57:59.256006 containerd[1514]: 2025-09-13 01:57:59.103 [INFO][5586] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" Sep 13 01:57:59.256006 containerd[1514]: 2025-09-13 01:57:59.103 [INFO][5586] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" iface="eth0" netns="" Sep 13 01:57:59.256006 containerd[1514]: 2025-09-13 01:57:59.104 [INFO][5586] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" Sep 13 01:57:59.256006 containerd[1514]: 2025-09-13 01:57:59.104 [INFO][5586] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" Sep 13 01:57:59.256006 containerd[1514]: 2025-09-13 01:57:59.211 [INFO][5599] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" HandleID="k8s-pod-network.16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--f9ljs-eth0" Sep 13 01:57:59.256006 containerd[1514]: 2025-09-13 01:57:59.214 [INFO][5599] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:57:59.256006 containerd[1514]: 2025-09-13 01:57:59.214 [INFO][5599] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:57:59.256006 containerd[1514]: 2025-09-13 01:57:59.234 [WARNING][5599] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" HandleID="k8s-pod-network.16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--f9ljs-eth0" Sep 13 01:57:59.256006 containerd[1514]: 2025-09-13 01:57:59.234 [INFO][5599] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" HandleID="k8s-pod-network.16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--f9ljs-eth0" Sep 13 01:57:59.256006 containerd[1514]: 2025-09-13 01:57:59.240 [INFO][5599] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:57:59.256006 containerd[1514]: 2025-09-13 01:57:59.248 [INFO][5586] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" Sep 13 01:57:59.257524 containerd[1514]: time="2025-09-13T01:57:59.255433821Z" level=info msg="TearDown network for sandbox \"16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa\" successfully" Sep 13 01:57:59.257524 containerd[1514]: time="2025-09-13T01:57:59.257053042Z" level=info msg="StopPodSandbox for \"16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa\" returns successfully" Sep 13 01:57:59.279345 containerd[1514]: time="2025-09-13T01:57:59.278952209Z" level=info msg="RemovePodSandbox for \"16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa\"" Sep 13 01:57:59.279345 containerd[1514]: time="2025-09-13T01:57:59.279010218Z" level=info msg="Forcibly stopping sandbox \"16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa\"" Sep 13 01:57:59.571206 containerd[1514]: 2025-09-13 01:57:59.447 [WARNING][5614] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--f9ljs-eth0", GenerateName:"calico-apiserver-845b7845d-", Namespace:"calico-apiserver", SelfLink:"", UID:"beece634-0084-4df9-841c-840e1e607f03", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 1, 57, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"845b7845d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vx6h6.gb1.brightbox.com", ContainerID:"11d8a6fdfed05a98d6336ca630e385a458e8f645d43a4cd6cfc7f910faf5dfd6", Pod:"calico-apiserver-845b7845d-f9ljs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali615b196e54f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 01:57:59.571206 containerd[1514]: 2025-09-13 01:57:59.447 [INFO][5614] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" Sep 13 01:57:59.571206 containerd[1514]: 2025-09-13 01:57:59.447 [INFO][5614] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" iface="eth0" netns="" Sep 13 01:57:59.571206 containerd[1514]: 2025-09-13 01:57:59.447 [INFO][5614] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" Sep 13 01:57:59.571206 containerd[1514]: 2025-09-13 01:57:59.447 [INFO][5614] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" Sep 13 01:57:59.571206 containerd[1514]: 2025-09-13 01:57:59.533 [INFO][5621] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" HandleID="k8s-pod-network.16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--f9ljs-eth0" Sep 13 01:57:59.571206 containerd[1514]: 2025-09-13 01:57:59.534 [INFO][5621] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 01:57:59.571206 containerd[1514]: 2025-09-13 01:57:59.534 [INFO][5621] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 01:57:59.571206 containerd[1514]: 2025-09-13 01:57:59.554 [WARNING][5621] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" HandleID="k8s-pod-network.16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--f9ljs-eth0" Sep 13 01:57:59.571206 containerd[1514]: 2025-09-13 01:57:59.554 [INFO][5621] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" HandleID="k8s-pod-network.16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" Workload="srv--vx6h6.gb1.brightbox.com-k8s-calico--apiserver--845b7845d--f9ljs-eth0" Sep 13 01:57:59.571206 containerd[1514]: 2025-09-13 01:57:59.560 [INFO][5621] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 01:57:59.571206 containerd[1514]: 2025-09-13 01:57:59.566 [INFO][5614] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa" Sep 13 01:57:59.577427 containerd[1514]: time="2025-09-13T01:57:59.571713300Z" level=info msg="TearDown network for sandbox \"16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa\" successfully" Sep 13 01:57:59.578876 containerd[1514]: time="2025-09-13T01:57:59.578436146Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 01:57:59.578876 containerd[1514]: time="2025-09-13T01:57:59.578521836Z" level=info msg="RemovePodSandbox \"16435b3be34356353e37d3b32cf75c3cf2b1595df7c32b2634e8371b9ba22baa\" returns successfully" Sep 13 01:57:59.755806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount227711523.mount: Deactivated successfully. 
Sep 13 01:57:59.783924 containerd[1514]: time="2025-09-13T01:57:59.783842290Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:57:59.787663 containerd[1514]: time="2025-09-13T01:57:59.787447594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 13 01:57:59.788206 containerd[1514]: time="2025-09-13T01:57:59.788007551Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:57:59.799393 containerd[1514]: time="2025-09-13T01:57:59.799334925Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 5.242450377s" Sep 13 01:57:59.799722 containerd[1514]: time="2025-09-13T01:57:59.799673827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 13 01:57:59.800841 containerd[1514]: time="2025-09-13T01:57:59.799621117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:57:59.822995 containerd[1514]: time="2025-09-13T01:57:59.822517629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 01:57:59.886398 containerd[1514]: time="2025-09-13T01:57:59.886347138Z" level=info msg="CreateContainer within sandbox 
\"2518bfe11287a4caed7cc9bf6608e982aef6087151f62867b2e5839f93658573\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 13 01:57:59.956868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3464936609.mount: Deactivated successfully. Sep 13 01:57:59.976580 containerd[1514]: time="2025-09-13T01:57:59.976523035Z" level=info msg="CreateContainer within sandbox \"2518bfe11287a4caed7cc9bf6608e982aef6087151f62867b2e5839f93658573\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"813439633f5f018e020761ec036b1aa3f2ffedccd2870fc5650624516c35a575\"" Sep 13 01:57:59.993576 containerd[1514]: time="2025-09-13T01:57:59.992969456Z" level=info msg="StartContainer for \"813439633f5f018e020761ec036b1aa3f2ffedccd2870fc5650624516c35a575\"" Sep 13 01:58:00.209490 systemd[1]: Started cri-containerd-813439633f5f018e020761ec036b1aa3f2ffedccd2870fc5650624516c35a575.scope - libcontainer container 813439633f5f018e020761ec036b1aa3f2ffedccd2870fc5650624516c35a575. 
Sep 13 01:58:00.557683 containerd[1514]: time="2025-09-13T01:58:00.557601665Z" level=info msg="StartContainer for \"813439633f5f018e020761ec036b1aa3f2ffedccd2870fc5650624516c35a575\" returns successfully" Sep 13 01:58:01.594304 kubelet[2683]: I0913 01:58:01.583905 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-55489fb99-r5qhw" podStartSLOduration=4.468504069 podStartE2EDuration="23.559232512s" podCreationTimestamp="2025-09-13 01:57:38 +0000 UTC" firstStartedPulling="2025-09-13 01:57:40.72634739 +0000 UTC m=+47.411758426" lastFinishedPulling="2025-09-13 01:57:59.817075808 +0000 UTC m=+66.502486869" observedRunningTime="2025-09-13 01:58:01.519558871 +0000 UTC m=+68.204969926" watchObservedRunningTime="2025-09-13 01:58:01.559232512 +0000 UTC m=+68.244643567" Sep 13 01:58:04.633557 containerd[1514]: time="2025-09-13T01:58:04.633324992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:58:04.639300 containerd[1514]: time="2025-09-13T01:58:04.638560307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 13 01:58:04.641274 containerd[1514]: time="2025-09-13T01:58:04.641229726Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:58:04.645472 containerd[1514]: time="2025-09-13T01:58:04.645393842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:58:04.646350 containerd[1514]: time="2025-09-13T01:58:04.646242317Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id 
\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 4.823662061s" Sep 13 01:58:04.646350 containerd[1514]: time="2025-09-13T01:58:04.646288914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 01:58:04.666796 containerd[1514]: time="2025-09-13T01:58:04.666613023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 01:58:04.997394 containerd[1514]: time="2025-09-13T01:58:04.996634450Z" level=info msg="CreateContainer within sandbox \"b613ef11c9174d24460967e1c6ba22d5635a584f7e5560fa98e1c09ca6e058a9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 01:58:05.071735 containerd[1514]: time="2025-09-13T01:58:05.068618286Z" level=info msg="CreateContainer within sandbox \"b613ef11c9174d24460967e1c6ba22d5635a584f7e5560fa98e1c09ca6e058a9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a984b036fbbbd87c24474ecbbcd1ea5bea29def91ebdcd7d4ab69795fac58a9e\"" Sep 13 01:58:05.071735 containerd[1514]: time="2025-09-13T01:58:05.070299869Z" level=info msg="StartContainer for \"a984b036fbbbd87c24474ecbbcd1ea5bea29def91ebdcd7d4ab69795fac58a9e\"" Sep 13 01:58:05.207736 containerd[1514]: time="2025-09-13T01:58:05.207647681Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:58:05.211551 containerd[1514]: time="2025-09-13T01:58:05.210488409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 13 01:58:05.219498 containerd[1514]: time="2025-09-13T01:58:05.219382966Z" level=info msg="Pulled 
image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 552.565345ms" Sep 13 01:58:05.222218 containerd[1514]: time="2025-09-13T01:58:05.219464005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 01:58:05.225327 containerd[1514]: time="2025-09-13T01:58:05.225293450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 13 01:58:05.237078 containerd[1514]: time="2025-09-13T01:58:05.237030015Z" level=info msg="CreateContainer within sandbox \"11d8a6fdfed05a98d6336ca630e385a458e8f645d43a4cd6cfc7f910faf5dfd6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 01:58:05.318108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2370116277.mount: Deactivated successfully. Sep 13 01:58:05.325147 containerd[1514]: time="2025-09-13T01:58:05.325088747Z" level=info msg="CreateContainer within sandbox \"11d8a6fdfed05a98d6336ca630e385a458e8f645d43a4cd6cfc7f910faf5dfd6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"050807aead13f492a56d6af479a661883c3b50d0ffed97acf5985f5aa833a00d\"" Sep 13 01:58:05.326385 containerd[1514]: time="2025-09-13T01:58:05.326347239Z" level=info msg="StartContainer for \"050807aead13f492a56d6af479a661883c3b50d0ffed97acf5985f5aa833a00d\"" Sep 13 01:58:05.361499 systemd[1]: Started cri-containerd-a984b036fbbbd87c24474ecbbcd1ea5bea29def91ebdcd7d4ab69795fac58a9e.scope - libcontainer container a984b036fbbbd87c24474ecbbcd1ea5bea29def91ebdcd7d4ab69795fac58a9e. 
Sep 13 01:58:05.586459 systemd[1]: Started cri-containerd-050807aead13f492a56d6af479a661883c3b50d0ffed97acf5985f5aa833a00d.scope - libcontainer container 050807aead13f492a56d6af479a661883c3b50d0ffed97acf5985f5aa833a00d. Sep 13 01:58:05.652036 containerd[1514]: time="2025-09-13T01:58:05.651943348Z" level=info msg="StartContainer for \"a984b036fbbbd87c24474ecbbcd1ea5bea29def91ebdcd7d4ab69795fac58a9e\" returns successfully" Sep 13 01:58:05.790814 containerd[1514]: time="2025-09-13T01:58:05.790766998Z" level=info msg="StartContainer for \"050807aead13f492a56d6af479a661883c3b50d0ffed97acf5985f5aa833a00d\" returns successfully" Sep 13 01:58:06.574858 kubelet[2683]: I0913 01:58:06.574752 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-845b7845d-f9ljs" podStartSLOduration=37.547503532 podStartE2EDuration="57.574710375s" podCreationTimestamp="2025-09-13 01:57:09 +0000 UTC" firstStartedPulling="2025-09-13 01:57:45.196816972 +0000 UTC m=+51.882228012" lastFinishedPulling="2025-09-13 01:58:05.224023812 +0000 UTC m=+71.909434855" observedRunningTime="2025-09-13 01:58:06.564858561 +0000 UTC m=+73.250269615" watchObservedRunningTime="2025-09-13 01:58:06.574710375 +0000 UTC m=+73.260121428" Sep 13 01:58:06.589724 kubelet[2683]: I0913 01:58:06.588983 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-845b7845d-jhdbm" podStartSLOduration=37.220783127 podStartE2EDuration="57.588962477s" podCreationTimestamp="2025-09-13 01:57:09 +0000 UTC" firstStartedPulling="2025-09-13 01:57:44.282739017 +0000 UTC m=+50.968150057" lastFinishedPulling="2025-09-13 01:58:04.650918357 +0000 UTC m=+71.336329407" observedRunningTime="2025-09-13 01:58:06.588521337 +0000 UTC m=+73.273932405" watchObservedRunningTime="2025-09-13 01:58:06.588962477 +0000 UTC m=+73.274373523" Sep 13 01:58:07.602800 kubelet[2683]: I0913 01:58:07.602333 2683 prober_manager.go:312] "Failed to trigger a 
manual run" probe="Readiness" Sep 13 01:58:07.705689 containerd[1514]: time="2025-09-13T01:58:07.704639616Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:58:07.707792 containerd[1514]: time="2025-09-13T01:58:07.707733401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 13 01:58:07.708733 containerd[1514]: time="2025-09-13T01:58:07.708680956Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:58:07.713863 containerd[1514]: time="2025-09-13T01:58:07.713816677Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:58:07.715142 containerd[1514]: time="2025-09-13T01:58:07.714637751Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.489141391s" Sep 13 01:58:07.715779 containerd[1514]: time="2025-09-13T01:58:07.715750253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 13 01:58:07.726979 containerd[1514]: time="2025-09-13T01:58:07.726755708Z" level=info msg="CreateContainer within sandbox \"7ebe4886e4a95fcc914dea9ce84c3c3bf651874f82803b76ba98b0fc78bc6739\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 13 01:58:07.765646 containerd[1514]: time="2025-09-13T01:58:07.765585918Z" level=info msg="CreateContainer within sandbox \"7ebe4886e4a95fcc914dea9ce84c3c3bf651874f82803b76ba98b0fc78bc6739\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7672944dca0d14a49f85f4ed3ab80719ee06d92521adf07c8294b0b3a1ac0c22\"" Sep 13 01:58:07.767438 containerd[1514]: time="2025-09-13T01:58:07.766678286Z" level=info msg="StartContainer for \"7672944dca0d14a49f85f4ed3ab80719ee06d92521adf07c8294b0b3a1ac0c22\"" Sep 13 01:58:07.845424 systemd[1]: Started cri-containerd-7672944dca0d14a49f85f4ed3ab80719ee06d92521adf07c8294b0b3a1ac0c22.scope - libcontainer container 7672944dca0d14a49f85f4ed3ab80719ee06d92521adf07c8294b0b3a1ac0c22. Sep 13 01:58:07.956855 containerd[1514]: time="2025-09-13T01:58:07.956563524Z" level=info msg="StartContainer for \"7672944dca0d14a49f85f4ed3ab80719ee06d92521adf07c8294b0b3a1ac0c22\" returns successfully" Sep 13 01:58:08.634216 kubelet[2683]: I0913 01:58:08.634141 2683 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 01:58:08.669085 kubelet[2683]: I0913 01:58:08.668994 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-z8zdz" podStartSLOduration=30.297567834 podStartE2EDuration="55.668960435s" podCreationTimestamp="2025-09-13 01:57:13 +0000 UTC" firstStartedPulling="2025-09-13 01:57:42.346158395 +0000 UTC m=+49.031569429" lastFinishedPulling="2025-09-13 01:58:07.71755099 +0000 UTC m=+74.402962030" observedRunningTime="2025-09-13 01:58:08.667867344 +0000 UTC m=+75.353278399" watchObservedRunningTime="2025-09-13 01:58:08.668960435 +0000 UTC m=+75.354371475" Sep 13 01:58:09.036477 kubelet[2683]: I0913 01:58:09.036304 2683 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock 
versions: 1.0.0 Sep 13 01:58:09.039478 kubelet[2683]: I0913 01:58:09.039445 2683 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 13 01:58:19.268405 kubelet[2683]: I0913 01:58:19.267933 2683 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 13 01:58:24.307742 systemd[1]: Started sshd@9-10.230.52.214:22-139.178.68.195:59414.service - OpenSSH per-connection server daemon (139.178.68.195:59414). Sep 13 01:58:25.386230 sshd[5858]: Accepted publickey for core from 139.178.68.195 port 59414 ssh2: RSA SHA256:dIJs8AGfYNpN1Jw559jntP6aURAguWX2tmPUUD2xz0k Sep 13 01:58:25.392744 sshd[5858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:58:25.417722 systemd-logind[1495]: New session 12 of user core. Sep 13 01:58:25.425558 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 13 01:58:26.990151 sshd[5858]: pam_unix(sshd:session): session closed for user core Sep 13 01:58:27.023242 systemd[1]: sshd@9-10.230.52.214:22-139.178.68.195:59414.service: Deactivated successfully. Sep 13 01:58:27.036567 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 01:58:27.043716 systemd-logind[1495]: Session 12 logged out. Waiting for processes to exit. Sep 13 01:58:27.047689 systemd-logind[1495]: Removed session 12. Sep 13 01:58:32.155734 systemd[1]: Started sshd@10-10.230.52.214:22-139.178.68.195:52604.service - OpenSSH per-connection server daemon (139.178.68.195:52604). Sep 13 01:58:33.189898 sshd[5918]: Accepted publickey for core from 139.178.68.195 port 52604 ssh2: RSA SHA256:dIJs8AGfYNpN1Jw559jntP6aURAguWX2tmPUUD2xz0k Sep 13 01:58:33.199281 sshd[5918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:58:33.223285 systemd-logind[1495]: New session 13 of user core. Sep 13 01:58:33.231469 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 13 01:58:34.387417 sshd[5918]: pam_unix(sshd:session): session closed for user core Sep 13 01:58:34.394866 systemd[1]: sshd@10-10.230.52.214:22-139.178.68.195:52604.service: Deactivated successfully. Sep 13 01:58:34.402160 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 01:58:34.404932 systemd-logind[1495]: Session 13 logged out. Waiting for processes to exit. Sep 13 01:58:34.406691 systemd-logind[1495]: Removed session 13. Sep 13 01:58:39.562375 systemd[1]: Started sshd@11-10.230.52.214:22-139.178.68.195:52620.service - OpenSSH per-connection server daemon (139.178.68.195:52620). Sep 13 01:58:40.507214 sshd[5935]: Accepted publickey for core from 139.178.68.195 port 52620 ssh2: RSA SHA256:dIJs8AGfYNpN1Jw559jntP6aURAguWX2tmPUUD2xz0k Sep 13 01:58:40.508615 sshd[5935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:58:40.522252 systemd-logind[1495]: New session 14 of user core. Sep 13 01:58:40.528548 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 13 01:58:41.359560 sshd[5935]: pam_unix(sshd:session): session closed for user core Sep 13 01:58:41.367802 systemd[1]: sshd@11-10.230.52.214:22-139.178.68.195:52620.service: Deactivated successfully. Sep 13 01:58:41.374741 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 01:58:41.379273 systemd-logind[1495]: Session 14 logged out. Waiting for processes to exit. Sep 13 01:58:41.382259 systemd-logind[1495]: Removed session 14. Sep 13 01:58:41.530617 systemd[1]: Started sshd@12-10.230.52.214:22-139.178.68.195:57834.service - OpenSSH per-connection server daemon (139.178.68.195:57834). Sep 13 01:58:42.443940 sshd[5949]: Accepted publickey for core from 139.178.68.195 port 57834 ssh2: RSA SHA256:dIJs8AGfYNpN1Jw559jntP6aURAguWX2tmPUUD2xz0k Sep 13 01:58:42.446478 sshd[5949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:58:42.455944 systemd-logind[1495]: New session 15 of user core. 
Sep 13 01:58:42.464436 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 13 01:58:43.298275 sshd[5949]: pam_unix(sshd:session): session closed for user core Sep 13 01:58:43.304061 systemd[1]: sshd@12-10.230.52.214:22-139.178.68.195:57834.service: Deactivated successfully. Sep 13 01:58:43.308837 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 01:58:43.313782 systemd-logind[1495]: Session 15 logged out. Waiting for processes to exit. Sep 13 01:58:43.316464 systemd-logind[1495]: Removed session 15. Sep 13 01:58:43.459622 systemd[1]: Started sshd@13-10.230.52.214:22-139.178.68.195:57848.service - OpenSSH per-connection server daemon (139.178.68.195:57848). Sep 13 01:58:44.406780 sshd[5960]: Accepted publickey for core from 139.178.68.195 port 57848 ssh2: RSA SHA256:dIJs8AGfYNpN1Jw559jntP6aURAguWX2tmPUUD2xz0k Sep 13 01:58:44.409385 sshd[5960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:58:44.422808 systemd-logind[1495]: New session 16 of user core. Sep 13 01:58:44.430503 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 13 01:58:45.281797 sshd[5960]: pam_unix(sshd:session): session closed for user core Sep 13 01:58:45.303082 systemd[1]: sshd@13-10.230.52.214:22-139.178.68.195:57848.service: Deactivated successfully. Sep 13 01:58:45.311008 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 01:58:45.323493 systemd-logind[1495]: Session 16 logged out. Waiting for processes to exit. Sep 13 01:58:45.328627 systemd-logind[1495]: Removed session 16. Sep 13 01:58:50.516623 systemd[1]: Started sshd@14-10.230.52.214:22-139.178.68.195:32778.service - OpenSSH per-connection server daemon (139.178.68.195:32778). 
Sep 13 01:58:51.543513 sshd[6020]: Accepted publickey for core from 139.178.68.195 port 32778 ssh2: RSA SHA256:dIJs8AGfYNpN1Jw559jntP6aURAguWX2tmPUUD2xz0k
Sep 13 01:58:51.549417 sshd[6020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 01:58:51.560704 systemd-logind[1495]: New session 17 of user core.
Sep 13 01:58:51.567633 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 13 01:58:52.825555 sshd[6020]: pam_unix(sshd:session): session closed for user core
Sep 13 01:58:52.833333 systemd[1]: sshd@14-10.230.52.214:22-139.178.68.195:32778.service: Deactivated successfully.
Sep 13 01:58:52.833541 systemd-logind[1495]: Session 17 logged out. Waiting for processes to exit.
Sep 13 01:58:52.838978 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 01:58:52.843379 systemd-logind[1495]: Removed session 17.
Sep 13 01:58:57.010975 systemd[1]: run-containerd-runc-k8s.io-88a5aaa7e35645988c2cccb8d9cf61d034e2cf4cb8badc6345f1b45c4d47bf0e-runc.zlNttY.mount: Deactivated successfully.
Sep 13 01:58:57.988582 systemd[1]: Started sshd@15-10.230.52.214:22-139.178.68.195:32786.service - OpenSSH per-connection server daemon (139.178.68.195:32786).
Sep 13 01:58:58.988266 sshd[6074]: Accepted publickey for core from 139.178.68.195 port 32786 ssh2: RSA SHA256:dIJs8AGfYNpN1Jw559jntP6aURAguWX2tmPUUD2xz0k
Sep 13 01:58:58.991807 sshd[6074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 01:58:59.008296 systemd-logind[1495]: New session 18 of user core.
Sep 13 01:58:59.012439 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 13 01:59:00.439332 sshd[6074]: pam_unix(sshd:session): session closed for user core
Sep 13 01:59:00.458476 systemd[1]: sshd@15-10.230.52.214:22-139.178.68.195:32786.service: Deactivated successfully.
Sep 13 01:59:00.466557 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 01:59:00.469870 systemd-logind[1495]: Session 18 logged out. Waiting for processes to exit.
Sep 13 01:59:00.472383 systemd-logind[1495]: Removed session 18.
Sep 13 01:59:05.604325 systemd[1]: Started sshd@16-10.230.52.214:22-139.178.68.195:48978.service - OpenSSH per-connection server daemon (139.178.68.195:48978).
Sep 13 01:59:06.538219 sshd[6117]: Accepted publickey for core from 139.178.68.195 port 48978 ssh2: RSA SHA256:dIJs8AGfYNpN1Jw559jntP6aURAguWX2tmPUUD2xz0k
Sep 13 01:59:06.541140 sshd[6117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 01:59:06.552470 systemd-logind[1495]: New session 19 of user core.
Sep 13 01:59:06.561947 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 13 01:59:07.574272 sshd[6117]: pam_unix(sshd:session): session closed for user core
Sep 13 01:59:07.581693 systemd[1]: sshd@16-10.230.52.214:22-139.178.68.195:48978.service: Deactivated successfully.
Sep 13 01:59:07.588924 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 01:59:07.591098 systemd-logind[1495]: Session 19 logged out. Waiting for processes to exit.
Sep 13 01:59:07.596519 systemd-logind[1495]: Removed session 19.
Sep 13 01:59:07.737773 systemd[1]: Started sshd@17-10.230.52.214:22-139.178.68.195:48992.service - OpenSSH per-connection server daemon (139.178.68.195:48992).
Sep 13 01:59:08.673732 sshd[6130]: Accepted publickey for core from 139.178.68.195 port 48992 ssh2: RSA SHA256:dIJs8AGfYNpN1Jw559jntP6aURAguWX2tmPUUD2xz0k
Sep 13 01:59:08.676662 sshd[6130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 01:59:08.686767 systemd-logind[1495]: New session 20 of user core.
Sep 13 01:59:08.689721 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 13 01:59:09.827948 sshd[6130]: pam_unix(sshd:session): session closed for user core
Sep 13 01:59:09.838061 systemd[1]: sshd@17-10.230.52.214:22-139.178.68.195:48992.service: Deactivated successfully.
Sep 13 01:59:09.840861 systemd[1]: session-20.scope: Deactivated successfully.
Sep 13 01:59:09.847058 systemd-logind[1495]: Session 20 logged out. Waiting for processes to exit.
Sep 13 01:59:09.848519 systemd-logind[1495]: Removed session 20.
Sep 13 01:59:09.986263 systemd[1]: Started sshd@18-10.230.52.214:22-139.178.68.195:49004.service - OpenSSH per-connection server daemon (139.178.68.195:49004).
Sep 13 01:59:10.954674 sshd[6140]: Accepted publickey for core from 139.178.68.195 port 49004 ssh2: RSA SHA256:dIJs8AGfYNpN1Jw559jntP6aURAguWX2tmPUUD2xz0k
Sep 13 01:59:10.958801 sshd[6140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 01:59:10.969368 systemd-logind[1495]: New session 21 of user core.
Sep 13 01:59:10.974435 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 13 01:59:15.599745 sshd[6140]: pam_unix(sshd:session): session closed for user core
Sep 13 01:59:15.722111 systemd[1]: sshd@18-10.230.52.214:22-139.178.68.195:49004.service: Deactivated successfully.
Sep 13 01:59:15.734794 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 01:59:15.743473 systemd-logind[1495]: Session 21 logged out. Waiting for processes to exit.
Sep 13 01:59:15.813507 systemd[1]: Started sshd@19-10.230.52.214:22-139.178.68.195:46832.service - OpenSSH per-connection server daemon (139.178.68.195:46832).
Sep 13 01:59:15.815362 systemd-logind[1495]: Removed session 21.
Sep 13 01:59:16.981270 sshd[6169]: Accepted publickey for core from 139.178.68.195 port 46832 ssh2: RSA SHA256:dIJs8AGfYNpN1Jw559jntP6aURAguWX2tmPUUD2xz0k
Sep 13 01:59:16.995675 sshd[6169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 01:59:17.061089 systemd-logind[1495]: New session 22 of user core.
Sep 13 01:59:17.065763 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 13 01:59:18.820497 sshd[6169]: pam_unix(sshd:session): session closed for user core
Sep 13 01:59:18.885879 systemd[1]: sshd@19-10.230.52.214:22-139.178.68.195:46832.service: Deactivated successfully.
Sep 13 01:59:18.893619 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 01:59:18.896611 systemd-logind[1495]: Session 22 logged out. Waiting for processes to exit.
Sep 13 01:59:18.899037 systemd-logind[1495]: Removed session 22.
Sep 13 01:59:19.000721 systemd[1]: Started sshd@20-10.230.52.214:22-139.178.68.195:46834.service - OpenSSH per-connection server daemon (139.178.68.195:46834).
Sep 13 01:59:20.034357 sshd[6192]: Accepted publickey for core from 139.178.68.195 port 46834 ssh2: RSA SHA256:dIJs8AGfYNpN1Jw559jntP6aURAguWX2tmPUUD2xz0k
Sep 13 01:59:20.040752 sshd[6192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 01:59:20.094268 systemd-logind[1495]: New session 23 of user core.
Sep 13 01:59:20.099457 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 13 01:59:22.068614 sshd[6192]: pam_unix(sshd:session): session closed for user core
Sep 13 01:59:22.089984 systemd[1]: sshd@20-10.230.52.214:22-139.178.68.195:46834.service: Deactivated successfully.
Sep 13 01:59:22.112047 systemd[1]: session-23.scope: Deactivated successfully.
Sep 13 01:59:22.114886 systemd-logind[1495]: Session 23 logged out. Waiting for processes to exit.
Sep 13 01:59:22.117721 systemd-logind[1495]: Removed session 23.
Sep 13 01:59:27.399575 systemd[1]: Started sshd@21-10.230.52.214:22-139.178.68.195:39012.service - OpenSSH per-connection server daemon (139.178.68.195:39012).
Sep 13 01:59:28.445259 sshd[6249]: Accepted publickey for core from 139.178.68.195 port 39012 ssh2: RSA SHA256:dIJs8AGfYNpN1Jw559jntP6aURAguWX2tmPUUD2xz0k
Sep 13 01:59:28.452862 sshd[6249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 01:59:28.467437 systemd-logind[1495]: New session 24 of user core.
Sep 13 01:59:28.474897 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 13 01:59:30.137799 sshd[6249]: pam_unix(sshd:session): session closed for user core
Sep 13 01:59:30.171036 systemd[1]: sshd@21-10.230.52.214:22-139.178.68.195:39012.service: Deactivated successfully.
Sep 13 01:59:30.181894 systemd[1]: session-24.scope: Deactivated successfully.
Sep 13 01:59:30.189015 systemd-logind[1495]: Session 24 logged out. Waiting for processes to exit.
Sep 13 01:59:30.194074 systemd-logind[1495]: Removed session 24.
Sep 13 01:59:35.357703 systemd[1]: Started sshd@22-10.230.52.214:22-139.178.68.195:50172.service - OpenSSH per-connection server daemon (139.178.68.195:50172).
Sep 13 01:59:36.352296 sshd[6293]: Accepted publickey for core from 139.178.68.195 port 50172 ssh2: RSA SHA256:dIJs8AGfYNpN1Jw559jntP6aURAguWX2tmPUUD2xz0k
Sep 13 01:59:36.354619 sshd[6293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 01:59:36.369603 systemd-logind[1495]: New session 25 of user core.
Sep 13 01:59:36.374474 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 13 01:59:37.588272 sshd[6293]: pam_unix(sshd:session): session closed for user core
Sep 13 01:59:37.597460 systemd[1]: sshd@22-10.230.52.214:22-139.178.68.195:50172.service: Deactivated successfully.
Sep 13 01:59:37.601094 systemd[1]: session-25.scope: Deactivated successfully.
Sep 13 01:59:37.614440 systemd-logind[1495]: Session 25 logged out. Waiting for processes to exit.
Sep 13 01:59:37.618609 systemd-logind[1495]: Removed session 25.
Sep 13 01:59:42.758228 systemd[1]: Started sshd@23-10.230.52.214:22-139.178.68.195:52212.service - OpenSSH per-connection server daemon (139.178.68.195:52212).
Sep 13 01:59:43.782552 sshd[6305]: Accepted publickey for core from 139.178.68.195 port 52212 ssh2: RSA SHA256:dIJs8AGfYNpN1Jw559jntP6aURAguWX2tmPUUD2xz0k
Sep 13 01:59:43.790397 sshd[6305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 01:59:43.803532 systemd-logind[1495]: New session 26 of user core.
Sep 13 01:59:43.807462 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 13 01:59:45.097660 sshd[6305]: pam_unix(sshd:session): session closed for user core
Sep 13 01:59:45.115073 systemd[1]: sshd@23-10.230.52.214:22-139.178.68.195:52212.service: Deactivated successfully.
Sep 13 01:59:45.123168 systemd[1]: session-26.scope: Deactivated successfully.
Sep 13 01:59:45.130391 systemd-logind[1495]: Session 26 logged out. Waiting for processes to exit.
Sep 13 01:59:45.133919 systemd-logind[1495]: Removed session 26.
Sep 13 01:59:50.321480 systemd[1]: Started sshd@24-10.230.52.214:22-139.178.68.195:36542.service - OpenSSH per-connection server daemon (139.178.68.195:36542).
Sep 13 01:59:51.361079 sshd[6359]: Accepted publickey for core from 139.178.68.195 port 36542 ssh2: RSA SHA256:dIJs8AGfYNpN1Jw559jntP6aURAguWX2tmPUUD2xz0k
Sep 13 01:59:51.365138 sshd[6359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 01:59:51.378943 systemd-logind[1495]: New session 27 of user core.
Sep 13 01:59:51.388443 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 13 01:59:52.677850 sshd[6359]: pam_unix(sshd:session): session closed for user core
Sep 13 01:59:52.685047 systemd[1]: sshd@24-10.230.52.214:22-139.178.68.195:36542.service: Deactivated successfully.
Sep 13 01:59:52.694204 systemd[1]: session-27.scope: Deactivated successfully.
Sep 13 01:59:52.695641 systemd-logind[1495]: Session 27 logged out. Waiting for processes to exit.
Sep 13 01:59:52.697520 systemd-logind[1495]: Removed session 27.