Aug 13 09:01:47.031626 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025
Aug 13 09:01:47.031662 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 09:01:47.031675 kernel: BIOS-provided physical RAM map:
Aug 13 09:01:47.031691 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Aug 13 09:01:47.031701 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Aug 13 09:01:47.031711 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 13 09:01:47.031722 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Aug 13 09:01:47.031733 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Aug 13 09:01:47.031743 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 09:01:47.031754 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Aug 13 09:01:47.031764 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 13 09:01:47.031775 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 13 09:01:47.031790 kernel: NX (Execute Disable) protection: active
Aug 13 09:01:47.031801 kernel: APIC: Static calls initialized
Aug 13 09:01:47.031813 kernel: SMBIOS 2.8 present.
Aug 13 09:01:47.031825 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Aug 13 09:01:47.031837 kernel: Hypervisor detected: KVM
Aug 13 09:01:47.031852 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 09:01:47.031864 kernel: kvm-clock: using sched offset of 4351288584 cycles
Aug 13 09:01:47.031876 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 09:01:47.031888 kernel: tsc: Detected 2499.998 MHz processor
Aug 13 09:01:47.031900 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 09:01:47.031912 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 09:01:47.031923 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Aug 13 09:01:47.031934 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 13 09:01:47.031946 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 09:01:47.031962 kernel: Using GB pages for direct mapping
Aug 13 09:01:47.031974 kernel: ACPI: Early table checksum verification disabled
Aug 13 09:01:47.031985 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Aug 13 09:01:47.031996 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 09:01:47.032008 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 09:01:47.032019 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 09:01:47.032031 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Aug 13 09:01:47.032042 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 09:01:47.032054 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 09:01:47.032855 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 09:01:47.032873 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 09:01:47.032885 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Aug 13 09:01:47.032897 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Aug 13 09:01:47.032909 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Aug 13 09:01:47.032930 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Aug 13 09:01:47.032943 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Aug 13 09:01:47.032959 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Aug 13 09:01:47.032972 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Aug 13 09:01:47.032984 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Aug 13 09:01:47.032996 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Aug 13 09:01:47.033008 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Aug 13 09:01:47.033020 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Aug 13 09:01:47.033032 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Aug 13 09:01:47.033044 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Aug 13 09:01:47.033060 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Aug 13 09:01:47.033087 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Aug 13 09:01:47.033100 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Aug 13 09:01:47.033112 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Aug 13 09:01:47.033124 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Aug 13 09:01:47.033136 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Aug 13 09:01:47.033148 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Aug 13 09:01:47.033160 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Aug 13 09:01:47.033172 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Aug 13 09:01:47.033190 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Aug 13 09:01:47.033202 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Aug 13 09:01:47.033215 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Aug 13 09:01:47.033227 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Aug 13 09:01:47.033239 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Aug 13 09:01:47.033251 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Aug 13 09:01:47.033264 kernel: Zone ranges:
Aug 13 09:01:47.033276 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 09:01:47.033289 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Aug 13 09:01:47.033305 kernel: Normal empty
Aug 13 09:01:47.033317 kernel: Movable zone start for each node
Aug 13 09:01:47.033329 kernel: Early memory node ranges
Aug 13 09:01:47.033341 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 13 09:01:47.033353 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Aug 13 09:01:47.033365 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Aug 13 09:01:47.033378 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 09:01:47.033390 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 13 09:01:47.033416 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Aug 13 09:01:47.033429 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 09:01:47.033446 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 09:01:47.033459 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 09:01:47.033471 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 09:01:47.033483 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 09:01:47.033495 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 09:01:47.033507 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 09:01:47.033519 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 09:01:47.033531 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 09:01:47.033543 kernel: TSC deadline timer available
Aug 13 09:01:47.033560 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Aug 13 09:01:47.033572 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 09:01:47.033585 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Aug 13 09:01:47.033597 kernel: Booting paravirtualized kernel on KVM
Aug 13 09:01:47.033609 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 09:01:47.033621 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Aug 13 09:01:47.033634 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144
Aug 13 09:01:47.033646 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152
Aug 13 09:01:47.033658 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Aug 13 09:01:47.033674 kernel: kvm-guest: PV spinlocks enabled
Aug 13 09:01:47.033687 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 09:01:47.033700 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 09:01:47.033714 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 09:01:47.033726 kernel: random: crng init done
Aug 13 09:01:47.033738 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 09:01:47.033750 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 13 09:01:47.033762 kernel: Fallback order for Node 0: 0
Aug 13 09:01:47.033779 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Aug 13 09:01:47.033791 kernel: Policy zone: DMA32
Aug 13 09:01:47.033803 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 09:01:47.033815 kernel: software IO TLB: area num 16.
Aug 13 09:01:47.033828 kernel: Memory: 1901532K/2096616K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 194824K reserved, 0K cma-reserved)
Aug 13 09:01:47.033840 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Aug 13 09:01:47.033852 kernel: Kernel/User page tables isolation: enabled
Aug 13 09:01:47.033864 kernel: ftrace: allocating 37968 entries in 149 pages
Aug 13 09:01:47.033876 kernel: ftrace: allocated 149 pages with 4 groups
Aug 13 09:01:47.033893 kernel: Dynamic Preempt: voluntary
Aug 13 09:01:47.033906 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 09:01:47.033919 kernel: rcu: RCU event tracing is enabled.
Aug 13 09:01:47.033931 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Aug 13 09:01:47.033944 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 09:01:47.033968 kernel: Rude variant of Tasks RCU enabled.
Aug 13 09:01:47.033985 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 09:01:47.033998 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 09:01:47.034011 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Aug 13 09:01:47.034023 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Aug 13 09:01:47.034036 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 09:01:47.034049 kernel: Console: colour VGA+ 80x25
Aug 13 09:01:47.034066 kernel: printk: console [tty0] enabled
Aug 13 09:01:47.035924 kernel: printk: console [ttyS0] enabled
Aug 13 09:01:47.035937 kernel: ACPI: Core revision 20230628
Aug 13 09:01:47.035950 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 09:01:47.035963 kernel: x2apic enabled
Aug 13 09:01:47.035984 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 09:01:47.035998 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Aug 13 09:01:47.036011 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Aug 13 09:01:47.036024 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 09:01:47.036037 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Aug 13 09:01:47.036049 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Aug 13 09:01:47.036062 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 09:01:47.036158 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 09:01:47.036172 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 09:01:47.036192 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Aug 13 09:01:47.036205 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 09:01:47.036218 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 09:01:47.036230 kernel: MDS: Mitigation: Clear CPU buffers
Aug 13 09:01:47.036243 kernel: MMIO Stale Data: Unknown: No mitigations
Aug 13 09:01:47.036256 kernel: SRBDS: Unknown: Dependent on hypervisor status
Aug 13 09:01:47.036269 kernel: ITS: Mitigation: Aligned branch/return thunks
Aug 13 09:01:47.036282 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 09:01:47.036295 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 09:01:47.036307 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 09:01:47.036320 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 09:01:47.036337 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Aug 13 09:01:47.036350 kernel: Freeing SMP alternatives memory: 32K
Aug 13 09:01:47.036363 kernel: pid_max: default: 32768 minimum: 301
Aug 13 09:01:47.036376 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 09:01:47.036389 kernel: landlock: Up and running.
Aug 13 09:01:47.036414 kernel: SELinux: Initializing.
Aug 13 09:01:47.036428 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 13 09:01:47.036440 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 13 09:01:47.036453 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Aug 13 09:01:47.036467 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Aug 13 09:01:47.036480 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Aug 13 09:01:47.036498 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Aug 13 09:01:47.036512 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Aug 13 09:01:47.036525 kernel: signal: max sigframe size: 1776
Aug 13 09:01:47.036537 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 09:01:47.036551 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 09:01:47.036564 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Aug 13 09:01:47.036576 kernel: smp: Bringing up secondary CPUs ...
Aug 13 09:01:47.036589 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 09:01:47.036602 kernel: .... node #0, CPUs: #1
Aug 13 09:01:47.036619 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Aug 13 09:01:47.036632 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 09:01:47.036645 kernel: smpboot: Max logical packages: 16
Aug 13 09:01:47.036658 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Aug 13 09:01:47.036671 kernel: devtmpfs: initialized
Aug 13 09:01:47.036683 kernel: x86/mm: Memory block size: 128MB
Aug 13 09:01:47.036696 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 09:01:47.036709 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Aug 13 09:01:47.036722 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 09:01:47.036739 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 09:01:47.036752 kernel: audit: initializing netlink subsys (disabled)
Aug 13 09:01:47.036765 kernel: audit: type=2000 audit(1755075705.152:1): state=initialized audit_enabled=0 res=1
Aug 13 09:01:47.036778 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 09:01:47.036791 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 09:01:47.036803 kernel: cpuidle: using governor menu
Aug 13 09:01:47.036816 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 09:01:47.036829 kernel: dca service started, version 1.12.1
Aug 13 09:01:47.036842 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Aug 13 09:01:47.036859 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Aug 13 09:01:47.036872 kernel: PCI: Using configuration type 1 for base access
Aug 13 09:01:47.036885 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 09:01:47.036898 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 09:01:47.036911 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 09:01:47.036924 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 09:01:47.036937 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 09:01:47.036950 kernel: ACPI: Added _OSI(Module Device)
Aug 13 09:01:47.036962 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 09:01:47.036979 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 09:01:47.036992 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 09:01:47.037005 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 13 09:01:47.037018 kernel: ACPI: Interpreter enabled
Aug 13 09:01:47.037031 kernel: ACPI: PM: (supports S0 S5)
Aug 13 09:01:47.037043 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 09:01:47.037056 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 09:01:47.037087 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 09:01:47.037102 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 09:01:47.037121 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 09:01:47.037380 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 09:01:47.037584 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 13 09:01:47.037758 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 13 09:01:47.037778 kernel: PCI host bridge to bus 0000:00
Aug 13 09:01:47.037967 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 09:01:47.039646 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 09:01:47.039833 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 09:01:47.039994 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Aug 13 09:01:47.040243 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 09:01:47.040410 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Aug 13 09:01:47.040567 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 09:01:47.040765 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Aug 13 09:01:47.040959 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Aug 13 09:01:47.042194 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Aug 13 09:01:47.042375 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Aug 13 09:01:47.042565 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Aug 13 09:01:47.042735 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 09:01:47.042918 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Aug 13 09:01:47.043119 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Aug 13 09:01:47.043315 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Aug 13 09:01:47.043502 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Aug 13 09:01:47.043686 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Aug 13 09:01:47.043857 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Aug 13 09:01:47.044037 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Aug 13 09:01:47.045308 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Aug 13 09:01:47.045528 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Aug 13 09:01:47.045707 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Aug 13 09:01:47.045890 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Aug 13 09:01:47.046062 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Aug 13 09:01:47.046285 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Aug 13 09:01:47.046479 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Aug 13 09:01:47.046673 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Aug 13 09:01:47.046844 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Aug 13 09:01:47.047021 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Aug 13 09:01:47.049242 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Aug 13 09:01:47.049444 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Aug 13 09:01:47.049620 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Aug 13 09:01:47.049803 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Aug 13 09:01:47.050001 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Aug 13 09:01:47.053212 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Aug 13 09:01:47.053402 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Aug 13 09:01:47.053578 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Aug 13 09:01:47.053755 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Aug 13 09:01:47.053924 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 09:01:47.054121 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Aug 13 09:01:47.054303 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Aug 13 09:01:47.054490 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Aug 13 09:01:47.054667 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Aug 13 09:01:47.054838 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Aug 13 09:01:47.055027 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Aug 13 09:01:47.055227 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Aug 13 09:01:47.055472 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Aug 13 09:01:47.055653 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Aug 13 09:01:47.055826 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Aug 13 09:01:47.056023 kernel: pci_bus 0000:02: extended config space not accessible
Aug 13 09:01:47.058211 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Aug 13 09:01:47.058422 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Aug 13 09:01:47.058612 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Aug 13 09:01:47.058789 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Aug 13 09:01:47.058975 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Aug 13 09:01:47.059228 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Aug 13 09:01:47.059417 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Aug 13 09:01:47.059587 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Aug 13 09:01:47.059754 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Aug 13 09:01:47.059950 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Aug 13 09:01:47.061001 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Aug 13 09:01:47.062009 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Aug 13 09:01:47.062209 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Aug 13 09:01:47.062378 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Aug 13 09:01:47.062563 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Aug 13 09:01:47.062735 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Aug 13 09:01:47.062905 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Aug 13 09:01:47.063120 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Aug 13 09:01:47.063295 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Aug 13 09:01:47.063483 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Aug 13 09:01:47.063659 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Aug 13 09:01:47.063832 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Aug 13 09:01:47.064004 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Aug 13 09:01:47.064208 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Aug 13 09:01:47.064377 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Aug 13 09:01:47.064567 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Aug 13 09:01:47.064737 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Aug 13 09:01:47.064904 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Aug 13 09:01:47.065134 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Aug 13 09:01:47.065157 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 09:01:47.065170 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 09:01:47.065184 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 09:01:47.065197 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 09:01:47.065218 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 09:01:47.065231 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 09:01:47.065244 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 09:01:47.065257 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 09:01:47.065270 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 09:01:47.065283 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 09:01:47.065296 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 09:01:47.065309 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 09:01:47.065322 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 09:01:47.065340 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 09:01:47.065353 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 09:01:47.065366 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 09:01:47.065379 kernel: iommu: Default domain type: Translated
Aug 13 09:01:47.065403 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 09:01:47.065418 kernel: PCI: Using ACPI for IRQ routing
Aug 13 09:01:47.065431 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 09:01:47.065444 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Aug 13 09:01:47.065456 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Aug 13 09:01:47.065632 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 09:01:47.065800 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 09:01:47.065965 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 09:01:47.065985 kernel: vgaarb: loaded
Aug 13 09:01:47.065998 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 09:01:47.066012 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 09:01:47.066025 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 09:01:47.066038 kernel: pnp: PnP ACPI init
Aug 13 09:01:47.066241 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 13 09:01:47.066270 kernel: pnp: PnP ACPI: found 5 devices
Aug 13 09:01:47.066284 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 09:01:47.066297 kernel: NET: Registered PF_INET protocol family
Aug 13 09:01:47.066310 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 09:01:47.066323 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Aug 13 09:01:47.066336 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 09:01:47.066349 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 09:01:47.066362 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Aug 13 09:01:47.066380 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Aug 13 09:01:47.066403 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 13 09:01:47.066418 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 13 09:01:47.066431 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 09:01:47.066444 kernel: NET: Registered PF_XDP protocol family
Aug 13 09:01:47.066612 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Aug 13 09:01:47.066783 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Aug 13 09:01:47.066951 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Aug 13 09:01:47.067145 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Aug 13 09:01:47.067315 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Aug 13 09:01:47.067499 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Aug 13 09:01:47.067668 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Aug 13 09:01:47.067835 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Aug 13 09:01:47.068034 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Aug 13 09:01:47.068249 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Aug 13 09:01:47.068455 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Aug 13 09:01:47.068625 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Aug 13 09:01:47.068790 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Aug 13 09:01:47.068956 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Aug 13 09:01:47.069141 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Aug 13 09:01:47.069311 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Aug 13 09:01:47.069517 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Aug 13 09:01:47.069721 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Aug 13 09:01:47.069901 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Aug 13 09:01:47.070085 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Aug 13 09:01:47.070260 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Aug 13 09:01:47.070446 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Aug 13 09:01:47.070619 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Aug 13 09:01:47.070790 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Aug 13 09:01:47.070962 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Aug 13 09:01:47.071161 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Aug 13 09:01:47.071337 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Aug 13 09:01:47.071524 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Aug 13 09:01:47.071697 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Aug 13 09:01:47.071869 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Aug 13 09:01:47.072049 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Aug 13 09:01:47.072259 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Aug 13 09:01:47.072446 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Aug 13 09:01:47.072620 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Aug 13 09:01:47.072792 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Aug 13 09:01:47.072963 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Aug 13 09:01:47.073152 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Aug 13 09:01:47.073325 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Aug 13 09:01:47.073510 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Aug 13 09:01:47.073683 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Aug 13 09:01:47.073864 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Aug 13 09:01:47.074035 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Aug 13 09:01:47.074238 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Aug 13 09:01:47.074436 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Aug 13 09:01:47.074614 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Aug 13 09:01:47.074819 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Aug 13 09:01:47.074991 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Aug 13 09:01:47.075226 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Aug 13 09:01:47.075409 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Aug 13 09:01:47.075581 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Aug 13 09:01:47.075743 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 09:01:47.075895 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 09:01:47.076047 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 09:01:47.076244 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Aug 13 09:01:47.076410 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 13 09:01:47.076564 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Aug 13 09:01:47.076743 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Aug 13 09:01:47.076912 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Aug 13 09:01:47.077149 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Aug 13 09:01:47.077324 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Aug 13 09:01:47.077518 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Aug 13 09:01:47.077678 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Aug 13 09:01:47.077841 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Aug 13 09:01:47.078019 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Aug 13 09:01:47.078213 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Aug 13 09:01:47.078373 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Aug 13 09:01:47.078560 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Aug 13 09:01:47.078734 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Aug 13 09:01:47.078896 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Aug 13 09:01:47.079111 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Aug 13 09:01:47.079286 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Aug 13 09:01:47.079461 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Aug 13 09:01:47.079629 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Aug 13 09:01:47.079787 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Aug 13 09:01:47.079954 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Aug 13 09:01:47.080455 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Aug 13 09:01:47.080623 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Aug 13 09:01:47.080783 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Aug 13 09:01:47.080951 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Aug 13 09:01:47.081179 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Aug 13 09:01:47.081349 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Aug 13 09:01:47.081371 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 09:01:47.081386 kernel: PCI: CLS 0 bytes, default 64
Aug 13 09:01:47.081413 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Aug 13 09:01:47.081427 kernel: software IO TLB: mapped [mem 
0x0000000079800000-0x000000007d800000] (64MB) Aug 13 09:01:47.081449 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 13 09:01:47.081463 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Aug 13 09:01:47.081477 kernel: Initialise system trusted keyrings Aug 13 09:01:47.081490 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Aug 13 09:01:47.081508 kernel: Key type asymmetric registered Aug 13 09:01:47.081522 kernel: Asymmetric key parser 'x509' registered Aug 13 09:01:47.081535 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Aug 13 09:01:47.081549 kernel: io scheduler mq-deadline registered Aug 13 09:01:47.081562 kernel: io scheduler kyber registered Aug 13 09:01:47.081576 kernel: io scheduler bfq registered Aug 13 09:01:47.081750 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Aug 13 09:01:47.081924 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Aug 13 09:01:47.082139 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 09:01:47.082325 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Aug 13 09:01:47.082509 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Aug 13 09:01:47.082680 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 09:01:47.082850 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Aug 13 09:01:47.083018 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Aug 13 09:01:47.083330 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 09:01:47.083632 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Aug 13 09:01:47.083807 kernel: pcieport 0000:00:02.3: AER: enabled 
with IRQ 27 Aug 13 09:01:47.084115 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 09:01:47.084290 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Aug 13 09:01:47.084476 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Aug 13 09:01:47.084657 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 09:01:47.084870 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Aug 13 09:01:47.085048 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Aug 13 09:01:47.085295 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 09:01:47.085484 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Aug 13 09:01:47.085654 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Aug 13 09:01:47.085823 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 09:01:47.086003 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Aug 13 09:01:47.086204 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Aug 13 09:01:47.086375 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Aug 13 09:01:47.086407 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 09:01:47.086422 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Aug 13 09:01:47.086437 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Aug 13 09:01:47.086459 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 09:01:47.086473 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 09:01:47.086487 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 
0x60,0x64 irq 1,12 Aug 13 09:01:47.086501 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 09:01:47.086514 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 09:01:47.086529 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 13 09:01:47.086700 kernel: rtc_cmos 00:03: RTC can wake from S4 Aug 13 09:01:47.086865 kernel: rtc_cmos 00:03: registered as rtc0 Aug 13 09:01:47.087057 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T09:01:46 UTC (1755075706) Aug 13 09:01:47.087244 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Aug 13 09:01:47.087265 kernel: intel_pstate: CPU model not supported Aug 13 09:01:47.087279 kernel: NET: Registered PF_INET6 protocol family Aug 13 09:01:47.087292 kernel: Segment Routing with IPv6 Aug 13 09:01:47.087306 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 09:01:47.087319 kernel: NET: Registered PF_PACKET protocol family Aug 13 09:01:47.087333 kernel: Key type dns_resolver registered Aug 13 09:01:47.087346 kernel: IPI shorthand broadcast: enabled Aug 13 09:01:47.087368 kernel: sched_clock: Marking stable (1162004396, 235653516)->(1623618560, -225960648) Aug 13 09:01:47.087382 kernel: registered taskstats version 1 Aug 13 09:01:47.087407 kernel: Loading compiled-in X.509 certificates Aug 13 09:01:47.087421 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041' Aug 13 09:01:47.087435 kernel: Key type .fscrypt registered Aug 13 09:01:47.087454 kernel: Key type fscrypt-provisioning registered Aug 13 09:01:47.087468 kernel: ima: No TPM chip found, activating TPM-bypass! 
Aug 13 09:01:47.087482 kernel: ima: Allocated hash algorithm: sha1 Aug 13 09:01:47.087495 kernel: ima: No architecture policies found Aug 13 09:01:47.087514 kernel: clk: Disabling unused clocks Aug 13 09:01:47.087528 kernel: Freeing unused kernel image (initmem) memory: 42876K Aug 13 09:01:47.087541 kernel: Write protecting the kernel read-only data: 36864k Aug 13 09:01:47.087555 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Aug 13 09:01:47.087569 kernel: Run /init as init process Aug 13 09:01:47.087582 kernel: with arguments: Aug 13 09:01:47.087595 kernel: /init Aug 13 09:01:47.087609 kernel: with environment: Aug 13 09:01:47.087622 kernel: HOME=/ Aug 13 09:01:47.087640 kernel: TERM=linux Aug 13 09:01:47.087653 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 09:01:47.087670 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 09:01:47.087687 systemd[1]: Detected virtualization kvm. Aug 13 09:01:47.087702 systemd[1]: Detected architecture x86-64. Aug 13 09:01:47.087716 systemd[1]: Running in initrd. Aug 13 09:01:47.087730 systemd[1]: No hostname configured, using default hostname. Aug 13 09:01:47.087744 systemd[1]: Hostname set to . Aug 13 09:01:47.087764 systemd[1]: Initializing machine ID from VM UUID. Aug 13 09:01:47.087778 systemd[1]: Queued start job for default target initrd.target. Aug 13 09:01:47.087793 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 09:01:47.087807 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Aug 13 09:01:47.087823 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 13 09:01:47.087838 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 09:01:47.087852 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 09:01:47.087872 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 09:01:47.087889 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 09:01:47.087904 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 09:01:47.087919 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 09:01:47.087933 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 09:01:47.087948 systemd[1]: Reached target paths.target - Path Units. Aug 13 09:01:47.087962 systemd[1]: Reached target slices.target - Slice Units. Aug 13 09:01:47.087982 systemd[1]: Reached target swap.target - Swaps. Aug 13 09:01:47.087996 systemd[1]: Reached target timers.target - Timer Units. Aug 13 09:01:47.088011 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 09:01:47.088025 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 09:01:47.088040 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 09:01:47.088054 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 13 09:01:47.088110 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 09:01:47.088129 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 09:01:47.088144 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Aug 13 09:01:47.088165 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 09:01:47.088181 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 09:01:47.088195 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 09:01:47.088210 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 09:01:47.088225 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 09:01:47.088239 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 09:01:47.088254 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 09:01:47.088268 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 09:01:47.088287 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 09:01:47.088357 systemd-journald[200]: Collecting audit messages is disabled. Aug 13 09:01:47.088410 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 09:01:47.088426 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 09:01:47.088448 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 09:01:47.088463 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 09:01:47.088478 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 09:01:47.088493 systemd-journald[200]: Journal started Aug 13 09:01:47.088524 systemd-journald[200]: Runtime Journal (/run/log/journal/8b019718f13c4fb0932d3af1d02f367f) is 4.7M, max 38.0M, 33.2M free. Aug 13 09:01:47.025548 systemd-modules-load[201]: Inserted module 'overlay' Aug 13 09:01:47.149744 kernel: Bridge firewalling registered Aug 13 09:01:47.149776 systemd[1]: Started systemd-journald.service - Journal Service. 
Aug 13 09:01:47.090625 systemd-modules-load[201]: Inserted module 'br_netfilter'
Aug 13 09:01:47.150797 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 09:01:47.152062 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 09:01:47.162457 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 09:01:47.164245 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 09:01:47.168632 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 09:01:47.181373 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 09:01:47.195539 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 09:01:47.202619 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 09:01:47.204939 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 09:01:47.207830 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 09:01:47.216937 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 09:01:47.222321 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 09:01:47.231113 dracut-cmdline[232]: dracut-dracut-053
Aug 13 09:01:47.234143 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 09:01:47.277060 systemd-resolved[240]: Positive Trust Anchors:
Aug 13 09:01:47.277098 systemd-resolved[240]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 09:01:47.277146 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 09:01:47.282684 systemd-resolved[240]: Defaulting to hostname 'linux'.
Aug 13 09:01:47.284988 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 09:01:47.287270 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 09:01:47.354138 kernel: SCSI subsystem initialized
Aug 13 09:01:47.366116 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 09:01:47.379144 kernel: iscsi: registered transport (tcp)
Aug 13 09:01:47.406335 kernel: iscsi: registered transport (qla4xxx)
Aug 13 09:01:47.406433 kernel: QLogic iSCSI HBA Driver
Aug 13 09:01:47.461951 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 09:01:47.473478 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 09:01:47.506113 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 09:01:47.506217 kernel: device-mapper: uevent: version 1.0.3
Aug 13 09:01:47.506238 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 09:01:47.558165 kernel: raid6: sse2x4 gen() 13958 MB/s
Aug 13 09:01:47.574117 kernel: raid6: sse2x2 gen() 9644 MB/s
Aug 13 09:01:47.593864 kernel: raid6: sse2x1 gen() 10110 MB/s
Aug 13 09:01:47.594012 kernel: raid6: using algorithm sse2x4 gen() 13958 MB/s
Aug 13 09:01:47.612789 kernel: raid6: .... xor() 7557 MB/s, rmw enabled
Aug 13 09:01:47.612878 kernel: raid6: using ssse3x2 recovery algorithm
Aug 13 09:01:47.640103 kernel: xor: automatically using best checksumming function avx
Aug 13 09:01:47.836144 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 09:01:47.851399 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 09:01:47.858306 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 09:01:47.887604 systemd-udevd[421]: Using default interface naming scheme 'v255'.
Aug 13 09:01:47.894623 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 09:01:47.904641 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 09:01:47.924968 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation
Aug 13 09:01:47.963607 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 09:01:47.969309 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 09:01:48.099245 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 09:01:48.110250 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 09:01:48.133772 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 09:01:48.139696 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 09:01:48.140494 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 09:01:48.143348 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 09:01:48.153232 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 09:01:48.174457 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 09:01:48.231304 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Aug 13 09:01:48.235087 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 09:01:48.250674 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Aug 13 09:01:48.264098 kernel: libata version 3.00 loaded.
Aug 13 09:01:48.274123 kernel: ahci 0000:00:1f.2: version 3.0
Aug 13 09:01:48.274430 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Aug 13 09:01:48.277553 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Aug 13 09:01:48.277781 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Aug 13 09:01:48.282142 kernel: scsi host0: ahci
Aug 13 09:01:48.282413 kernel: scsi host1: ahci
Aug 13 09:01:48.284090 kernel: scsi host2: ahci
Aug 13 09:01:48.286601 kernel: scsi host3: ahci
Aug 13 09:01:48.286831 kernel: scsi host4: ahci
Aug 13 09:01:48.296874 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 09:01:48.296934 kernel: GPT:17805311 != 125829119
Aug 13 09:01:48.296968 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 09:01:48.296986 kernel: GPT:17805311 != 125829119
Aug 13 09:01:48.297002 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 09:01:48.297020 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 09:01:48.298001 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 09:01:48.304864 kernel: AVX version of gcm_enc/dec engaged.
Aug 13 09:01:48.304906 kernel: AES CTR mode by8 optimization enabled
Aug 13 09:01:48.298211 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 09:01:48.308169 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 09:01:48.349445 kernel: ACPI: bus type USB registered
Aug 13 09:01:48.349491 kernel: usbcore: registered new interface driver usbfs
Aug 13 09:01:48.349512 kernel: usbcore: registered new interface driver hub
Aug 13 09:01:48.349531 kernel: usbcore: registered new device driver usb
Aug 13 09:01:48.349549 kernel: scsi host5: ahci
Aug 13 09:01:48.349796 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 35
Aug 13 09:01:48.349818 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 35
Aug 13 09:01:48.349836 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 35
Aug 13 09:01:48.349854 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 35
Aug 13 09:01:48.349880 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 35
Aug 13 09:01:48.349899 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 35
Aug 13 09:01:48.308906 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 09:01:48.309182 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 09:01:48.309982 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 09:01:48.317657 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 09:01:48.389143 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (466)
Aug 13 09:01:48.401097 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (465)
Aug 13 09:01:48.414744 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 13 09:01:48.470929 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 09:01:48.478786 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 13 09:01:48.485936 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 09:01:48.491787 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 13 09:01:48.492651 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 13 09:01:48.502303 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 09:01:48.507244 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 09:01:48.514107 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 09:01:48.514155 disk-uuid[557]: Primary Header is updated.
Aug 13 09:01:48.514155 disk-uuid[557]: Secondary Entries is updated.
Aug 13 09:01:48.514155 disk-uuid[557]: Secondary Header is updated.
Aug 13 09:01:48.524101 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 09:01:48.535180 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 09:01:48.539278 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 09:01:48.625131 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Aug 13 09:01:48.634852 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Aug 13 09:01:48.634910 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Aug 13 09:01:48.637091 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Aug 13 09:01:48.637136 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Aug 13 09:01:48.639387 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Aug 13 09:01:48.661111 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Aug 13 09:01:48.667100 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Aug 13 09:01:48.672122 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Aug 13 09:01:48.677116 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Aug 13 09:01:48.689093 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Aug 13 09:01:48.689427 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Aug 13 09:01:48.689649 kernel: hub 1-0:1.0: USB hub found
Aug 13 09:01:48.693091 kernel: hub 1-0:1.0: 4 ports detected
Aug 13 09:01:48.693326 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Aug 13 09:01:48.695613 kernel: hub 2-0:1.0: USB hub found
Aug 13 09:01:48.698119 kernel: hub 2-0:1.0: 4 ports detected
Aug 13 09:01:48.933210 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Aug 13 09:01:49.075111 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 13 09:01:49.081168 kernel: usbcore: registered new interface driver usbhid
Aug 13 09:01:49.081223 kernel: usbhid: USB HID core driver
Aug 13 09:01:49.088544 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Aug 13 09:01:49.088588 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0
Aug 13 09:01:49.536121 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 09:01:49.538788 disk-uuid[558]: The operation has completed successfully.
Aug 13 09:01:49.588204 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 09:01:49.588385 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 09:01:49.612369 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 09:01:49.619101 sh[587]: Success
Aug 13 09:01:49.636885 kernel: device-mapper: verity: sha256 using implementation "sha256-avx"
Aug 13 09:01:49.706320 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 09:01:49.709741 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 09:01:49.711403 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 09:01:49.734105 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad
Aug 13 09:01:49.734171 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 09:01:49.734203 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 09:01:49.735643 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 09:01:49.738865 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 09:01:49.747496 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 09:01:49.748966 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 09:01:49.756256 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 09:01:49.759983 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 09:01:49.775112 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 09:01:49.778821 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 09:01:49.778857 kernel: BTRFS info (device vda6): using free space tree
Aug 13 09:01:49.784100 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 09:01:49.797162 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 09:01:49.799824 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 09:01:49.808195 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 09:01:49.817303 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 09:01:49.892334 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 09:01:49.908261 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 09:01:49.933389 systemd-networkd[769]: lo: Link UP
Aug 13 09:01:49.933402 systemd-networkd[769]: lo: Gained carrier
Aug 13 09:01:49.939767 systemd-networkd[769]: Enumeration completed
Aug 13 09:01:49.942395 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 09:01:49.942401 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 09:01:49.944323 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 09:01:49.947592 systemd-networkd[769]: eth0: Link UP
Aug 13 09:01:49.947599 systemd-networkd[769]: eth0: Gained carrier
Aug 13 09:01:49.947611 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 09:01:49.948859 systemd[1]: Reached target network.target - Network.
Aug 13 09:01:49.967164 systemd-networkd[769]: eth0: DHCPv4 address 10.230.18.154/30, gateway 10.230.18.153 acquired from 10.230.18.153
Aug 13 09:01:49.977808 ignition[687]: Ignition 2.19.0
Aug 13 09:01:49.977833 ignition[687]: Stage: fetch-offline
Aug 13 09:01:49.977913 ignition[687]: no configs at "/usr/lib/ignition/base.d"
Aug 13 09:01:49.980049 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 09:01:49.977937 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Aug 13 09:01:49.978132 ignition[687]: parsed url from cmdline: ""
Aug 13 09:01:49.978145 ignition[687]: no config URL provided
Aug 13 09:01:49.978155 ignition[687]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 09:01:49.978171 ignition[687]: no config at "/usr/lib/ignition/user.ign"
Aug 13 09:01:49.978180 ignition[687]: failed to fetch config: resource requires networking
Aug 13 09:01:49.978471 ignition[687]: Ignition finished successfully
Aug 13 09:01:49.999498 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 13 09:01:50.019201 ignition[778]: Ignition 2.19.0
Aug 13 09:01:50.019223 ignition[778]: Stage: fetch
Aug 13 09:01:50.019561 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Aug 13 09:01:50.019583 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Aug 13 09:01:50.019722 ignition[778]: parsed url from cmdline: ""
Aug 13 09:01:50.019729 ignition[778]: no config URL provided
Aug 13 09:01:50.019739 ignition[778]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 09:01:50.019755 ignition[778]: no config at "/usr/lib/ignition/user.ign"
Aug 13 09:01:50.019978 ignition[778]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Aug 13 09:01:50.020042 ignition[778]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Aug 13 09:01:50.020123 ignition[778]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Aug 13 09:01:50.034681 ignition[778]: GET result: OK
Aug 13 09:01:50.035665 ignition[778]: parsing config with SHA512: f0f033f682551d3e891c98496453226ae2f77b511bd89536bc82eb0f810e61a56329b6d8996e17cb25aa9356badbdcf4761b99982973dded876f59550a32f5ed
Aug 13 09:01:50.042819 unknown[778]: fetched base config from "system"
Aug 13 09:01:50.043906 unknown[778]: fetched base config from "system"
Aug 13 09:01:50.044399 ignition[778]: fetch: fetch complete
Aug 13 09:01:50.043922 unknown[778]: fetched user config from "openstack"
Aug 13 09:01:50.044409 ignition[778]: fetch: fetch passed
Aug 13 09:01:50.048129 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 13 09:01:50.044482 ignition[778]: Ignition finished successfully
Aug 13 09:01:50.066305 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 09:01:50.086174 ignition[784]: Ignition 2.19.0
Aug 13 09:01:50.086192 ignition[784]: Stage: kargs
Aug 13 09:01:50.086438 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Aug 13 09:01:50.086458 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Aug 13 09:01:50.089705 ignition[784]: kargs: kargs passed
Aug 13 09:01:50.091003 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 09:01:50.089778 ignition[784]: Ignition finished successfully
Aug 13 09:01:50.097296 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 09:01:50.116873 ignition[790]: Ignition 2.19.0
Aug 13 09:01:50.116896 ignition[790]: Stage: disks
Aug 13 09:01:50.117150 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Aug 13 09:01:50.119648 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 09:01:50.117179 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Aug 13 09:01:50.121547 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 09:01:50.118280 ignition[790]: disks: disks passed
Aug 13 09:01:50.122450 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 09:01:50.118366 ignition[790]: Ignition finished successfully
Aug 13 09:01:50.123958 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 09:01:50.125466 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 09:01:50.126816 systemd[1]: Reached target basic.target - Basic System.
Aug 13 09:01:50.138316 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 09:01:50.157172 systemd-fsck[798]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Aug 13 09:01:50.161089 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 09:01:50.166168 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 09:01:50.283092 kernel: EXT4-fs (vda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none.
Aug 13 09:01:50.283899 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 09:01:50.285966 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 09:01:50.292198 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 09:01:50.297201 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 09:01:50.298350 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 09:01:50.300345 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Aug 13 09:01:50.301271 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 09:01:50.301314 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 09:01:50.319145 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (806)
Aug 13 09:01:50.319194 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 09:01:50.319223 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 09:01:50.319243 kernel: BTRFS info (device vda6): using free space tree
Aug 13 09:01:50.319262 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 09:01:50.323540 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 09:01:50.324425 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 09:01:50.335658 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 09:01:50.403742 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 09:01:50.412105 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Aug 13 09:01:50.418816 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 09:01:50.425635 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 09:01:50.528248 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 09:01:50.534233 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 09:01:50.548444 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 09:01:50.561118 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 09:01:50.582343 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 09:01:50.598083 ignition[923]: INFO : Ignition 2.19.0
Aug 13 09:01:50.598083 ignition[923]: INFO : Stage: mount
Aug 13 09:01:50.599993 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 09:01:50.599993 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Aug 13 09:01:50.599993 ignition[923]: INFO : mount: mount passed
Aug 13 09:01:50.599993 ignition[923]: INFO : Ignition finished successfully
Aug 13 09:01:50.600960 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 09:01:50.730351 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 09:01:51.601753 systemd-networkd[769]: eth0: Gained IPv6LL
Aug 13 09:01:53.109861 systemd-networkd[769]: eth0: Ignoring DHCPv6 address 2a02:1348:179:84a6:24:19ff:fee6:129a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:84a6:24:19ff:fee6:129a/64 assigned by NDisc.
Aug 13 09:01:53.109879 systemd-networkd[769]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Aug 13 09:01:57.470702 coreos-metadata[808]: Aug 13 09:01:57.470 WARN failed to locate config-drive, using the metadata service API instead
Aug 13 09:01:57.494366 coreos-metadata[808]: Aug 13 09:01:57.494 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Aug 13 09:01:57.512704 coreos-metadata[808]: Aug 13 09:01:57.512 INFO Fetch successful
Aug 13 09:01:57.514842 coreos-metadata[808]: Aug 13 09:01:57.512 INFO wrote hostname srv-cz57v.gb1.brightbox.com to /sysroot/etc/hostname
Aug 13 09:01:57.515878 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Aug 13 09:01:57.516035 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Aug 13 09:01:57.524280 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 09:01:57.540441 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 09:01:57.565136 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940)
Aug 13 09:01:57.570129 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 09:01:57.570223 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 09:01:57.573742 kernel: BTRFS info (device vda6): using free space tree
Aug 13 09:01:57.580250 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 09:01:57.582445 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 09:01:57.616255 ignition[958]: INFO : Ignition 2.19.0
Aug 13 09:01:57.617350 ignition[958]: INFO : Stage: files
Aug 13 09:01:57.617989 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 09:01:57.617989 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Aug 13 09:01:57.619777 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 09:01:57.620685 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 09:01:57.620685 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 09:01:57.623617 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 09:01:57.624624 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 09:01:57.624624 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 09:01:57.624338 unknown[958]: wrote ssh authorized keys file for user: core
Aug 13 09:01:57.627533 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Aug 13 09:01:57.627533 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Aug 13 09:01:57.871206 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 09:01:58.814976 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Aug 13 09:01:58.814976 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 09:01:58.814976 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 09:01:58.814976 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 09:01:58.814976 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 09:01:58.814976 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 09:01:58.829628 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 09:01:58.829628 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 09:01:58.829628 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 09:01:58.829628 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 09:01:58.829628 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 09:01:58.829628 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 09:01:58.829628 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 09:01:58.829628 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 09:01:58.829628 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Aug 13 09:01:59.181050 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 13 09:02:00.687292 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Aug 13 09:02:00.689637 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 13 09:02:00.689637 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 09:02:00.693120 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 09:02:00.693120 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 13 09:02:00.693120 ignition[958]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 09:02:00.693120 ignition[958]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 09:02:00.693120 ignition[958]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 09:02:00.693120 ignition[958]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 09:02:00.693120 ignition[958]: INFO : files: files passed
Aug 13 09:02:00.693120 ignition[958]: INFO : Ignition finished successfully
Aug 13 09:02:00.695007 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 09:02:00.707456 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 09:02:00.716379 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 09:02:00.721683 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 09:02:00.721843 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 09:02:00.732850 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 09:02:00.732850 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 09:02:00.736407 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 09:02:00.737703 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 09:02:00.738826 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 09:02:00.746342 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 09:02:00.789090 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 09:02:00.789274 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 09:02:00.791458 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 09:02:00.792551 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 09:02:00.794588 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 09:02:00.801302 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 09:02:00.820117 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 09:02:00.827377 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 09:02:00.842577 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 09:02:00.843490 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 09:02:00.844920 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 09:02:00.846485 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 09:02:00.846660 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 09:02:00.849286 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 09:02:00.850256 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 09:02:00.851589 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 09:02:00.853431 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 09:02:00.854843 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 09:02:00.856289 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 09:02:00.858578 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 09:02:00.859465 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 09:02:00.861064 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 09:02:00.862572 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 09:02:00.863945 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 09:02:00.864175 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 09:02:00.866215 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 09:02:00.867161 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 09:02:00.868537 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 09:02:00.868706 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 09:02:00.870024 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 09:02:00.870266 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 09:02:00.872026 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 09:02:00.872258 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 09:02:00.874014 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 09:02:00.874219 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 09:02:00.883400 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 09:02:00.886319 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 09:02:00.886996 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 09:02:00.887219 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 09:02:00.890276 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 09:02:00.890438 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 09:02:00.903539 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 09:02:00.903689 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 09:02:00.918159 ignition[1010]: INFO : Ignition 2.19.0
Aug 13 09:02:00.918159 ignition[1010]: INFO : Stage: umount
Aug 13 09:02:00.918159 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 09:02:00.918159 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Aug 13 09:02:00.926621 ignition[1010]: INFO : umount: umount passed
Aug 13 09:02:00.926621 ignition[1010]: INFO : Ignition finished successfully
Aug 13 09:02:00.920525 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 09:02:00.920721 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 09:02:00.922344 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 09:02:00.922442 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 09:02:00.923799 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 09:02:00.923867 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 09:02:00.927376 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 09:02:00.927463 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 13 09:02:00.928764 systemd[1]: Stopped target network.target - Network.
Aug 13 09:02:00.931213 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 09:02:00.931306 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 09:02:00.934254 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 09:02:00.937057 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 09:02:00.942001 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 09:02:00.943006 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 09:02:00.943637 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 09:02:00.945455 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 09:02:00.945531 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 09:02:00.946680 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 09:02:00.946749 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 09:02:00.948211 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 09:02:00.948292 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 09:02:00.949831 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 09:02:00.949911 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 09:02:00.951557 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 09:02:00.955387 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 09:02:00.956396 systemd-networkd[769]: eth0: DHCPv6 lease lost
Aug 13 09:02:00.959995 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 09:02:00.961960 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 09:02:00.962436 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 09:02:00.964051 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 09:02:00.964253 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 09:02:00.967439 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 09:02:00.967510 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 09:02:00.968539 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 09:02:00.968619 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 09:02:00.978439 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 09:02:00.979875 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 09:02:00.979992 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 09:02:00.984341 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 09:02:00.987027 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 09:02:00.987242 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 09:02:00.995697 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 09:02:00.995963 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 09:02:00.999580 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 09:02:00.999682 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 09:02:01.001407 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 09:02:01.001490 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 09:02:01.002900 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 09:02:01.002980 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 09:02:01.003912 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 09:02:01.003982 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 09:02:01.007410 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 09:02:01.007520 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 09:02:01.012783 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 09:02:01.014566 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 09:02:01.014673 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 09:02:01.015456 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 09:02:01.015524 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 09:02:01.021082 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 09:02:01.021198 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 09:02:01.022933 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 13 09:02:01.023010 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 09:02:01.026335 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 09:02:01.026422 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 09:02:01.027641 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 09:02:01.027709 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 09:02:01.029481 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 09:02:01.029553 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 09:02:01.031881 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 09:02:01.032059 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 09:02:01.035264 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 09:02:01.035432 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 09:02:01.037200 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 09:02:01.045482 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 09:02:01.057649 systemd[1]: Switching root.
Aug 13 09:02:01.094574 systemd-journald[200]: Journal stopped
Aug 13 09:02:02.531035 systemd-journald[200]: Received SIGTERM from PID 1 (systemd).
Aug 13 09:02:02.535263 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 09:02:02.535300 kernel: SELinux: policy capability open_perms=1
Aug 13 09:02:02.535321 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 09:02:02.535371 kernel: SELinux: policy capability always_check_network=0
Aug 13 09:02:02.535393 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 09:02:02.535412 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 09:02:02.535438 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 09:02:02.535457 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 09:02:02.535484 systemd[1]: Successfully loaded SELinux policy in 49.571ms.
Aug 13 09:02:02.535515 kernel: audit: type=1403 audit(1755075721.335:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 09:02:02.535538 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.636ms.
Aug 13 09:02:02.535574 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 09:02:02.535598 systemd[1]: Detected virtualization kvm.
Aug 13 09:02:02.535619 systemd[1]: Detected architecture x86-64.
Aug 13 09:02:02.535638 systemd[1]: Detected first boot.
Aug 13 09:02:02.535659 systemd[1]: Hostname set to .
Aug 13 09:02:02.535679 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 09:02:02.535700 zram_generator::config[1053]: No configuration found.
Aug 13 09:02:02.535721 systemd[1]: Populated /etc with preset unit settings.
Aug 13 09:02:02.535753 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 09:02:02.535776 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 13 09:02:02.535803 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 09:02:02.535831 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 09:02:02.535853 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 09:02:02.535873 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 09:02:02.535901 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 09:02:02.535923 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 09:02:02.535945 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 09:02:02.535979 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 09:02:02.536001 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 09:02:02.536023 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 09:02:02.536044 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 09:02:02.536078 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 09:02:02.536129 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 09:02:02.536152 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 09:02:02.536173 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 09:02:02.536209 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 09:02:02.536231 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 09:02:02.536252 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 13 09:02:02.536273 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 13 09:02:02.536293 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 13 09:02:02.536313 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 09:02:02.536346 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 09:02:02.536369 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 09:02:02.536390 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 09:02:02.536410 systemd[1]: Reached target swap.target - Swaps.
Aug 13 09:02:02.536432 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 09:02:02.536453 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 09:02:02.536493 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 09:02:02.536530 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 09:02:02.536571 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 09:02:02.536600 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 09:02:02.536623 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 09:02:02.536644 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 09:02:02.536665 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 09:02:02.536693 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 09:02:02.536714 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 09:02:02.536753 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 09:02:02.536776 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 09:02:02.536797 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 09:02:02.536818 systemd[1]: Reached target machines.target - Containers. Aug 13 09:02:02.536846 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 09:02:02.536873 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 09:02:02.536901 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 09:02:02.536923 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 09:02:02.536943 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 09:02:02.536975 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 09:02:02.536997 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 09:02:02.537017 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 09:02:02.537038 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 09:02:02.537059 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 09:02:02.538204 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 09:02:02.538232 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 09:02:02.538254 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 09:02:02.538312 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 09:02:02.538337 kernel: fuse: init (API version 7.39) Aug 13 09:02:02.538358 systemd[1]: Starting systemd-journald.service - Journal Service... 
Aug 13 09:02:02.538393 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 09:02:02.538422 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 09:02:02.538444 kernel: loop: module loaded Aug 13 09:02:02.538471 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 09:02:02.538492 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 09:02:02.538513 kernel: ACPI: bus type drm_connector registered Aug 13 09:02:02.538533 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 09:02:02.538567 systemd[1]: Stopped verity-setup.service. Aug 13 09:02:02.538590 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 09:02:02.538654 systemd-journald[1146]: Collecting audit messages is disabled. Aug 13 09:02:02.538708 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 09:02:02.538733 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 09:02:02.538754 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 09:02:02.538789 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 09:02:02.538811 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 09:02:02.538833 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 09:02:02.538853 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 09:02:02.538874 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 09:02:02.538894 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 09:02:02.538927 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Aug 13 09:02:02.538949 systemd-journald[1146]: Journal started Aug 13 09:02:02.538995 systemd-journald[1146]: Runtime Journal (/run/log/journal/8b019718f13c4fb0932d3af1d02f367f) is 4.7M, max 38.0M, 33.2M free. Aug 13 09:02:02.540168 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 09:02:02.120589 systemd[1]: Queued start job for default target multi-user.target. Aug 13 09:02:02.142215 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 13 09:02:02.142848 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 09:02:02.542171 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 09:02:02.546129 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 09:02:02.547620 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 09:02:02.547888 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 09:02:02.549029 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 09:02:02.549281 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 09:02:02.550464 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 09:02:02.550672 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 09:02:02.551988 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 09:02:02.552284 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 09:02:02.553452 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 09:02:02.554559 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 09:02:02.555873 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 09:02:02.572551 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Aug 13 09:02:02.582620 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 09:02:02.598338 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 09:02:02.599188 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 09:02:02.599247 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 09:02:02.603399 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 13 09:02:02.614831 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 09:02:02.619562 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 09:02:02.620526 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 09:02:02.630318 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 09:02:02.636192 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 09:02:02.637201 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 09:02:02.645699 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 09:02:02.647228 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 09:02:02.656349 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 09:02:02.660229 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 09:02:02.670497 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Aug 13 09:02:02.674719 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 09:02:02.688277 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 09:02:02.690497 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 09:02:02.728694 systemd-journald[1146]: Time spent on flushing to /var/log/journal/8b019718f13c4fb0932d3af1d02f367f is 82.063ms for 1140 entries. Aug 13 09:02:02.728694 systemd-journald[1146]: System Journal (/var/log/journal/8b019718f13c4fb0932d3af1d02f367f) is 8.0M, max 584.8M, 576.8M free. Aug 13 09:02:02.852942 systemd-journald[1146]: Received client request to flush runtime journal. Aug 13 09:02:02.853025 kernel: loop0: detected capacity change from 0 to 8 Aug 13 09:02:02.853080 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 09:02:02.853140 kernel: loop1: detected capacity change from 0 to 142488 Aug 13 09:02:02.742913 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 09:02:02.746499 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 09:02:02.756318 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 13 09:02:02.777466 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 09:02:02.849708 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 09:02:02.864478 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 13 09:02:02.867710 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 09:02:02.869593 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Aug 13 09:02:02.869614 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Aug 13 09:02:02.878163 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Aug 13 09:02:02.882191 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 13 09:02:02.898364 udevadm[1202]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Aug 13 09:02:02.904485 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 09:02:02.910394 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 09:02:02.931130 kernel: loop2: detected capacity change from 0 to 140768 Aug 13 09:02:02.977119 kernel: loop3: detected capacity change from 0 to 224512 Aug 13 09:02:02.986438 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 09:02:02.998126 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 09:02:03.037663 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Aug 13 09:02:03.037692 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Aug 13 09:02:03.043933 kernel: loop4: detected capacity change from 0 to 8 Aug 13 09:02:03.059878 kernel: loop5: detected capacity change from 0 to 142488 Aug 13 09:02:03.058336 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 09:02:03.090290 kernel: loop6: detected capacity change from 0 to 140768 Aug 13 09:02:03.125201 kernel: loop7: detected capacity change from 0 to 224512 Aug 13 09:02:03.166875 (sd-merge)[1214]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Aug 13 09:02:03.167800 (sd-merge)[1214]: Merged extensions into '/usr'. Aug 13 09:02:03.173791 systemd[1]: Reloading requested from client PID 1186 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 09:02:03.173981 systemd[1]: Reloading... Aug 13 09:02:03.325135 zram_generator::config[1244]: No configuration found. 
Aug 13 09:02:03.524300 ldconfig[1181]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 09:02:03.623419 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 09:02:03.692058 systemd[1]: Reloading finished in 517 ms. Aug 13 09:02:03.723964 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 09:02:03.727613 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 09:02:03.742628 systemd[1]: Starting ensure-sysext.service... Aug 13 09:02:03.748488 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 09:02:03.787739 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 09:02:03.788568 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 09:02:03.790009 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 09:02:03.791350 systemd-tmpfiles[1298]: ACLs are not supported, ignoring. Aug 13 09:02:03.791471 systemd[1]: Reloading requested from client PID 1297 ('systemctl') (unit ensure-sysext.service)... Aug 13 09:02:03.791509 systemd[1]: Reloading... Aug 13 09:02:03.791748 systemd-tmpfiles[1298]: ACLs are not supported, ignoring. Aug 13 09:02:03.798168 systemd-tmpfiles[1298]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 09:02:03.799349 systemd-tmpfiles[1298]: Skipping /boot Aug 13 09:02:03.819988 systemd-tmpfiles[1298]: Detected autofs mount point /boot during canonicalization of boot. 
Aug 13 09:02:03.820221 systemd-tmpfiles[1298]: Skipping /boot Aug 13 09:02:03.906121 zram_generator::config[1331]: No configuration found. Aug 13 09:02:04.078333 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 09:02:04.145277 systemd[1]: Reloading finished in 353 ms. Aug 13 09:02:04.167763 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 09:02:04.173742 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 09:02:04.191385 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 09:02:04.196303 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 09:02:04.207304 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 09:02:04.219311 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 09:02:04.229805 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 09:02:04.241290 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 09:02:04.259019 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 09:02:04.261255 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 09:02:04.268469 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 09:02:04.269176 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 09:02:04.274493 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Aug 13 09:02:04.279396 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 09:02:04.284422 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 09:02:04.286346 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 09:02:04.296522 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 09:02:04.298086 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 09:02:04.310502 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 09:02:04.310879 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 09:02:04.312243 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 09:02:04.312456 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 09:02:04.322049 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 09:02:04.322426 systemd-udevd[1394]: Using default interface naming scheme 'v255'. Aug 13 09:02:04.323354 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 09:02:04.333487 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 09:02:04.335036 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Aug 13 09:02:04.335900 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 09:02:04.337826 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 09:02:04.349577 systemd[1]: Finished ensure-sysext.service. Aug 13 09:02:04.360284 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 09:02:04.384176 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 09:02:04.384666 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 09:02:04.387671 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 09:02:04.389180 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 09:02:04.392452 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 09:02:04.395162 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 09:02:04.404548 augenrules[1416]: No rules Aug 13 09:02:04.409287 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 09:02:04.411207 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 09:02:04.411261 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 09:02:04.411844 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 09:02:04.419562 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 09:02:04.419823 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Aug 13 09:02:04.425182 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 09:02:04.426333 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 09:02:04.429062 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 09:02:04.436491 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 09:02:04.437422 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 09:02:04.549320 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 09:02:04.613122 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1419) Aug 13 09:02:04.648190 systemd-resolved[1393]: Positive Trust Anchors: Aug 13 09:02:04.648710 systemd-resolved[1393]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 09:02:04.648761 systemd-resolved[1393]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 09:02:04.658870 systemd-resolved[1393]: Using system hostname 'srv-cz57v.gb1.brightbox.com'. Aug 13 09:02:04.661713 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 09:02:04.663015 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 09:02:04.670608 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Aug 13 09:02:04.671323 systemd-networkd[1428]: lo: Link UP Aug 13 09:02:04.671335 systemd-networkd[1428]: lo: Gained carrier Aug 13 09:02:04.671709 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 09:02:04.676969 systemd-timesyncd[1412]: No network connectivity, watching for changes. Aug 13 09:02:04.679144 systemd-networkd[1428]: Enumeration completed Aug 13 09:02:04.680334 systemd-networkd[1428]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 09:02:04.680342 systemd-networkd[1428]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 09:02:04.680570 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 09:02:04.685386 systemd[1]: Reached target network.target - Network. Aug 13 09:02:04.690271 systemd-networkd[1428]: eth0: Link UP Aug 13 09:02:04.690392 systemd-networkd[1428]: eth0: Gained carrier Aug 13 09:02:04.690506 systemd-networkd[1428]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 09:02:04.693301 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 09:02:04.716949 systemd-networkd[1428]: eth0: DHCPv4 address 10.230.18.154/30, gateway 10.230.18.153 acquired from 10.230.18.153 Aug 13 09:02:04.720846 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection. Aug 13 09:02:04.757024 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 13 09:02:04.768214 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Aug 13 09:02:04.778097 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Aug 13 09:02:04.789102 kernel: ACPI: button: Power Button [PWRF] Aug 13 09:02:04.799775 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 09:02:04.818098 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 09:02:04.865124 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Aug 13 09:02:04.874814 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 09:02:04.875320 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Aug 13 09:02:04.875629 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 09:02:04.979635 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 09:02:05.134714 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 13 09:02:05.145554 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 13 09:02:05.182592 systemd-timesyncd[1412]: Contacted time server 85.199.214.98:123 (1.flatcar.pool.ntp.org). Aug 13 09:02:05.182703 systemd-timesyncd[1412]: Initial clock synchronization to Wed 2025-08-13 09:02:05.342710 UTC. Aug 13 09:02:05.210756 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 09:02:05.224560 lvm[1469]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 09:02:05.262779 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 13 09:02:05.264557 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 09:02:05.265393 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 09:02:05.266320 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Aug 13 09:02:05.267308 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 09:02:05.268437 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 09:02:05.269386 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 09:02:05.270184 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 09:02:05.270940 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 09:02:05.270992 systemd[1]: Reached target paths.target - Path Units. Aug 13 09:02:05.271641 systemd[1]: Reached target timers.target - Timer Units. Aug 13 09:02:05.273977 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 09:02:05.276617 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 09:02:05.283112 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 09:02:05.285675 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 13 09:02:05.287238 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 09:02:05.288100 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 09:02:05.288762 systemd[1]: Reached target basic.target - Basic System. Aug 13 09:02:05.289498 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 09:02:05.289547 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 09:02:05.298228 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 09:02:05.301839 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 09:02:05.306224 lvm[1474]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 13 09:02:05.315979 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 09:02:05.320225 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 09:02:05.325287 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 09:02:05.327178 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 09:02:05.333300 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 09:02:05.346211 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 09:02:05.354287 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 09:02:05.362626 jq[1478]: false Aug 13 09:02:05.369380 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 09:02:05.376472 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 09:02:05.379121 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 09:02:05.379822 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 09:02:05.387407 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 09:02:05.392223 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 09:02:05.394720 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 13 09:02:05.402766 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 09:02:05.403125 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 13 09:02:05.405368 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 09:02:05.405614 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 09:02:05.419109 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 09:02:05.419382 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 09:02:05.428704 dbus-daemon[1477]: [system] SELinux support is enabled Aug 13 09:02:05.428943 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 09:02:05.435254 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 09:02:05.443803 update_engine[1493]: I20250813 09:02:05.440281 1493 main.cc:92] Flatcar Update Engine starting Aug 13 09:02:05.435302 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 09:02:05.446341 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 09:02:05.446372 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 09:02:05.453781 jq[1494]: true Aug 13 09:02:05.466137 dbus-daemon[1477]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1428 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 13 09:02:05.477654 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Aug 13 09:02:05.483559 systemd[1]: Started update-engine.service - Update Engine. Aug 13 09:02:05.495326 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Aug 13 09:02:05.496605 extend-filesystems[1479]: Found loop4 Aug 13 09:02:05.499192 extend-filesystems[1479]: Found loop5 Aug 13 09:02:05.499192 extend-filesystems[1479]: Found loop6 Aug 13 09:02:05.499192 extend-filesystems[1479]: Found loop7 Aug 13 09:02:05.499192 extend-filesystems[1479]: Found vda Aug 13 09:02:05.499192 extend-filesystems[1479]: Found vda1 Aug 13 09:02:05.499192 extend-filesystems[1479]: Found vda2 Aug 13 09:02:05.499192 extend-filesystems[1479]: Found vda3 Aug 13 09:02:05.499192 extend-filesystems[1479]: Found usr Aug 13 09:02:05.499192 extend-filesystems[1479]: Found vda4 Aug 13 09:02:05.499192 extend-filesystems[1479]: Found vda6 Aug 13 09:02:05.499192 extend-filesystems[1479]: Found vda7 Aug 13 09:02:05.499192 extend-filesystems[1479]: Found vda9 Aug 13 09:02:05.499192 extend-filesystems[1479]: Checking size of /dev/vda9 Aug 13 09:02:05.572794 tar[1498]: linux-amd64/LICENSE Aug 13 09:02:05.572794 tar[1498]: linux-amd64/helm Aug 13 09:02:05.576714 jq[1507]: true Aug 13 09:02:05.500548 (ntainerd)[1509]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 09:02:05.582715 extend-filesystems[1479]: Resized partition /dev/vda9 Aug 13 09:02:05.602249 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1446) Aug 13 09:02:05.602359 update_engine[1493]: I20250813 09:02:05.499351 1493 update_check_scheduler.cc:74] Next update check in 2m27s Aug 13 09:02:05.527630 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Aug 13 09:02:05.602788 extend-filesystems[1521]: resize2fs 1.47.1 (20-May-2024)
Aug 13 09:02:05.622992 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Aug 13 09:02:05.714999 systemd-logind[1492]: Watching system buttons on /dev/input/event2 (Power Button)
Aug 13 09:02:05.715056 systemd-logind[1492]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 13 09:02:05.720534 systemd-logind[1492]: New seat seat0.
Aug 13 09:02:05.721956 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 13 09:02:05.805792 bash[1536]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 09:02:05.809014 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 13 09:02:05.827484 systemd[1]: Starting sshkeys.service...
Aug 13 09:02:05.833362 dbus-daemon[1477]: [system] Successfully activated service 'org.freedesktop.hostname1'
Aug 13 09:02:05.834520 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Aug 13 09:02:05.838913 dbus-daemon[1477]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1511 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Aug 13 09:02:05.850427 systemd[1]: Starting polkit.service - Authorization Manager...
Aug 13 09:02:05.872106 locksmithd[1512]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 13 09:02:05.895020 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Aug 13 09:02:05.903533 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Aug 13 09:02:05.919140 polkitd[1549]: Started polkitd version 121
Aug 13 09:02:05.941936 polkitd[1549]: Loading rules from directory /etc/polkit-1/rules.d
Aug 13 09:02:05.942035 polkitd[1549]: Loading rules from directory /usr/share/polkit-1/rules.d
Aug 13 09:02:05.951564 polkitd[1549]: Finished loading, compiling and executing 2 rules
Aug 13 09:02:05.958316 dbus-daemon[1477]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Aug 13 09:02:05.958685 systemd[1]: Started polkit.service - Authorization Manager.
Aug 13 09:02:05.961744 polkitd[1549]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Aug 13 09:02:06.005607 systemd-hostnamed[1511]: Hostname set to (static)
Aug 13 09:02:06.015113 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Aug 13 09:02:06.041790 extend-filesystems[1521]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Aug 13 09:02:06.041790 extend-filesystems[1521]: old_desc_blocks = 1, new_desc_blocks = 8
Aug 13 09:02:06.041790 extend-filesystems[1521]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Aug 13 09:02:06.053349 extend-filesystems[1479]: Resized filesystem in /dev/vda9
Aug 13 09:02:06.048361 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 09:02:06.048680 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 13 09:02:06.065518 systemd-networkd[1428]: eth0: Gained IPv6LL
Aug 13 09:02:06.071268 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 13 09:02:06.074994 systemd[1]: Reached target network-online.target - Network is Online.
Aug 13 09:02:06.087527 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 09:02:06.098798 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 13 09:02:06.184768 containerd[1509]: time="2025-08-13T09:02:06.184589923Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Aug 13 09:02:06.228604 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 13 09:02:06.292696 containerd[1509]: time="2025-08-13T09:02:06.291793942Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 13 09:02:06.301154 containerd[1509]: time="2025-08-13T09:02:06.301030016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 13 09:02:06.301154 containerd[1509]: time="2025-08-13T09:02:06.301124485Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 13 09:02:06.301154 containerd[1509]: time="2025-08-13T09:02:06.301156322Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 13 09:02:06.302257 containerd[1509]: time="2025-08-13T09:02:06.301438419Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 13 09:02:06.302257 containerd[1509]: time="2025-08-13T09:02:06.301466821Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 13 09:02:06.302257 containerd[1509]: time="2025-08-13T09:02:06.301606505Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 09:02:06.302257 containerd[1509]: time="2025-08-13T09:02:06.301630356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 13 09:02:06.302257 containerd[1509]: time="2025-08-13T09:02:06.301868085Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 09:02:06.302257 containerd[1509]: time="2025-08-13T09:02:06.301895381Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 13 09:02:06.302257 containerd[1509]: time="2025-08-13T09:02:06.301915997Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 09:02:06.302257 containerd[1509]: time="2025-08-13T09:02:06.301933199Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 13 09:02:06.303292 containerd[1509]: time="2025-08-13T09:02:06.302075974Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 13 09:02:06.304908 containerd[1509]: time="2025-08-13T09:02:06.303696977Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 13 09:02:06.304908 containerd[1509]: time="2025-08-13T09:02:06.303857243Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 13 09:02:06.304908 containerd[1509]: time="2025-08-13T09:02:06.303883229Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 13 09:02:06.304908 containerd[1509]: time="2025-08-13T09:02:06.304044208Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 13 09:02:06.307130 containerd[1509]: time="2025-08-13T09:02:06.305713979Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 09:02:06.311704 containerd[1509]: time="2025-08-13T09:02:06.311652722Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 13 09:02:06.311845 containerd[1509]: time="2025-08-13T09:02:06.311753590Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 13 09:02:06.311845 containerd[1509]: time="2025-08-13T09:02:06.311785500Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 13 09:02:06.311845 containerd[1509]: time="2025-08-13T09:02:06.311811271Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 13 09:02:06.311964 containerd[1509]: time="2025-08-13T09:02:06.311874146Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 13 09:02:06.316107 containerd[1509]: time="2025-08-13T09:02:06.314748674Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 13 09:02:06.323110 containerd[1509]: time="2025-08-13T09:02:06.321555669Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 13 09:02:06.323110 containerd[1509]: time="2025-08-13T09:02:06.321853539Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 13 09:02:06.323110 containerd[1509]: time="2025-08-13T09:02:06.321882569Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 13 09:02:06.323110 containerd[1509]: time="2025-08-13T09:02:06.321903668Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 13 09:02:06.323110 containerd[1509]: time="2025-08-13T09:02:06.321926198Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 13 09:02:06.323110 containerd[1509]: time="2025-08-13T09:02:06.321946753Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 13 09:02:06.323110 containerd[1509]: time="2025-08-13T09:02:06.321966172Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 13 09:02:06.323110 containerd[1509]: time="2025-08-13T09:02:06.321989346Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 13 09:02:06.323110 containerd[1509]: time="2025-08-13T09:02:06.322018743Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 13 09:02:06.323110 containerd[1509]: time="2025-08-13T09:02:06.322041122Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 13 09:02:06.323110 containerd[1509]: time="2025-08-13T09:02:06.322060853Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 13 09:02:06.323110 containerd[1509]: time="2025-08-13T09:02:06.322079662Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 13 09:02:06.323110 containerd[1509]: time="2025-08-13T09:02:06.322146982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 13 09:02:06.323110 containerd[1509]: time="2025-08-13T09:02:06.322171972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 13 09:02:06.323683 containerd[1509]: time="2025-08-13T09:02:06.322191474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 13 09:02:06.323683 containerd[1509]: time="2025-08-13T09:02:06.322213079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 13 09:02:06.323683 containerd[1509]: time="2025-08-13T09:02:06.322234953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 13 09:02:06.323683 containerd[1509]: time="2025-08-13T09:02:06.322256741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 13 09:02:06.323683 containerd[1509]: time="2025-08-13T09:02:06.322276018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 13 09:02:06.323683 containerd[1509]: time="2025-08-13T09:02:06.322296293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 13 09:02:06.323683 containerd[1509]: time="2025-08-13T09:02:06.322317229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 13 09:02:06.323683 containerd[1509]: time="2025-08-13T09:02:06.322354937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 13 09:02:06.323683 containerd[1509]: time="2025-08-13T09:02:06.322402831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 13 09:02:06.323683 containerd[1509]: time="2025-08-13T09:02:06.322426208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 13 09:02:06.323683 containerd[1509]: time="2025-08-13T09:02:06.322451082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 13 09:02:06.323683 containerd[1509]: time="2025-08-13T09:02:06.322495620Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 13 09:02:06.323683 containerd[1509]: time="2025-08-13T09:02:06.322543936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 13 09:02:06.323683 containerd[1509]: time="2025-08-13T09:02:06.322579836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 13 09:02:06.323683 containerd[1509]: time="2025-08-13T09:02:06.322600959Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 13 09:02:06.324217 containerd[1509]: time="2025-08-13T09:02:06.322711430Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 13 09:02:06.324217 containerd[1509]: time="2025-08-13T09:02:06.322749890Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 13 09:02:06.324217 containerd[1509]: time="2025-08-13T09:02:06.322782624Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 13 09:02:06.324217 containerd[1509]: time="2025-08-13T09:02:06.322804302Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 13 09:02:06.324217 containerd[1509]: time="2025-08-13T09:02:06.322821638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 13 09:02:06.324217 containerd[1509]: time="2025-08-13T09:02:06.322850178Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 13 09:02:06.324217 containerd[1509]: time="2025-08-13T09:02:06.322875820Z" level=info msg="NRI interface is disabled by configuration."
Aug 13 09:02:06.324217 containerd[1509]: time="2025-08-13T09:02:06.322898952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 13 09:02:06.329861 containerd[1509]: time="2025-08-13T09:02:06.328451248Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 13 09:02:06.329861 containerd[1509]: time="2025-08-13T09:02:06.328585781Z" level=info msg="Connect containerd service"
Aug 13 09:02:06.329861 containerd[1509]: time="2025-08-13T09:02:06.328663196Z" level=info msg="using legacy CRI server"
Aug 13 09:02:06.329861 containerd[1509]: time="2025-08-13T09:02:06.328681696Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 13 09:02:06.329861 containerd[1509]: time="2025-08-13T09:02:06.328961885Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 13 09:02:06.332501 containerd[1509]: time="2025-08-13T09:02:06.332460596Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 09:02:06.335130 containerd[1509]: time="2025-08-13T09:02:06.332820108Z" level=info msg="Start subscribing containerd event"
Aug 13 09:02:06.335130 containerd[1509]: time="2025-08-13T09:02:06.332936753Z" level=info msg="Start recovering state"
Aug 13 09:02:06.335130 containerd[1509]: time="2025-08-13T09:02:06.333063359Z" level=info msg="Start event monitor"
Aug 13 09:02:06.335130 containerd[1509]: time="2025-08-13T09:02:06.333122074Z" level=info msg="Start snapshots syncer"
Aug 13 09:02:06.335130 containerd[1509]: time="2025-08-13T09:02:06.333148270Z" level=info msg="Start cni network conf syncer for default"
Aug 13 09:02:06.335130 containerd[1509]: time="2025-08-13T09:02:06.333688570Z" level=info msg="Start streaming server"
Aug 13 09:02:06.335755 containerd[1509]: time="2025-08-13T09:02:06.335561236Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 13 09:02:06.335755 containerd[1509]: time="2025-08-13T09:02:06.335677341Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 13 09:02:06.336118 systemd[1]: Started containerd.service - containerd container runtime.
Aug 13 09:02:06.339052 containerd[1509]: time="2025-08-13T09:02:06.338520319Z" level=info msg="containerd successfully booted in 0.159917s"
Aug 13 09:02:06.443938 sshd_keygen[1508]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 13 09:02:06.485418 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 13 09:02:06.497640 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 13 09:02:06.501324 systemd[1]: Started sshd@0-10.230.18.154:22-139.178.68.195:50966.service - OpenSSH per-connection server daemon (139.178.68.195:50966).
Aug 13 09:02:06.539888 systemd[1]: issuegen.service: Deactivated successfully.
Aug 13 09:02:06.540240 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 13 09:02:06.550661 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 13 09:02:06.574707 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 13 09:02:06.584758 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 13 09:02:06.597685 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Aug 13 09:02:06.598865 systemd[1]: Reached target getty.target - Login Prompts.
Aug 13 09:02:06.758217 tar[1498]: linux-amd64/README.md
Aug 13 09:02:06.775203 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Aug 13 09:02:07.387353 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 09:02:07.406928 (kubelet)[1604]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 09:02:07.431686 sshd[1586]: Accepted publickey for core from 139.178.68.195 port 50966 ssh2: RSA SHA256:YySM3kCv0HYbBvOmLDWWldq/i0LyCPP8OWutKnE8JMs
Aug 13 09:02:07.438259 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 09:02:07.457500 systemd-logind[1492]: New session 1 of user core.
Aug 13 09:02:07.460989 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 13 09:02:07.470169 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 13 09:02:07.493982 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 13 09:02:07.504847 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 13 09:02:07.521495 (systemd)[1607]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 13 09:02:07.580327 systemd-networkd[1428]: eth0: Ignoring DHCPv6 address 2a02:1348:179:84a6:24:19ff:fee6:129a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:84a6:24:19ff:fee6:129a/64 assigned by NDisc.
Aug 13 09:02:07.580341 systemd-networkd[1428]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Aug 13 09:02:07.686067 systemd[1607]: Queued start job for default target default.target.
Aug 13 09:02:07.690969 systemd[1607]: Created slice app.slice - User Application Slice.
Aug 13 09:02:07.691015 systemd[1607]: Reached target paths.target - Paths.
Aug 13 09:02:07.691039 systemd[1607]: Reached target timers.target - Timers.
Aug 13 09:02:07.695247 systemd[1607]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 13 09:02:07.711049 systemd[1607]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 13 09:02:07.711981 systemd[1607]: Reached target sockets.target - Sockets.
Aug 13 09:02:07.712009 systemd[1607]: Reached target basic.target - Basic System.
Aug 13 09:02:07.712106 systemd[1607]: Reached target default.target - Main User Target.
Aug 13 09:02:07.712177 systemd[1607]: Startup finished in 178ms.
Aug 13 09:02:07.713001 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 13 09:02:07.728497 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 13 09:02:08.089837 kubelet[1604]: E0813 09:02:08.089671 1604 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 09:02:08.092919 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 09:02:08.093542 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 09:02:08.094346 systemd[1]: kubelet.service: Consumed 1.083s CPU time.
Aug 13 09:02:08.388632 systemd[1]: Started sshd@1-10.230.18.154:22-139.178.68.195:50976.service - OpenSSH per-connection server daemon (139.178.68.195:50976).
Aug 13 09:02:09.295531 sshd[1625]: Accepted publickey for core from 139.178.68.195 port 50976 ssh2: RSA SHA256:YySM3kCv0HYbBvOmLDWWldq/i0LyCPP8OWutKnE8JMs
Aug 13 09:02:09.298703 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 09:02:09.306989 systemd-logind[1492]: New session 2 of user core.
Aug 13 09:02:09.318484 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 13 09:02:09.928324 sshd[1625]: pam_unix(sshd:session): session closed for user core
Aug 13 09:02:09.932341 systemd[1]: sshd@1-10.230.18.154:22-139.178.68.195:50976.service: Deactivated successfully.
Aug 13 09:02:09.934625 systemd[1]: session-2.scope: Deactivated successfully.
Aug 13 09:02:09.937011 systemd-logind[1492]: Session 2 logged out. Waiting for processes to exit.
Aug 13 09:02:09.938474 systemd-logind[1492]: Removed session 2.
Aug 13 09:02:10.092663 systemd[1]: Started sshd@2-10.230.18.154:22-139.178.68.195:47366.service - OpenSSH per-connection server daemon (139.178.68.195:47366).
Aug 13 09:02:11.007508 sshd[1634]: Accepted publickey for core from 139.178.68.195 port 47366 ssh2: RSA SHA256:YySM3kCv0HYbBvOmLDWWldq/i0LyCPP8OWutKnE8JMs
Aug 13 09:02:11.009861 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 09:02:11.016788 systemd-logind[1492]: New session 3 of user core.
Aug 13 09:02:11.025462 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 13 09:02:11.638722 sshd[1634]: pam_unix(sshd:session): session closed for user core
Aug 13 09:02:11.645260 systemd-logind[1492]: Session 3 logged out. Waiting for processes to exit.
Aug 13 09:02:11.646152 systemd[1]: sshd@2-10.230.18.154:22-139.178.68.195:47366.service: Deactivated successfully.
Aug 13 09:02:11.649471 systemd[1]: session-3.scope: Deactivated successfully.
Aug 13 09:02:11.651171 systemd-logind[1492]: Removed session 3.
Aug 13 09:02:11.717430 login[1593]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Aug 13 09:02:11.718864 login[1594]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Aug 13 09:02:11.725717 systemd-logind[1492]: New session 4 of user core.
Aug 13 09:02:11.737479 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 13 09:02:11.741471 systemd-logind[1492]: New session 5 of user core.
Aug 13 09:02:11.751618 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 13 09:02:12.447729 coreos-metadata[1476]: Aug 13 09:02:12.447 WARN failed to locate config-drive, using the metadata service API instead
Aug 13 09:02:12.484738 coreos-metadata[1476]: Aug 13 09:02:12.484 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Aug 13 09:02:12.492733 coreos-metadata[1476]: Aug 13 09:02:12.492 INFO Fetch failed with 404: resource not found
Aug 13 09:02:12.492733 coreos-metadata[1476]: Aug 13 09:02:12.492 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Aug 13 09:02:12.493580 coreos-metadata[1476]: Aug 13 09:02:12.493 INFO Fetch successful
Aug 13 09:02:12.493703 coreos-metadata[1476]: Aug 13 09:02:12.493 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Aug 13 09:02:12.514778 coreos-metadata[1476]: Aug 13 09:02:12.514 INFO Fetch successful
Aug 13 09:02:12.515024 coreos-metadata[1476]: Aug 13 09:02:12.514 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Aug 13 09:02:12.531682 coreos-metadata[1476]: Aug 13 09:02:12.531 INFO Fetch successful
Aug 13 09:02:12.531682 coreos-metadata[1476]: Aug 13 09:02:12.531 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Aug 13 09:02:12.547154 coreos-metadata[1476]: Aug 13 09:02:12.547 INFO Fetch successful
Aug 13 09:02:12.547278 coreos-metadata[1476]: Aug 13 09:02:12.547 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Aug 13 09:02:12.569236 coreos-metadata[1476]: Aug 13 09:02:12.569 INFO Fetch successful
Aug 13 09:02:12.599910 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Aug 13 09:02:12.601851 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 13 09:02:13.042258 coreos-metadata[1550]: Aug 13 09:02:13.042 WARN failed to locate config-drive, using the metadata service API instead
Aug 13 09:02:13.065124 coreos-metadata[1550]: Aug 13 09:02:13.065 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Aug 13 09:02:13.092816 coreos-metadata[1550]: Aug 13 09:02:13.092 INFO Fetch successful
Aug 13 09:02:13.092816 coreos-metadata[1550]: Aug 13 09:02:13.092 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Aug 13 09:02:13.136334 coreos-metadata[1550]: Aug 13 09:02:13.136 INFO Fetch successful
Aug 13 09:02:13.139469 unknown[1550]: wrote ssh authorized keys file for user: core
Aug 13 09:02:13.168244 update-ssh-keys[1676]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 09:02:13.168810 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Aug 13 09:02:13.172123 systemd[1]: Finished sshkeys.service.
Aug 13 09:02:13.173400 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 13 09:02:13.179188 systemd[1]: Startup finished in 1.338s (kernel) + 14.584s (initrd) + 11.892s (userspace) = 27.816s.
Aug 13 09:02:18.343969 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 13 09:02:18.361508 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 09:02:18.524292 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 09:02:18.531058 (kubelet)[1687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 09:02:18.645120 kubelet[1687]: E0813 09:02:18.644897 1687 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 09:02:18.648769 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 09:02:18.649049 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 09:02:21.860603 systemd[1]: Started sshd@3-10.230.18.154:22-139.178.68.195:38934.service - OpenSSH per-connection server daemon (139.178.68.195:38934).
Aug 13 09:02:22.756892 sshd[1695]: Accepted publickey for core from 139.178.68.195 port 38934 ssh2: RSA SHA256:YySM3kCv0HYbBvOmLDWWldq/i0LyCPP8OWutKnE8JMs
Aug 13 09:02:22.758950 sshd[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 09:02:22.765132 systemd-logind[1492]: New session 6 of user core.
Aug 13 09:02:22.771300 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 13 09:02:23.382956 sshd[1695]: pam_unix(sshd:session): session closed for user core
Aug 13 09:02:23.387513 systemd[1]: sshd@3-10.230.18.154:22-139.178.68.195:38934.service: Deactivated successfully.
Aug 13 09:02:23.389661 systemd[1]: session-6.scope: Deactivated successfully.
Aug 13 09:02:23.390525 systemd-logind[1492]: Session 6 logged out. Waiting for processes to exit.
Aug 13 09:02:23.392892 systemd-logind[1492]: Removed session 6.
Aug 13 09:02:23.555812 systemd[1]: Started sshd@4-10.230.18.154:22-139.178.68.195:38938.service - OpenSSH per-connection server daemon (139.178.68.195:38938).
Aug 13 09:02:24.510901 sshd[1702]: Accepted publickey for core from 139.178.68.195 port 38938 ssh2: RSA SHA256:YySM3kCv0HYbBvOmLDWWldq/i0LyCPP8OWutKnE8JMs
Aug 13 09:02:24.512912 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 09:02:24.520359 systemd-logind[1492]: New session 7 of user core.
Aug 13 09:02:24.531282 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 13 09:02:25.167330 sshd[1702]: pam_unix(sshd:session): session closed for user core
Aug 13 09:02:25.171935 systemd-logind[1492]: Session 7 logged out. Waiting for processes to exit.
Aug 13 09:02:25.173595 systemd[1]: sshd@4-10.230.18.154:22-139.178.68.195:38938.service: Deactivated successfully.
Aug 13 09:02:25.175828 systemd[1]: session-7.scope: Deactivated successfully.
Aug 13 09:02:25.177984 systemd-logind[1492]: Removed session 7.
Aug 13 09:02:25.326689 systemd[1]: Started sshd@5-10.230.18.154:22-139.178.68.195:38946.service - OpenSSH per-connection server daemon (139.178.68.195:38946).
Aug 13 09:02:26.217170 sshd[1709]: Accepted publickey for core from 139.178.68.195 port 38946 ssh2: RSA SHA256:YySM3kCv0HYbBvOmLDWWldq/i0LyCPP8OWutKnE8JMs
Aug 13 09:02:26.219217 sshd[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 09:02:26.227918 systemd-logind[1492]: New session 8 of user core.
Aug 13 09:02:26.234496 systemd[1]: Started session-8.scope - Session 8 of User core.
Aug 13 09:02:26.843452 sshd[1709]: pam_unix(sshd:session): session closed for user core
Aug 13 09:02:26.847251 systemd[1]: sshd@5-10.230.18.154:22-139.178.68.195:38946.service: Deactivated successfully.
Aug 13 09:02:26.849362 systemd[1]: session-8.scope: Deactivated successfully.
Aug 13 09:02:26.851293 systemd-logind[1492]: Session 8 logged out. Waiting for processes to exit.
Aug 13 09:02:26.852672 systemd-logind[1492]: Removed session 8.
Aug 13 09:02:26.998032 systemd[1]: Started sshd@6-10.230.18.154:22-139.178.68.195:38960.service - OpenSSH per-connection server daemon (139.178.68.195:38960).
Aug 13 09:02:27.909947 sshd[1716]: Accepted publickey for core from 139.178.68.195 port 38960 ssh2: RSA SHA256:YySM3kCv0HYbBvOmLDWWldq/i0LyCPP8OWutKnE8JMs
Aug 13 09:02:27.912438 sshd[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 09:02:27.919252 systemd-logind[1492]: New session 9 of user core.
Aug 13 09:02:27.927412 systemd[1]: Started session-9.scope - Session 9 of User core.
Aug 13 09:02:28.407539 sudo[1719]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 13 09:02:28.408176 sudo[1719]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 09:02:28.426641 sudo[1719]: pam_unix(sudo:session): session closed for user root
Aug 13 09:02:28.572369 sshd[1716]: pam_unix(sshd:session): session closed for user core
Aug 13 09:02:28.576358 systemd[1]: sshd@6-10.230.18.154:22-139.178.68.195:38960.service: Deactivated successfully.
Aug 13 09:02:28.578564 systemd[1]: session-9.scope: Deactivated successfully.
Aug 13 09:02:28.580828 systemd-logind[1492]: Session 9 logged out. Waiting for processes to exit.
Aug 13 09:02:28.582173 systemd-logind[1492]: Removed session 9.
Aug 13 09:02:28.724699 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 13 09:02:28.733410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 09:02:28.736861 systemd[1]: Started sshd@7-10.230.18.154:22-139.178.68.195:38974.service - OpenSSH per-connection server daemon (139.178.68.195:38974).
Aug 13 09:02:28.875761 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 09:02:28.892621 (kubelet)[1734]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 09:02:28.996345 kubelet[1734]: E0813 09:02:28.996155 1734 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 09:02:28.999379 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 09:02:28.999635 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 09:02:29.628723 sshd[1725]: Accepted publickey for core from 139.178.68.195 port 38974 ssh2: RSA SHA256:YySM3kCv0HYbBvOmLDWWldq/i0LyCPP8OWutKnE8JMs
Aug 13 09:02:29.630775 sshd[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 09:02:29.637138 systemd-logind[1492]: New session 10 of user core.
Aug 13 09:02:29.653562 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug 13 09:02:30.109471 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 13 09:02:30.109955 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 09:02:30.114913 sudo[1744]: pam_unix(sudo:session): session closed for user root
Aug 13 09:02:30.122612 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Aug 13 09:02:30.123049 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 09:02:30.144496 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Aug 13 09:02:30.146514 auditctl[1747]: No rules
Aug 13 09:02:30.147731 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 09:02:30.148018 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Aug 13 09:02:30.151423 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 13 09:02:30.191142 augenrules[1765]: No rules
Aug 13 09:02:30.191960 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 13 09:02:30.193124 sudo[1743]: pam_unix(sudo:session): session closed for user root
Aug 13 09:02:30.338620 sshd[1725]: pam_unix(sshd:session): session closed for user core
Aug 13 09:02:30.342969 systemd[1]: sshd@7-10.230.18.154:22-139.178.68.195:38974.service: Deactivated successfully.
Aug 13 09:02:30.345458 systemd[1]: session-10.scope: Deactivated successfully.
Aug 13 09:02:30.347679 systemd-logind[1492]: Session 10 logged out. Waiting for processes to exit.
Aug 13 09:02:30.348974 systemd-logind[1492]: Removed session 10.
Aug 13 09:02:30.501492 systemd[1]: Started sshd@8-10.230.18.154:22-139.178.68.195:38202.service - OpenSSH per-connection server daemon (139.178.68.195:38202).
Aug 13 09:02:31.390883 sshd[1773]: Accepted publickey for core from 139.178.68.195 port 38202 ssh2: RSA SHA256:YySM3kCv0HYbBvOmLDWWldq/i0LyCPP8OWutKnE8JMs
Aug 13 09:02:31.392834 sshd[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 09:02:31.399161 systemd-logind[1492]: New session 11 of user core.
Aug 13 09:02:31.406335 systemd[1]: Started session-11.scope - Session 11 of User core.
Aug 13 09:02:31.871069 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 13 09:02:31.871593 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 09:02:32.335476 systemd[1]: Starting docker.service - Docker Application Container Engine...
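The audit-rules restart above ends with `augenrules` reporting "No rules": the two files under /etc/audit/rules.d/ were just deleted, and augenrules merges whatever *.rules files remain before loading the result. A minimal simulation of that merge step in a temp directory (hedged: the real augenrules also sorts and de-duplicates the files and loads the composite set with `auditctl -R`):

```shell
# Simulate the augenrules merge after 80-selinux.rules and
# 99-default.rules were removed: concatenate *.rules from an (empty)
# rules.d directory into the composite audit.rules file.
d="$(mktemp -d)"
mkdir "$d/rules.d"
cat "$d"/rules.d/*.rules > "$d/audit.rules" 2>/dev/null
[ -s "$d/audit.rules" ] || echo "No rules"
```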
Aug 13 09:02:32.335653 (dockerd)[1791]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 13 09:02:32.764219 dockerd[1791]: time="2025-08-13T09:02:32.763468041Z" level=info msg="Starting up"
Aug 13 09:02:32.900628 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3589856822-merged.mount: Deactivated successfully.
Aug 13 09:02:32.911924 systemd[1]: var-lib-docker-metacopy\x2dcheck2834825658-merged.mount: Deactivated successfully.
Aug 13 09:02:32.934086 dockerd[1791]: time="2025-08-13T09:02:32.933389364Z" level=info msg="Loading containers: start."
Aug 13 09:02:33.077760 kernel: Initializing XFRM netlink socket
Aug 13 09:02:33.192229 systemd-networkd[1428]: docker0: Link UP
Aug 13 09:02:33.224591 dockerd[1791]: time="2025-08-13T09:02:33.224397393Z" level=info msg="Loading containers: done."
Aug 13 09:02:33.244806 dockerd[1791]: time="2025-08-13T09:02:33.244738074Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 13 09:02:33.245027 dockerd[1791]: time="2025-08-13T09:02:33.244925984Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Aug 13 09:02:33.245225 dockerd[1791]: time="2025-08-13T09:02:33.245182449Z" level=info msg="Daemon has completed initialization"
Aug 13 09:02:33.288014 dockerd[1791]: time="2025-08-13T09:02:33.286996650Z" level=info msg="API listen on /run/docker.sock"
Aug 13 09:02:33.287161 systemd[1]: Started docker.service - Docker Application Container Engine.
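The overlay2 warning during the docker startup above is driven by a kernel build option: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, dockerd avoids the native overlay diff path and falls back to the slower naive diff when building images. A sketch of how one might check for that option (simulated with a sample config line here; on a real host the input would be /boot/config-$(uname -r) or `zcat /proc/config.gz`):

```shell
# Simulated kernel config; a live host would read /boot/config-$(uname -r).
kconf="$(mktemp)"
printf 'CONFIG_OVERLAY_FS_REDIRECT_DIR=y\n' > "$kconf"

if grep -q '^CONFIG_OVERLAY_FS_REDIRECT_DIR=y' "$kconf"; then
  echo "redirect_dir on: expect the 'Not using native diff' warning"
fi
```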
Aug 13 09:02:34.116013 containerd[1509]: time="2025-08-13T09:02:34.115891643Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\""
Aug 13 09:02:34.915271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1932546554.mount: Deactivated successfully.
Aug 13 09:02:36.954705 containerd[1509]: time="2025-08-13T09:02:36.954507598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:02:36.956324 containerd[1509]: time="2025-08-13T09:02:36.956270492Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682887"
Aug 13 09:02:36.957157 containerd[1509]: time="2025-08-13T09:02:36.957096730Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:02:36.961275 containerd[1509]: time="2025-08-13T09:02:36.961195957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:02:36.963244 containerd[1509]: time="2025-08-13T09:02:36.962832663Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 2.846822192s"
Aug 13 09:02:36.963244 containerd[1509]: time="2025-08-13T09:02:36.962905030Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\""
Aug 13 09:02:36.964983 containerd[1509]: time="2025-08-13T09:02:36.964948616Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\""
Aug 13 09:02:37.611169 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Aug 13 09:02:39.085395 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Aug 13 09:02:39.098197 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 09:02:39.322330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 09:02:39.329614 (kubelet)[2007]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 09:02:39.381242 containerd[1509]: time="2025-08-13T09:02:39.379535908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:02:39.384451 containerd[1509]: time="2025-08-13T09:02:39.384167530Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779597"
Aug 13 09:02:39.388105 containerd[1509]: time="2025-08-13T09:02:39.385909066Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:02:39.394503 containerd[1509]: time="2025-08-13T09:02:39.394433637Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 2.429308317s"
Aug 13 09:02:39.394914 containerd[1509]: time="2025-08-13T09:02:39.394846039Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\""
Aug 13 09:02:39.395444 containerd[1509]: time="2025-08-13T09:02:39.394755862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:02:39.397648 containerd[1509]: time="2025-08-13T09:02:39.397613096Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\""
Aug 13 09:02:39.402759 kubelet[2007]: E0813 09:02:39.402679 2007 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 09:02:39.406273 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 09:02:39.406714 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 09:02:41.773340 containerd[1509]: time="2025-08-13T09:02:41.773209263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:02:41.775436 containerd[1509]: time="2025-08-13T09:02:41.775366582Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169946"
Aug 13 09:02:41.776565 containerd[1509]: time="2025-08-13T09:02:41.776505354Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:02:41.780455 containerd[1509]: time="2025-08-13T09:02:41.780417459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:02:41.782661 containerd[1509]: time="2025-08-13T09:02:41.782311720Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 2.384495146s"
Aug 13 09:02:41.782661 containerd[1509]: time="2025-08-13T09:02:41.782356941Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\""
Aug 13 09:02:41.783582 containerd[1509]: time="2025-08-13T09:02:41.783547440Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\""
Aug 13 09:02:43.934239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3658933769.mount: Deactivated successfully.
Aug 13 09:02:44.680735 containerd[1509]: time="2025-08-13T09:02:44.680579858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:02:44.682708 containerd[1509]: time="2025-08-13T09:02:44.682638280Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917864"
Aug 13 09:02:44.684146 containerd[1509]: time="2025-08-13T09:02:44.684041505Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:02:44.689185 containerd[1509]: time="2025-08-13T09:02:44.689055812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:02:44.690781 containerd[1509]: time="2025-08-13T09:02:44.690191414Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 2.906590854s"
Aug 13 09:02:44.690781 containerd[1509]: time="2025-08-13T09:02:44.690284961Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\""
Aug 13 09:02:44.692150 containerd[1509]: time="2025-08-13T09:02:44.692118591Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 13 09:02:45.623846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3290266409.mount: Deactivated successfully.
Aug 13 09:02:46.931948 containerd[1509]: time="2025-08-13T09:02:46.931755031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:02:46.933464 containerd[1509]: time="2025-08-13T09:02:46.933353479Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249"
Aug 13 09:02:46.934308 containerd[1509]: time="2025-08-13T09:02:46.934233795Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:02:46.938284 containerd[1509]: time="2025-08-13T09:02:46.938225807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:02:46.940169 containerd[1509]: time="2025-08-13T09:02:46.939926286Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.247760847s"
Aug 13 09:02:46.940169 containerd[1509]: time="2025-08-13T09:02:46.939974920Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Aug 13 09:02:46.941859 containerd[1509]: time="2025-08-13T09:02:46.941050582Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 13 09:02:48.207564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3297628324.mount: Deactivated successfully.
Aug 13 09:02:48.213871 containerd[1509]: time="2025-08-13T09:02:48.213799770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:02:48.215788 containerd[1509]: time="2025-08-13T09:02:48.215721946Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Aug 13 09:02:48.216698 containerd[1509]: time="2025-08-13T09:02:48.216614319Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:02:48.219796 containerd[1509]: time="2025-08-13T09:02:48.219736270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:02:48.221198 containerd[1509]: time="2025-08-13T09:02:48.220996708Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.279889653s"
Aug 13 09:02:48.221198 containerd[1509]: time="2025-08-13T09:02:48.221040650Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Aug 13 09:02:48.222034 containerd[1509]: time="2025-08-13T09:02:48.221851732Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Aug 13 09:02:49.584231 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Aug 13 09:02:49.591499 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 09:02:49.846453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 09:02:49.856624 (kubelet)[2089]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 09:02:49.918984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount892581386.mount: Deactivated successfully.
Aug 13 09:02:49.953054 kubelet[2089]: E0813 09:02:49.952956 2089 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 09:02:49.955556 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 09:02:49.955831 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 09:02:50.376908 update_engine[1493]: I20250813 09:02:50.376699 1493 update_attempter.cc:509] Updating boot flags...
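The four "Scheduled restart job" entries (counters 1 through 4, at 09:02:18, 09:02:28, 09:02:39 and 09:02:49) are spaced roughly ten seconds apart, consistent with the RestartSec=10 that kubeadm's kubelet drop-in typically configures (an assumption; the unit file itself never appears in this log). The spacing can be checked directly from the timestamps:

```shell
# Seconds-within-minute of the four restart-counter entries above.
awk 'BEGIN {
  split("18.343 28.724 39.085 49.584", t, " ")
  for (i = 2; i in t; i++)
    printf "restart %d -> %d: %.1f s apart\n", i - 1, i, t[i] - t[i - 1]
}'
```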
Aug 13 09:02:50.509291 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2144)
Aug 13 09:02:50.575102 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2147)
Aug 13 09:02:52.937950 containerd[1509]: time="2025-08-13T09:02:52.937753777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:02:52.939765 containerd[1509]: time="2025-08-13T09:02:52.939715804Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368"
Aug 13 09:02:52.940639 containerd[1509]: time="2025-08-13T09:02:52.940568366Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:02:52.947111 containerd[1509]: time="2025-08-13T09:02:52.945355423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:02:52.949117 containerd[1509]: time="2025-08-13T09:02:52.948668718Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.726770813s"
Aug 13 09:02:52.949117 containerd[1509]: time="2025-08-13T09:02:52.948739645Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Aug 13 09:02:56.718595 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
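The etcd pull above is the largest of the series: 57,680,541 bytes in 4.726770813 s, i.e. roughly 11.6 MiB/s of effective pull throughput. The arithmetic, for reference:

```shell
# Effective pull rate for registry.k8s.io/etcd:3.5.16-0, from the size
# and duration reported in the log entry above.
awk 'BEGIN { printf "%.1f MiB/s\n", 57680541 / 4.726770813 / 1048576 }'
```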
Aug 13 09:02:56.729539 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 09:02:56.790333 systemd[1]: Reloading requested from client PID 2193 ('systemctl') (unit session-11.scope)...
Aug 13 09:02:56.790387 systemd[1]: Reloading...
Aug 13 09:02:57.045110 zram_generator::config[2232]: No configuration found.
Aug 13 09:02:57.134867 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 09:02:57.243292 systemd[1]: Reloading finished in 452 ms.
Aug 13 09:02:57.307170 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 13 09:02:57.307326 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 13 09:02:57.307773 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 09:02:57.319620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 09:02:57.601187 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 09:02:57.610211 (kubelet)[2297]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 13 09:02:57.673792 kubelet[2297]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 09:02:57.673792 kubelet[2297]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Aug 13 09:02:57.673792 kubelet[2297]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 09:02:57.674641 kubelet[2297]: I0813 09:02:57.674570 2297 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 09:02:58.279117 kubelet[2297]: I0813 09:02:58.278709 2297 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Aug 13 09:02:58.279117 kubelet[2297]: I0813 09:02:58.278775 2297 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 09:02:58.279399 kubelet[2297]: I0813 09:02:58.279172 2297 server.go:954] "Client rotation is on, will bootstrap in background"
Aug 13 09:02:58.315627 kubelet[2297]: E0813 09:02:58.315560 2297 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.18.154:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.18.154:6443: connect: connection refused" logger="UnhandledError"
Aug 13 09:02:58.316947 kubelet[2297]: I0813 09:02:58.316688 2297 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 09:02:58.337918 kubelet[2297]: E0813 09:02:58.337536 2297 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 13 09:02:58.337918 kubelet[2297]: I0813 09:02:58.337610 2297 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 13 09:02:58.347927 kubelet[2297]: I0813 09:02:58.347866 2297 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 09:02:58.352619 kubelet[2297]: I0813 09:02:58.352524 2297 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 09:02:58.352919 kubelet[2297]: I0813 09:02:58.352610 2297 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-cz57v.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 09:02:58.356769 kubelet[2297]: I0813 09:02:58.356630 2297 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 09:02:58.356769 kubelet[2297]: I0813 09:02:58.356728 2297 container_manager_linux.go:304] "Creating device plugin manager"
Aug 13 09:02:58.358478 kubelet[2297]: I0813 09:02:58.358452 2297 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 09:02:58.362420 kubelet[2297]: I0813 09:02:58.362383 2297 kubelet.go:446] "Attempting to sync node with API server"
Aug 13 09:02:58.362589 kubelet[2297]: I0813 09:02:58.362463 2297 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 09:02:58.362589 kubelet[2297]: I0813 09:02:58.362547 2297 kubelet.go:352] "Adding apiserver pod source"
Aug 13 09:02:58.362589 kubelet[2297]: I0813 09:02:58.362581 2297 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 09:02:58.370311 kubelet[2297]: W0813 09:02:58.370228 2297 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.18.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.18.154:6443: connect: connection refused
Aug 13 09:02:58.370491 kubelet[2297]: E0813 09:02:58.370352 2297 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.18.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.18.154:6443: connect: connection refused" logger="UnhandledError"
Aug 13 09:02:58.370807 kubelet[2297]: W0813 09:02:58.370768 2297 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.18.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-cz57v.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.18.154:6443: connect: connection refused
Aug 13 09:02:58.370895 kubelet[2297]: E0813 09:02:58.370822 2297 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.18.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-cz57v.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.18.154:6443: connect: connection refused" logger="UnhandledError"
Aug 13 09:02:58.374111 kubelet[2297]: I0813 09:02:58.373783 2297 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Aug 13 09:02:58.377449 kubelet[2297]: I0813 09:02:58.377416 2297 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 09:02:58.378194 kubelet[2297]: W0813 09:02:58.377726 2297 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 13 09:02:58.381101 kubelet[2297]: I0813 09:02:58.378925 2297 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 13 09:02:58.381101 kubelet[2297]: I0813 09:02:58.378980 2297 server.go:1287] "Started kubelet"
Aug 13 09:02:58.381101 kubelet[2297]: I0813 09:02:58.379849 2297 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 09:02:58.381772 kubelet[2297]: I0813 09:02:58.381747 2297 server.go:479] "Adding debug handlers to kubelet server"
Aug 13 09:02:58.382725 kubelet[2297]: I0813 09:02:58.382633 2297 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 09:02:58.383811 kubelet[2297]: I0813 09:02:58.383786 2297 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 09:02:58.386484 kubelet[2297]: E0813 09:02:58.383219 2297 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.18.154:6443/api/v1/namespaces/default/events\": dial tcp 10.230.18.154:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-cz57v.gb1.brightbox.com.185b481f242b6a11 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-cz57v.gb1.brightbox.com,UID:srv-cz57v.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-cz57v.gb1.brightbox.com,},FirstTimestamp:2025-08-13 09:02:58.378951185 +0000 UTC m=+0.761448007,LastTimestamp:2025-08-13 09:02:58.378951185 +0000 UTC m=+0.761448007,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-cz57v.gb1.brightbox.com,}"
Aug 13 09:02:58.390620 kubelet[2297]: I0813 09:02:58.389902 2297 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 09:02:58.390988 kubelet[2297]: I0813 09:02:58.390958 2297 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 09:02:58.394513 kubelet[2297]: E0813 09:02:58.394454 2297 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-cz57v.gb1.brightbox.com\" not found"
Aug 13 09:02:58.394740 kubelet[2297]: I0813 09:02:58.394719 2297 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 13 09:02:58.395208 kubelet[2297]: I0813 09:02:58.395184 2297 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 13 09:02:58.395426 kubelet[2297]: I0813 09:02:58.395406 2297 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 09:02:58.396147 kubelet[2297]: W0813 09:02:58.396064 2297 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.18.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.18.154:6443: connect: connection refused
Aug 13 09:02:58.396297 kubelet[2297]: E0813
09:02:58.396270 2297 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.18.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.18.154:6443: connect: connection refused" logger="UnhandledError" Aug 13 09:02:58.396745 kubelet[2297]: E0813 09:02:58.396709 2297 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.18.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-cz57v.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.18.154:6443: connect: connection refused" interval="200ms" Aug 13 09:02:58.401105 kubelet[2297]: I0813 09:02:58.400656 2297 factory.go:221] Registration of the systemd container factory successfully Aug 13 09:02:58.401105 kubelet[2297]: I0813 09:02:58.400783 2297 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 09:02:58.408668 kubelet[2297]: E0813 09:02:58.408554 2297 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 09:02:58.408834 kubelet[2297]: I0813 09:02:58.408743 2297 factory.go:221] Registration of the containerd container factory successfully Aug 13 09:02:58.451416 kubelet[2297]: I0813 09:02:58.451365 2297 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Aug 13 09:02:58.452209 kubelet[2297]: I0813 09:02:58.452187 2297 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 09:02:58.452417 kubelet[2297]: I0813 09:02:58.452396 2297 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 09:02:58.452562 kubelet[2297]: I0813 09:02:58.452542 2297 state_mem.go:36] "Initialized new in-memory state store" Aug 13 09:02:58.456298 kubelet[2297]: I0813 09:02:58.456261 2297 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 09:02:58.457038 kubelet[2297]: I0813 09:02:58.457005 2297 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 09:02:58.457249 kubelet[2297]: I0813 09:02:58.457216 2297 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 09:02:58.457495 kubelet[2297]: I0813 09:02:58.457372 2297 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 09:02:58.457626 kubelet[2297]: E0813 09:02:58.457478 2297 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 09:02:58.458832 kubelet[2297]: I0813 09:02:58.458479 2297 policy_none.go:49] "None policy: Start" Aug 13 09:02:58.458832 kubelet[2297]: I0813 09:02:58.458517 2297 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 09:02:58.458832 kubelet[2297]: I0813 09:02:58.458547 2297 state_mem.go:35] "Initializing new in-memory state store" Aug 13 09:02:58.460622 kubelet[2297]: W0813 09:02:58.460565 2297 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.18.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.18.154:6443: connect: connection refused Aug 13 09:02:58.461718 kubelet[2297]: E0813 09:02:58.461686 2297 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.18.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.18.154:6443: connect: connection refused" logger="UnhandledError" Aug 13 09:02:58.473308 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 09:02:58.486631 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 09:02:58.491826 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 09:02:58.495543 kubelet[2297]: E0813 09:02:58.495464 2297 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-cz57v.gb1.brightbox.com\" not found" Aug 13 09:02:58.500110 kubelet[2297]: I0813 09:02:58.499918 2297 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 09:02:58.501006 kubelet[2297]: I0813 09:02:58.500297 2297 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 09:02:58.501006 kubelet[2297]: I0813 09:02:58.500325 2297 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 09:02:58.501875 kubelet[2297]: I0813 09:02:58.501786 2297 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 09:02:58.504691 kubelet[2297]: E0813 09:02:58.504459 2297 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 09:02:58.504691 kubelet[2297]: E0813 09:02:58.504556 2297 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-cz57v.gb1.brightbox.com\" not found" Aug 13 09:02:58.575488 systemd[1]: Created slice kubepods-burstable-pod8ca397bad84e973da14398f6f06b212b.slice - libcontainer container kubepods-burstable-pod8ca397bad84e973da14398f6f06b212b.slice. Aug 13 09:02:58.589362 kubelet[2297]: E0813 09:02:58.589309 2297 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-cz57v.gb1.brightbox.com\" not found" node="srv-cz57v.gb1.brightbox.com" Aug 13 09:02:58.594091 systemd[1]: Created slice kubepods-burstable-podab670d91feb41be075fa97ccdb9ea2c2.slice - libcontainer container kubepods-burstable-podab670d91feb41be075fa97ccdb9ea2c2.slice. Aug 13 09:02:58.595868 kubelet[2297]: I0813 09:02:58.595838 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8ca397bad84e973da14398f6f06b212b-usr-share-ca-certificates\") pod \"kube-apiserver-srv-cz57v.gb1.brightbox.com\" (UID: \"8ca397bad84e973da14398f6f06b212b\") " pod="kube-system/kube-apiserver-srv-cz57v.gb1.brightbox.com" Aug 13 09:02:58.596015 kubelet[2297]: I0813 09:02:58.595986 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab670d91feb41be075fa97ccdb9ea2c2-ca-certs\") pod \"kube-controller-manager-srv-cz57v.gb1.brightbox.com\" (UID: \"ab670d91feb41be075fa97ccdb9ea2c2\") " pod="kube-system/kube-controller-manager-srv-cz57v.gb1.brightbox.com" Aug 13 09:02:58.596734 kubelet[2297]: I0813 09:02:58.596184 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/8ca397bad84e973da14398f6f06b212b-ca-certs\") pod \"kube-apiserver-srv-cz57v.gb1.brightbox.com\" (UID: \"8ca397bad84e973da14398f6f06b212b\") " pod="kube-system/kube-apiserver-srv-cz57v.gb1.brightbox.com" Aug 13 09:02:58.596734 kubelet[2297]: I0813 09:02:58.596224 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8ca397bad84e973da14398f6f06b212b-k8s-certs\") pod \"kube-apiserver-srv-cz57v.gb1.brightbox.com\" (UID: \"8ca397bad84e973da14398f6f06b212b\") " pod="kube-system/kube-apiserver-srv-cz57v.gb1.brightbox.com" Aug 13 09:02:58.596734 kubelet[2297]: I0813 09:02:58.596251 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ab670d91feb41be075fa97ccdb9ea2c2-flexvolume-dir\") pod \"kube-controller-manager-srv-cz57v.gb1.brightbox.com\" (UID: \"ab670d91feb41be075fa97ccdb9ea2c2\") " pod="kube-system/kube-controller-manager-srv-cz57v.gb1.brightbox.com" Aug 13 09:02:58.596734 kubelet[2297]: I0813 09:02:58.596276 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab670d91feb41be075fa97ccdb9ea2c2-k8s-certs\") pod \"kube-controller-manager-srv-cz57v.gb1.brightbox.com\" (UID: \"ab670d91feb41be075fa97ccdb9ea2c2\") " pod="kube-system/kube-controller-manager-srv-cz57v.gb1.brightbox.com" Aug 13 09:02:58.596734 kubelet[2297]: I0813 09:02:58.596302 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab670d91feb41be075fa97ccdb9ea2c2-kubeconfig\") pod \"kube-controller-manager-srv-cz57v.gb1.brightbox.com\" (UID: \"ab670d91feb41be075fa97ccdb9ea2c2\") " pod="kube-system/kube-controller-manager-srv-cz57v.gb1.brightbox.com" Aug 13 09:02:58.596995 
kubelet[2297]: I0813 09:02:58.596328 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab670d91feb41be075fa97ccdb9ea2c2-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-cz57v.gb1.brightbox.com\" (UID: \"ab670d91feb41be075fa97ccdb9ea2c2\") " pod="kube-system/kube-controller-manager-srv-cz57v.gb1.brightbox.com" Aug 13 09:02:58.596995 kubelet[2297]: I0813 09:02:58.596379 2297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2e18e0b3d34a3d2251c29daac31bc13e-kubeconfig\") pod \"kube-scheduler-srv-cz57v.gb1.brightbox.com\" (UID: \"2e18e0b3d34a3d2251c29daac31bc13e\") " pod="kube-system/kube-scheduler-srv-cz57v.gb1.brightbox.com" Aug 13 09:02:58.597152 kubelet[2297]: E0813 09:02:58.597053 2297 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-cz57v.gb1.brightbox.com\" not found" node="srv-cz57v.gb1.brightbox.com" Aug 13 09:02:58.597912 kubelet[2297]: E0813 09:02:58.597874 2297 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.18.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-cz57v.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.18.154:6443: connect: connection refused" interval="400ms" Aug 13 09:02:58.601272 systemd[1]: Created slice kubepods-burstable-pod2e18e0b3d34a3d2251c29daac31bc13e.slice - libcontainer container kubepods-burstable-pod2e18e0b3d34a3d2251c29daac31bc13e.slice. 
Aug 13 09:02:58.604656 kubelet[2297]: I0813 09:02:58.604389 2297 kubelet_node_status.go:75] "Attempting to register node" node="srv-cz57v.gb1.brightbox.com" Aug 13 09:02:58.605284 kubelet[2297]: E0813 09:02:58.605247 2297 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-cz57v.gb1.brightbox.com\" not found" node="srv-cz57v.gb1.brightbox.com" Aug 13 09:02:58.605604 kubelet[2297]: E0813 09:02:58.605562 2297 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.18.154:6443/api/v1/nodes\": dial tcp 10.230.18.154:6443: connect: connection refused" node="srv-cz57v.gb1.brightbox.com" Aug 13 09:02:58.808576 kubelet[2297]: I0813 09:02:58.808520 2297 kubelet_node_status.go:75] "Attempting to register node" node="srv-cz57v.gb1.brightbox.com" Aug 13 09:02:58.809176 kubelet[2297]: E0813 09:02:58.808900 2297 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.18.154:6443/api/v1/nodes\": dial tcp 10.230.18.154:6443: connect: connection refused" node="srv-cz57v.gb1.brightbox.com" Aug 13 09:02:58.891207 containerd[1509]: time="2025-08-13T09:02:58.891051467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-cz57v.gb1.brightbox.com,Uid:8ca397bad84e973da14398f6f06b212b,Namespace:kube-system,Attempt:0,}" Aug 13 09:02:58.906742 containerd[1509]: time="2025-08-13T09:02:58.906682540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-cz57v.gb1.brightbox.com,Uid:2e18e0b3d34a3d2251c29daac31bc13e,Namespace:kube-system,Attempt:0,}" Aug 13 09:02:58.907708 containerd[1509]: time="2025-08-13T09:02:58.907328196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-cz57v.gb1.brightbox.com,Uid:ab670d91feb41be075fa97ccdb9ea2c2,Namespace:kube-system,Attempt:0,}" Aug 13 09:02:58.998995 kubelet[2297]: E0813 09:02:58.998880 2297 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://10.230.18.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-cz57v.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.18.154:6443: connect: connection refused" interval="800ms" Aug 13 09:02:59.213443 kubelet[2297]: I0813 09:02:59.213258 2297 kubelet_node_status.go:75] "Attempting to register node" node="srv-cz57v.gb1.brightbox.com" Aug 13 09:02:59.213858 kubelet[2297]: E0813 09:02:59.213723 2297 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.18.154:6443/api/v1/nodes\": dial tcp 10.230.18.154:6443: connect: connection refused" node="srv-cz57v.gb1.brightbox.com" Aug 13 09:02:59.318932 kubelet[2297]: W0813 09:02:59.318815 2297 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.18.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.18.154:6443: connect: connection refused Aug 13 09:02:59.319169 kubelet[2297]: E0813 09:02:59.318937 2297 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.18.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.18.154:6443: connect: connection refused" logger="UnhandledError" Aug 13 09:02:59.712987 kubelet[2297]: W0813 09:02:59.712819 2297 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.18.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.18.154:6443: connect: connection refused Aug 13 09:02:59.712987 kubelet[2297]: E0813 09:02:59.712932 2297 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.230.18.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.18.154:6443: connect: connection refused" logger="UnhandledError" Aug 13 09:02:59.762372 kubelet[2297]: W0813 09:02:59.762258 2297 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.18.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.18.154:6443: connect: connection refused Aug 13 09:02:59.762372 kubelet[2297]: E0813 09:02:59.762322 2297 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.18.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.18.154:6443: connect: connection refused" logger="UnhandledError" Aug 13 09:02:59.787372 kubelet[2297]: W0813 09:02:59.787223 2297 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.18.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-cz57v.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.18.154:6443: connect: connection refused Aug 13 09:02:59.787372 kubelet[2297]: E0813 09:02:59.787313 2297 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.18.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-cz57v.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.18.154:6443: connect: connection refused" logger="UnhandledError" Aug 13 09:02:59.800533 kubelet[2297]: E0813 09:02:59.800481 2297 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.18.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-cz57v.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.18.154:6443: connect: connection refused" interval="1.6s" Aug 13 
09:03:00.017670 kubelet[2297]: I0813 09:03:00.017532 2297 kubelet_node_status.go:75] "Attempting to register node" node="srv-cz57v.gb1.brightbox.com" Aug 13 09:03:00.018416 kubelet[2297]: E0813 09:03:00.018378 2297 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.18.154:6443/api/v1/nodes\": dial tcp 10.230.18.154:6443: connect: connection refused" node="srv-cz57v.gb1.brightbox.com" Aug 13 09:03:00.197332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount597265100.mount: Deactivated successfully. Aug 13 09:03:00.206179 containerd[1509]: time="2025-08-13T09:03:00.206113590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 09:03:00.209354 containerd[1509]: time="2025-08-13T09:03:00.209259509Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 09:03:00.210022 containerd[1509]: time="2025-08-13T09:03:00.209976372Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 09:03:00.211128 containerd[1509]: time="2025-08-13T09:03:00.211063221Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 09:03:00.212129 containerd[1509]: time="2025-08-13T09:03:00.212089390Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Aug 13 09:03:00.213903 containerd[1509]: time="2025-08-13T09:03:00.213760164Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 09:03:00.213903 containerd[1509]: 
time="2025-08-13T09:03:00.213849622Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 09:03:00.217537 containerd[1509]: time="2025-08-13T09:03:00.217487621Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.32617995s" Aug 13 09:03:00.220589 containerd[1509]: time="2025-08-13T09:03:00.219759274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 09:03:00.226524 containerd[1509]: time="2025-08-13T09:03:00.226250842Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.318828773s" Aug 13 09:03:00.239111 containerd[1509]: time="2025-08-13T09:03:00.238338816Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.331540759s" Aug 13 09:03:00.442168 containerd[1509]: time="2025-08-13T09:03:00.441920033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 09:03:00.442168 containerd[1509]: time="2025-08-13T09:03:00.442032729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 09:03:00.442168 containerd[1509]: time="2025-08-13T09:03:00.442059212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 09:03:00.446353 containerd[1509]: time="2025-08-13T09:03:00.446017938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 09:03:00.448534 containerd[1509]: time="2025-08-13T09:03:00.448355399Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 09:03:00.448534 containerd[1509]: time="2025-08-13T09:03:00.448453447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 09:03:00.448534 containerd[1509]: time="2025-08-13T09:03:00.448491351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 09:03:00.449807 containerd[1509]: time="2025-08-13T09:03:00.449718783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 09:03:00.453885 containerd[1509]: time="2025-08-13T09:03:00.453443744Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 09:03:00.453885 containerd[1509]: time="2025-08-13T09:03:00.453525879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 09:03:00.453885 containerd[1509]: time="2025-08-13T09:03:00.453550528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 09:03:00.453885 containerd[1509]: time="2025-08-13T09:03:00.453670118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 09:03:00.470101 kubelet[2297]: E0813 09:03:00.468534 2297 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.18.154:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.18.154:6443: connect: connection refused" logger="UnhandledError" Aug 13 09:03:00.498433 systemd[1]: Started cri-containerd-dc9f5fb594ef3b887ba62aab04d8e4e610fe89e77be34c937f731b78345b6b85.scope - libcontainer container dc9f5fb594ef3b887ba62aab04d8e4e610fe89e77be34c937f731b78345b6b85. Aug 13 09:03:00.525182 systemd[1]: Started cri-containerd-0267a27388c8c77cf8b05b0133d43b607157f12fe18e537f0a0a326316cc5eb1.scope - libcontainer container 0267a27388c8c77cf8b05b0133d43b607157f12fe18e537f0a0a326316cc5eb1. Aug 13 09:03:00.528744 systemd[1]: Started cri-containerd-965cd8911385bb3ca3ebf8cee083e4c125262480284f66edb22a294a65c84cc3.scope - libcontainer container 965cd8911385bb3ca3ebf8cee083e4c125262480284f66edb22a294a65c84cc3. 
Aug 13 09:03:00.624411 containerd[1509]: time="2025-08-13T09:03:00.624317666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-cz57v.gb1.brightbox.com,Uid:2e18e0b3d34a3d2251c29daac31bc13e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0267a27388c8c77cf8b05b0133d43b607157f12fe18e537f0a0a326316cc5eb1\"" Aug 13 09:03:00.644522 containerd[1509]: time="2025-08-13T09:03:00.644458701Z" level=info msg="CreateContainer within sandbox \"0267a27388c8c77cf8b05b0133d43b607157f12fe18e537f0a0a326316cc5eb1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 09:03:00.661898 containerd[1509]: time="2025-08-13T09:03:00.661828487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-cz57v.gb1.brightbox.com,Uid:ab670d91feb41be075fa97ccdb9ea2c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"965cd8911385bb3ca3ebf8cee083e4c125262480284f66edb22a294a65c84cc3\"" Aug 13 09:03:00.663805 containerd[1509]: time="2025-08-13T09:03:00.663763968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-cz57v.gb1.brightbox.com,Uid:8ca397bad84e973da14398f6f06b212b,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc9f5fb594ef3b887ba62aab04d8e4e610fe89e77be34c937f731b78345b6b85\"" Aug 13 09:03:00.669019 containerd[1509]: time="2025-08-13T09:03:00.668907345Z" level=info msg="CreateContainer within sandbox \"965cd8911385bb3ca3ebf8cee083e4c125262480284f66edb22a294a65c84cc3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 09:03:00.671056 containerd[1509]: time="2025-08-13T09:03:00.670922941Z" level=info msg="CreateContainer within sandbox \"dc9f5fb594ef3b887ba62aab04d8e4e610fe89e77be34c937f731b78345b6b85\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 09:03:00.690318 containerd[1509]: time="2025-08-13T09:03:00.690243088Z" level=info msg="CreateContainer within sandbox 
\"dc9f5fb594ef3b887ba62aab04d8e4e610fe89e77be34c937f731b78345b6b85\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"26a681b403d62faede0cd003701d7221fb1591a9c60cd9c7b4dd24aec675a811\"" Aug 13 09:03:00.691632 containerd[1509]: time="2025-08-13T09:03:00.691532378Z" level=info msg="StartContainer for \"26a681b403d62faede0cd003701d7221fb1591a9c60cd9c7b4dd24aec675a811\"" Aug 13 09:03:00.698214 containerd[1509]: time="2025-08-13T09:03:00.696610981Z" level=info msg="CreateContainer within sandbox \"965cd8911385bb3ca3ebf8cee083e4c125262480284f66edb22a294a65c84cc3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"59ce80ec5c1dc4eead4d81e7b1bf668c12fa22c65d91022c5cb71f47ed573d07\"" Aug 13 09:03:00.698214 containerd[1509]: time="2025-08-13T09:03:00.697609753Z" level=info msg="StartContainer for \"59ce80ec5c1dc4eead4d81e7b1bf668c12fa22c65d91022c5cb71f47ed573d07\"" Aug 13 09:03:00.700590 containerd[1509]: time="2025-08-13T09:03:00.698708229Z" level=info msg="CreateContainer within sandbox \"0267a27388c8c77cf8b05b0133d43b607157f12fe18e537f0a0a326316cc5eb1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"98cf49f86bd0a5102149399f6fbec3d3137d0fc5214cd244bbfad4628778853f\"" Aug 13 09:03:00.700590 containerd[1509]: time="2025-08-13T09:03:00.699306293Z" level=info msg="StartContainer for \"98cf49f86bd0a5102149399f6fbec3d3137d0fc5214cd244bbfad4628778853f\"" Aug 13 09:03:00.748347 systemd[1]: Started cri-containerd-59ce80ec5c1dc4eead4d81e7b1bf668c12fa22c65d91022c5cb71f47ed573d07.scope - libcontainer container 59ce80ec5c1dc4eead4d81e7b1bf668c12fa22c65d91022c5cb71f47ed573d07. Aug 13 09:03:00.766421 systemd[1]: Started cri-containerd-26a681b403d62faede0cd003701d7221fb1591a9c60cd9c7b4dd24aec675a811.scope - libcontainer container 26a681b403d62faede0cd003701d7221fb1591a9c60cd9c7b4dd24aec675a811. 
Aug 13 09:03:00.778305 systemd[1]: Started cri-containerd-98cf49f86bd0a5102149399f6fbec3d3137d0fc5214cd244bbfad4628778853f.scope - libcontainer container 98cf49f86bd0a5102149399f6fbec3d3137d0fc5214cd244bbfad4628778853f.
Aug 13 09:03:00.869479 containerd[1509]: time="2025-08-13T09:03:00.869424993Z" level=info msg="StartContainer for \"26a681b403d62faede0cd003701d7221fb1591a9c60cd9c7b4dd24aec675a811\" returns successfully"
Aug 13 09:03:00.869669 containerd[1509]: time="2025-08-13T09:03:00.869567464Z" level=info msg="StartContainer for \"98cf49f86bd0a5102149399f6fbec3d3137d0fc5214cd244bbfad4628778853f\" returns successfully"
Aug 13 09:03:00.887777 containerd[1509]: time="2025-08-13T09:03:00.887720514Z" level=info msg="StartContainer for \"59ce80ec5c1dc4eead4d81e7b1bf668c12fa22c65d91022c5cb71f47ed573d07\" returns successfully"
Aug 13 09:03:01.483126 kubelet[2297]: E0813 09:03:01.483050 2297 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-cz57v.gb1.brightbox.com\" not found" node="srv-cz57v.gb1.brightbox.com"
Aug 13 09:03:01.485632 kubelet[2297]: E0813 09:03:01.484233 2297 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-cz57v.gb1.brightbox.com\" not found" node="srv-cz57v.gb1.brightbox.com"
Aug 13 09:03:01.488694 kubelet[2297]: E0813 09:03:01.488667 2297 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-cz57v.gb1.brightbox.com\" not found" node="srv-cz57v.gb1.brightbox.com"
Aug 13 09:03:01.622032 kubelet[2297]: I0813 09:03:01.621953 2297 kubelet_node_status.go:75] "Attempting to register node" node="srv-cz57v.gb1.brightbox.com"
Aug 13 09:03:02.493296 kubelet[2297]: E0813 09:03:02.492631 2297 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-cz57v.gb1.brightbox.com\" not found" node="srv-cz57v.gb1.brightbox.com"
Aug 13 09:03:02.493296 kubelet[2297]: E0813 09:03:02.493110 2297 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-cz57v.gb1.brightbox.com\" not found" node="srv-cz57v.gb1.brightbox.com"
Aug 13 09:03:03.916622 kubelet[2297]: I0813 09:03:03.916311 2297 kubelet_node_status.go:78] "Successfully registered node" node="srv-cz57v.gb1.brightbox.com"
Aug 13 09:03:03.916622 kubelet[2297]: E0813 09:03:03.916388 2297 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-cz57v.gb1.brightbox.com\": node \"srv-cz57v.gb1.brightbox.com\" not found"
Aug 13 09:03:03.997363 kubelet[2297]: I0813 09:03:03.997210 2297 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-cz57v.gb1.brightbox.com"
Aug 13 09:03:04.045362 kubelet[2297]: E0813 09:03:04.042909 2297 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-cz57v.gb1.brightbox.com.185b481f242b6a11 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-cz57v.gb1.brightbox.com,UID:srv-cz57v.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-cz57v.gb1.brightbox.com,},FirstTimestamp:2025-08-13 09:02:58.378951185 +0000 UTC m=+0.761448007,LastTimestamp:2025-08-13 09:02:58.378951185 +0000 UTC m=+0.761448007,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-cz57v.gb1.brightbox.com,}"
Aug 13 09:03:04.045362 kubelet[2297]: E0813 09:03:04.043403 2297 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s"
Aug 13 09:03:04.060054 kubelet[2297]: E0813 09:03:04.060009 2297 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-cz57v.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-cz57v.gb1.brightbox.com"
Aug 13 09:03:04.060317 kubelet[2297]: I0813 09:03:04.060291 2297 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-cz57v.gb1.brightbox.com"
Aug 13 09:03:04.064475 kubelet[2297]: E0813 09:03:04.064246 2297 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-cz57v.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-cz57v.gb1.brightbox.com"
Aug 13 09:03:04.064475 kubelet[2297]: I0813 09:03:04.064282 2297 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-cz57v.gb1.brightbox.com"
Aug 13 09:03:04.068885 kubelet[2297]: E0813 09:03:04.068553 2297 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-cz57v.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-cz57v.gb1.brightbox.com"
Aug 13 09:03:04.370617 kubelet[2297]: I0813 09:03:04.369736 2297 apiserver.go:52] "Watching apiserver"
Aug 13 09:03:04.396586 kubelet[2297]: I0813 09:03:04.396342 2297 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Aug 13 09:03:06.115827 systemd[1]: Reloading requested from client PID 2578 ('systemctl') (unit session-11.scope)...
Aug 13 09:03:06.115876 systemd[1]: Reloading...
Aug 13 09:03:06.245027 zram_generator::config[2617]: No configuration found.
Aug 13 09:03:06.438357 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 09:03:06.576631 systemd[1]: Reloading finished in 460 ms.
Aug 13 09:03:06.641278 kubelet[2297]: I0813 09:03:06.641213 2297 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 09:03:06.643314 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 09:03:06.658240 systemd[1]: kubelet.service: Deactivated successfully.
Aug 13 09:03:06.658922 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 09:03:06.659235 systemd[1]: kubelet.service: Consumed 1.341s CPU time, 130.2M memory peak, 0B memory swap peak.
Aug 13 09:03:06.666724 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 09:03:06.896641 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 09:03:06.908558 (kubelet)[2681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 13 09:03:07.093752 kubelet[2681]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 09:03:07.096090 kubelet[2681]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Aug 13 09:03:07.096090 kubelet[2681]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 09:03:07.096090 kubelet[2681]: I0813 09:03:07.094448 2681 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 09:03:07.105587 kubelet[2681]: I0813 09:03:07.104745 2681 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Aug 13 09:03:07.105587 kubelet[2681]: I0813 09:03:07.104781 2681 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 09:03:07.105587 kubelet[2681]: I0813 09:03:07.105186 2681 server.go:954] "Client rotation is on, will bootstrap in background"
Aug 13 09:03:07.109264 kubelet[2681]: I0813 09:03:07.109224 2681 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Aug 13 09:03:07.128034 kubelet[2681]: I0813 09:03:07.126420 2681 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 09:03:07.133551 kubelet[2681]: E0813 09:03:07.133494 2681 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Aug 13 09:03:07.133551 kubelet[2681]: I0813 09:03:07.133548 2681 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Aug 13 09:03:07.140788 kubelet[2681]: I0813 09:03:07.140631 2681 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 09:03:07.141935 kubelet[2681]: I0813 09:03:07.141046 2681 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 09:03:07.141935 kubelet[2681]: I0813 09:03:07.141141 2681 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-cz57v.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 09:03:07.141935 kubelet[2681]: I0813 09:03:07.141462 2681 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 09:03:07.141935 kubelet[2681]: I0813 09:03:07.141482 2681 container_manager_linux.go:304] "Creating device plugin manager"
Aug 13 09:03:07.142491 kubelet[2681]: I0813 09:03:07.141591 2681 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 09:03:07.142491 kubelet[2681]: I0813 09:03:07.141899 2681 kubelet.go:446] "Attempting to sync node with API server"
Aug 13 09:03:07.142491 kubelet[2681]: I0813 09:03:07.141954 2681 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 09:03:07.142491 kubelet[2681]: I0813 09:03:07.142014 2681 kubelet.go:352] "Adding apiserver pod source"
Aug 13 09:03:07.142491 kubelet[2681]: I0813 09:03:07.142047 2681 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 09:03:07.149211 kubelet[2681]: I0813 09:03:07.148236 2681 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Aug 13 09:03:07.149211 kubelet[2681]: I0813 09:03:07.148897 2681 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 09:03:07.150139 kubelet[2681]: I0813 09:03:07.150029 2681 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 13 09:03:07.151223 kubelet[2681]: I0813 09:03:07.151201 2681 server.go:1287] "Started kubelet"
Aug 13 09:03:07.153529 kubelet[2681]: I0813 09:03:07.153459 2681 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 09:03:07.157089 kubelet[2681]: I0813 09:03:07.154337 2681 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 09:03:07.157499 kubelet[2681]: I0813 09:03:07.157466 2681 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 09:03:07.157866 kubelet[2681]: I0813 09:03:07.157842 2681 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 09:03:07.174574 kubelet[2681]: I0813 09:03:07.174413 2681 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 09:03:07.181254 kubelet[2681]: I0813 09:03:07.179216 2681 server.go:479] "Adding debug handlers to kubelet server"
Aug 13 09:03:07.181254 kubelet[2681]: I0813 09:03:07.180596 2681 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 13 09:03:07.181254 kubelet[2681]: E0813 09:03:07.181184 2681 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-cz57v.gb1.brightbox.com\" not found"
Aug 13 09:03:07.182335 kubelet[2681]: I0813 09:03:07.181702 2681 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 13 09:03:07.182335 kubelet[2681]: I0813 09:03:07.182116 2681 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 09:03:07.206336 kubelet[2681]: I0813 09:03:07.206294 2681 factory.go:221] Registration of the systemd container factory successfully
Aug 13 09:03:07.206741 kubelet[2681]: I0813 09:03:07.206695 2681 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 09:03:07.214295 kubelet[2681]: E0813 09:03:07.213967 2681 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 13 09:03:07.216034 kubelet[2681]: I0813 09:03:07.216009 2681 factory.go:221] Registration of the containerd container factory successfully
Aug 13 09:03:07.219387 kubelet[2681]: I0813 09:03:07.219322 2681 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 13 09:03:07.224292 kubelet[2681]: I0813 09:03:07.224247 2681 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 13 09:03:07.224564 kubelet[2681]: I0813 09:03:07.224544 2681 status_manager.go:227] "Starting to sync pod status with apiserver"
Aug 13 09:03:07.224711 kubelet[2681]: I0813 09:03:07.224688 2681 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 13 09:03:07.224837 kubelet[2681]: I0813 09:03:07.224819 2681 kubelet.go:2382] "Starting kubelet main sync loop"
Aug 13 09:03:07.226538 kubelet[2681]: E0813 09:03:07.226402 2681 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 13 09:03:07.324821 kubelet[2681]: I0813 09:03:07.324786 2681 cpu_manager.go:221] "Starting CPU manager" policy="none"
Aug 13 09:03:07.325941 kubelet[2681]: I0813 09:03:07.325131 2681 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Aug 13 09:03:07.325941 kubelet[2681]: I0813 09:03:07.325180 2681 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 09:03:07.325941 kubelet[2681]: I0813 09:03:07.325436 2681 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug 13 09:03:07.325941 kubelet[2681]: I0813 09:03:07.325464 2681 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug 13 09:03:07.325941 kubelet[2681]: I0813 09:03:07.325508 2681 policy_none.go:49] "None policy: Start"
Aug 13 09:03:07.325941 kubelet[2681]: I0813 09:03:07.325552 2681 memory_manager.go:186] "Starting memorymanager" policy="None"
Aug 13 09:03:07.325941 kubelet[2681]: I0813 09:03:07.325587 2681 state_mem.go:35] "Initializing new in-memory state store"
Aug 13 09:03:07.325941 kubelet[2681]: I0813 09:03:07.325792 2681 state_mem.go:75] "Updated machine memory state"
Aug 13 09:03:07.328217 kubelet[2681]: E0813 09:03:07.328147 2681 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug 13 09:03:07.336845 kubelet[2681]: I0813 09:03:07.334567 2681 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 13 09:03:07.336845 kubelet[2681]: I0813 09:03:07.334885 2681 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 13 09:03:07.336845 kubelet[2681]: I0813 09:03:07.334911 2681 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 13 09:03:07.336845 kubelet[2681]: I0813 09:03:07.335461 2681 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 13 09:03:07.341114 kubelet[2681]: E0813 09:03:07.340220 2681 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Aug 13 09:03:07.475723 kubelet[2681]: I0813 09:03:07.474291 2681 kubelet_node_status.go:75] "Attempting to register node" node="srv-cz57v.gb1.brightbox.com"
Aug 13 09:03:07.489851 kubelet[2681]: I0813 09:03:07.489236 2681 kubelet_node_status.go:124] "Node was previously registered" node="srv-cz57v.gb1.brightbox.com"
Aug 13 09:03:07.489851 kubelet[2681]: I0813 09:03:07.489369 2681 kubelet_node_status.go:78] "Successfully registered node" node="srv-cz57v.gb1.brightbox.com"
Aug 13 09:03:07.530058 kubelet[2681]: I0813 09:03:07.529507 2681 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-cz57v.gb1.brightbox.com"
Aug 13 09:03:07.532415 kubelet[2681]: I0813 09:03:07.532384 2681 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-cz57v.gb1.brightbox.com"
Aug 13 09:03:07.533392 kubelet[2681]: I0813 09:03:07.532740 2681 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-cz57v.gb1.brightbox.com"
Aug 13 09:03:07.538118 kubelet[2681]: W0813 09:03:07.537679 2681 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Aug 13 09:03:07.541865 kubelet[2681]: W0813 09:03:07.541152 2681 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Aug 13 09:03:07.543224 kubelet[2681]: W0813 09:03:07.543191 2681 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Aug 13 09:03:07.585371 kubelet[2681]: I0813 09:03:07.584851 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8ca397bad84e973da14398f6f06b212b-ca-certs\") pod \"kube-apiserver-srv-cz57v.gb1.brightbox.com\" (UID: \"8ca397bad84e973da14398f6f06b212b\") " pod="kube-system/kube-apiserver-srv-cz57v.gb1.brightbox.com"
Aug 13 09:03:07.585371 kubelet[2681]: I0813 09:03:07.584925 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8ca397bad84e973da14398f6f06b212b-usr-share-ca-certificates\") pod \"kube-apiserver-srv-cz57v.gb1.brightbox.com\" (UID: \"8ca397bad84e973da14398f6f06b212b\") " pod="kube-system/kube-apiserver-srv-cz57v.gb1.brightbox.com"
Aug 13 09:03:07.585371 kubelet[2681]: I0813 09:03:07.584970 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ab670d91feb41be075fa97ccdb9ea2c2-flexvolume-dir\") pod \"kube-controller-manager-srv-cz57v.gb1.brightbox.com\" (UID: \"ab670d91feb41be075fa97ccdb9ea2c2\") " pod="kube-system/kube-controller-manager-srv-cz57v.gb1.brightbox.com"
Aug 13 09:03:07.585371 kubelet[2681]: I0813 09:03:07.585000 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ab670d91feb41be075fa97ccdb9ea2c2-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-cz57v.gb1.brightbox.com\" (UID: \"ab670d91feb41be075fa97ccdb9ea2c2\") " pod="kube-system/kube-controller-manager-srv-cz57v.gb1.brightbox.com"
Aug 13 09:03:07.585371 kubelet[2681]: I0813 09:03:07.585029 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2e18e0b3d34a3d2251c29daac31bc13e-kubeconfig\") pod \"kube-scheduler-srv-cz57v.gb1.brightbox.com\" (UID: \"2e18e0b3d34a3d2251c29daac31bc13e\") " pod="kube-system/kube-scheduler-srv-cz57v.gb1.brightbox.com"
Aug 13 09:03:07.585858 kubelet[2681]: I0813 09:03:07.585056 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8ca397bad84e973da14398f6f06b212b-k8s-certs\") pod \"kube-apiserver-srv-cz57v.gb1.brightbox.com\" (UID: \"8ca397bad84e973da14398f6f06b212b\") " pod="kube-system/kube-apiserver-srv-cz57v.gb1.brightbox.com"
Aug 13 09:03:07.585858 kubelet[2681]: I0813 09:03:07.585105 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ab670d91feb41be075fa97ccdb9ea2c2-ca-certs\") pod \"kube-controller-manager-srv-cz57v.gb1.brightbox.com\" (UID: \"ab670d91feb41be075fa97ccdb9ea2c2\") " pod="kube-system/kube-controller-manager-srv-cz57v.gb1.brightbox.com"
Aug 13 09:03:07.585858 kubelet[2681]: I0813 09:03:07.585137 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ab670d91feb41be075fa97ccdb9ea2c2-k8s-certs\") pod \"kube-controller-manager-srv-cz57v.gb1.brightbox.com\" (UID: \"ab670d91feb41be075fa97ccdb9ea2c2\") " pod="kube-system/kube-controller-manager-srv-cz57v.gb1.brightbox.com"
Aug 13 09:03:07.585858 kubelet[2681]: I0813 09:03:07.585170 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab670d91feb41be075fa97ccdb9ea2c2-kubeconfig\") pod \"kube-controller-manager-srv-cz57v.gb1.brightbox.com\" (UID: \"ab670d91feb41be075fa97ccdb9ea2c2\") " pod="kube-system/kube-controller-manager-srv-cz57v.gb1.brightbox.com"
Aug 13 09:03:08.149134 kubelet[2681]: I0813 09:03:08.148780 2681 apiserver.go:52] "Watching apiserver"
Aug 13 09:03:08.182800 kubelet[2681]: I0813 09:03:08.182734 2681 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Aug 13 09:03:08.323552 kubelet[2681]: I0813 09:03:08.323418 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-cz57v.gb1.brightbox.com" podStartSLOduration=1.3233311620000001 podStartE2EDuration="1.323331162s" podCreationTimestamp="2025-08-13 09:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 09:03:08.316927216 +0000 UTC m=+1.306942624" watchObservedRunningTime="2025-08-13 09:03:08.323331162 +0000 UTC m=+1.313346558"
Aug 13 09:03:08.351397 kubelet[2681]: I0813 09:03:08.351201 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-cz57v.gb1.brightbox.com" podStartSLOduration=1.3511624979999999 podStartE2EDuration="1.351162498s" podCreationTimestamp="2025-08-13 09:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 09:03:08.349376767 +0000 UTC m=+1.339392178" watchObservedRunningTime="2025-08-13 09:03:08.351162498 +0000 UTC m=+1.341177896"
Aug 13 09:03:08.411195 kubelet[2681]: I0813 09:03:08.409428 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-cz57v.gb1.brightbox.com" podStartSLOduration=1.409408027 podStartE2EDuration="1.409408027s" podCreationTimestamp="2025-08-13 09:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 09:03:08.408277575 +0000 UTC m=+1.398292995" watchObservedRunningTime="2025-08-13 09:03:08.409408027 +0000 UTC m=+1.399423442"
Aug 13 09:03:12.671971 kubelet[2681]: I0813 09:03:12.671868 2681 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 13 09:03:12.674194 containerd[1509]: time="2025-08-13T09:03:12.674017860Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 13 09:03:12.676128 kubelet[2681]: I0813 09:03:12.674504 2681 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 13 09:03:13.680261 systemd[1]: Created slice kubepods-besteffort-pod8ab14722_31e7_48f9_990c_8014625bf99d.slice - libcontainer container kubepods-besteffort-pod8ab14722_31e7_48f9_990c_8014625bf99d.slice.
Aug 13 09:03:13.723134 kubelet[2681]: I0813 09:03:13.723020 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ab14722-31e7-48f9-990c-8014625bf99d-lib-modules\") pod \"kube-proxy-gdmzn\" (UID: \"8ab14722-31e7-48f9-990c-8014625bf99d\") " pod="kube-system/kube-proxy-gdmzn"
Aug 13 09:03:13.724392 kubelet[2681]: I0813 09:03:13.723871 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbxlt\" (UniqueName: \"kubernetes.io/projected/8ab14722-31e7-48f9-990c-8014625bf99d-kube-api-access-vbxlt\") pod \"kube-proxy-gdmzn\" (UID: \"8ab14722-31e7-48f9-990c-8014625bf99d\") " pod="kube-system/kube-proxy-gdmzn"
Aug 13 09:03:13.724392 kubelet[2681]: I0813 09:03:13.723960 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8ab14722-31e7-48f9-990c-8014625bf99d-kube-proxy\") pod \"kube-proxy-gdmzn\" (UID: \"8ab14722-31e7-48f9-990c-8014625bf99d\") " pod="kube-system/kube-proxy-gdmzn"
Aug 13 09:03:13.724392 kubelet[2681]: I0813 09:03:13.724025 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ab14722-31e7-48f9-990c-8014625bf99d-xtables-lock\") pod \"kube-proxy-gdmzn\" (UID: \"8ab14722-31e7-48f9-990c-8014625bf99d\") " pod="kube-system/kube-proxy-gdmzn"
Aug 13 09:03:13.808192 systemd[1]: Created slice kubepods-besteffort-pod04658c2e_2258_4b28_82fd_fd85433c83b0.slice - libcontainer container kubepods-besteffort-pod04658c2e_2258_4b28_82fd_fd85433c83b0.slice.
Aug 13 09:03:13.825277 kubelet[2681]: I0813 09:03:13.824388 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/04658c2e-2258-4b28-82fd-fd85433c83b0-var-lib-calico\") pod \"tigera-operator-747864d56d-lr2sm\" (UID: \"04658c2e-2258-4b28-82fd-fd85433c83b0\") " pod="tigera-operator/tigera-operator-747864d56d-lr2sm"
Aug 13 09:03:13.825277 kubelet[2681]: I0813 09:03:13.824441 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c57b\" (UniqueName: \"kubernetes.io/projected/04658c2e-2258-4b28-82fd-fd85433c83b0-kube-api-access-4c57b\") pod \"tigera-operator-747864d56d-lr2sm\" (UID: \"04658c2e-2258-4b28-82fd-fd85433c83b0\") " pod="tigera-operator/tigera-operator-747864d56d-lr2sm"
Aug 13 09:03:13.995335 containerd[1509]: time="2025-08-13T09:03:13.995046431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gdmzn,Uid:8ab14722-31e7-48f9-990c-8014625bf99d,Namespace:kube-system,Attempt:0,}"
Aug 13 09:03:14.035809 containerd[1509]: time="2025-08-13T09:03:14.035385276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 09:03:14.035809 containerd[1509]: time="2025-08-13T09:03:14.035532707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 09:03:14.035809 containerd[1509]: time="2025-08-13T09:03:14.035584451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 09:03:14.036438 containerd[1509]: time="2025-08-13T09:03:14.036256494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 09:03:14.087378 systemd[1]: Started cri-containerd-ca23b509c6f5a5493f1f996b9ef0d148a33ae269d949aed1352beee73a31de49.scope - libcontainer container ca23b509c6f5a5493f1f996b9ef0d148a33ae269d949aed1352beee73a31de49.
Aug 13 09:03:14.119052 containerd[1509]: time="2025-08-13T09:03:14.118460577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-lr2sm,Uid:04658c2e-2258-4b28-82fd-fd85433c83b0,Namespace:tigera-operator,Attempt:0,}"
Aug 13 09:03:14.126256 containerd[1509]: time="2025-08-13T09:03:14.126204640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gdmzn,Uid:8ab14722-31e7-48f9-990c-8014625bf99d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca23b509c6f5a5493f1f996b9ef0d148a33ae269d949aed1352beee73a31de49\""
Aug 13 09:03:14.132667 containerd[1509]: time="2025-08-13T09:03:14.132477133Z" level=info msg="CreateContainer within sandbox \"ca23b509c6f5a5493f1f996b9ef0d148a33ae269d949aed1352beee73a31de49\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 13 09:03:14.166260 containerd[1509]: time="2025-08-13T09:03:14.163835307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 09:03:14.166260 containerd[1509]: time="2025-08-13T09:03:14.163904315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 09:03:14.166260 containerd[1509]: time="2025-08-13T09:03:14.163932947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 09:03:14.166260 containerd[1509]: time="2025-08-13T09:03:14.164046421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 09:03:14.175175 containerd[1509]: time="2025-08-13T09:03:14.174823198Z" level=info msg="CreateContainer within sandbox \"ca23b509c6f5a5493f1f996b9ef0d148a33ae269d949aed1352beee73a31de49\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"27ee6d1c61c018d8c59b2390c9503c30ee487c7b6879ede1144f64c26369502f\""
Aug 13 09:03:14.177105 containerd[1509]: time="2025-08-13T09:03:14.175952133Z" level=info msg="StartContainer for \"27ee6d1c61c018d8c59b2390c9503c30ee487c7b6879ede1144f64c26369502f\""
Aug 13 09:03:14.205535 systemd[1]: Started cri-containerd-497ff909bc1ea45bcc54818d9d3d59f11ed6faaa90430fa88ed7d9f021f8bae7.scope - libcontainer container 497ff909bc1ea45bcc54818d9d3d59f11ed6faaa90430fa88ed7d9f021f8bae7.
Aug 13 09:03:14.244364 systemd[1]: Started cri-containerd-27ee6d1c61c018d8c59b2390c9503c30ee487c7b6879ede1144f64c26369502f.scope - libcontainer container 27ee6d1c61c018d8c59b2390c9503c30ee487c7b6879ede1144f64c26369502f.
Aug 13 09:03:14.307763 containerd[1509]: time="2025-08-13T09:03:14.307265269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-lr2sm,Uid:04658c2e-2258-4b28-82fd-fd85433c83b0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"497ff909bc1ea45bcc54818d9d3d59f11ed6faaa90430fa88ed7d9f021f8bae7\""
Aug 13 09:03:14.311854 containerd[1509]: time="2025-08-13T09:03:14.311448255Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Aug 13 09:03:14.321681 containerd[1509]: time="2025-08-13T09:03:14.321633341Z" level=info msg="StartContainer for \"27ee6d1c61c018d8c59b2390c9503c30ee487c7b6879ede1144f64c26369502f\" returns successfully"
Aug 13 09:03:14.858551 systemd[1]: run-containerd-runc-k8s.io-ca23b509c6f5a5493f1f996b9ef0d148a33ae269d949aed1352beee73a31de49-runc.9RFV7X.mount: Deactivated successfully.
Aug 13 09:03:15.318859 kubelet[2681]: I0813 09:03:15.318302 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gdmzn" podStartSLOduration=2.31824403 podStartE2EDuration="2.31824403s" podCreationTimestamp="2025-08-13 09:03:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 09:03:15.31726862 +0000 UTC m=+8.307284027" watchObservedRunningTime="2025-08-13 09:03:15.31824403 +0000 UTC m=+8.308259435"
Aug 13 09:03:16.432286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1108409513.mount: Deactivated successfully.
Aug 13 09:03:17.944703 containerd[1509]: time="2025-08-13T09:03:17.944535247Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:03:17.946557 containerd[1509]: time="2025-08-13T09:03:17.946208890Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Aug 13 09:03:17.947902 containerd[1509]: time="2025-08-13T09:03:17.947559144Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:03:17.952325 containerd[1509]: time="2025-08-13T09:03:17.952262330Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:03:17.954280 containerd[1509]: time="2025-08-13T09:03:17.954242250Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 3.642740161s"
Aug 13 09:03:17.954448 containerd[1509]: time="2025-08-13T09:03:17.954418467Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Aug 13 09:03:17.959505 containerd[1509]: time="2025-08-13T09:03:17.959242326Z" level=info msg="CreateContainer within sandbox \"497ff909bc1ea45bcc54818d9d3d59f11ed6faaa90430fa88ed7d9f021f8bae7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Aug 13 09:03:17.979461 containerd[1509]: time="2025-08-13T09:03:17.979401356Z" level=info msg="CreateContainer within sandbox \"497ff909bc1ea45bcc54818d9d3d59f11ed6faaa90430fa88ed7d9f021f8bae7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"67fdbf4193e2b475603619157688f0788c4af0c42503cd4a4d9b30635e6ee963\""
Aug 13 09:03:17.980316 containerd[1509]: time="2025-08-13T09:03:17.980202684Z" level=info msg="StartContainer for \"67fdbf4193e2b475603619157688f0788c4af0c42503cd4a4d9b30635e6ee963\""
Aug 13 09:03:18.031375 systemd[1]: Started cri-containerd-67fdbf4193e2b475603619157688f0788c4af0c42503cd4a4d9b30635e6ee963.scope - libcontainer container 67fdbf4193e2b475603619157688f0788c4af0c42503cd4a4d9b30635e6ee963.
Aug 13 09:03:18.080340 containerd[1509]: time="2025-08-13T09:03:18.079760290Z" level=info msg="StartContainer for \"67fdbf4193e2b475603619157688f0788c4af0c42503cd4a4d9b30635e6ee963\" returns successfully"
Aug 13 09:03:18.350977 kubelet[2681]: I0813 09:03:18.350105 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-lr2sm" podStartSLOduration=1.703468876 podStartE2EDuration="5.350046365s" podCreationTimestamp="2025-08-13 09:03:13 +0000 UTC" firstStartedPulling="2025-08-13 09:03:14.310682035 +0000 UTC m=+7.300697437" lastFinishedPulling="2025-08-13 09:03:17.957259533 +0000 UTC m=+10.947274926" observedRunningTime="2025-08-13 09:03:18.34924735 +0000 UTC m=+11.339262759" watchObservedRunningTime="2025-08-13 09:03:18.350046365 +0000 UTC m=+11.340061770"
Aug 13 09:03:25.971565 sudo[1776]: pam_unix(sudo:session): session closed for user root
Aug 13 09:03:26.121744 sshd[1773]: pam_unix(sshd:session): session closed for user core
Aug 13 09:03:26.130924 systemd[1]: sshd@8-10.230.18.154:22-139.178.68.195:38202.service: Deactivated successfully.
Aug 13 09:03:26.139433 systemd[1]: session-11.scope: Deactivated successfully.
Aug 13 09:03:26.140015 systemd[1]: session-11.scope: Consumed 6.600s CPU time, 142.4M memory peak, 0B memory swap peak.
Aug 13 09:03:26.142302 systemd-logind[1492]: Session 11 logged out. Waiting for processes to exit.
Aug 13 09:03:26.145433 systemd-logind[1492]: Removed session 11.
Aug 13 09:03:30.847524 kubelet[2681]: I0813 09:03:30.847242 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a61a7cf5-4d9d-4b12-a44f-5acff18d6cd2-typha-certs\") pod \"calico-typha-97bd97c76-497pb\" (UID: \"a61a7cf5-4d9d-4b12-a44f-5acff18d6cd2\") " pod="calico-system/calico-typha-97bd97c76-497pb"
Aug 13 09:03:30.847524 kubelet[2681]: I0813 09:03:30.847348 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a61a7cf5-4d9d-4b12-a44f-5acff18d6cd2-tigera-ca-bundle\") pod \"calico-typha-97bd97c76-497pb\" (UID: \"a61a7cf5-4d9d-4b12-a44f-5acff18d6cd2\") " pod="calico-system/calico-typha-97bd97c76-497pb"
Aug 13 09:03:30.847524 kubelet[2681]: I0813 09:03:30.847403 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrddx\" (UniqueName: \"kubernetes.io/projected/a61a7cf5-4d9d-4b12-a44f-5acff18d6cd2-kube-api-access-xrddx\") pod \"calico-typha-97bd97c76-497pb\" (UID: \"a61a7cf5-4d9d-4b12-a44f-5acff18d6cd2\") " pod="calico-system/calico-typha-97bd97c76-497pb"
Aug 13 09:03:30.861692 systemd[1]: Created slice kubepods-besteffort-poda61a7cf5_4d9d_4b12_a44f_5acff18d6cd2.slice - libcontainer container kubepods-besteffort-poda61a7cf5_4d9d_4b12_a44f_5acff18d6cd2.slice.
Aug 13 09:03:31.092396 systemd[1]: Created slice kubepods-besteffort-pod77e25716_a0e7_4ee5_b89a_daad4d940a10.slice - libcontainer container kubepods-besteffort-pod77e25716_a0e7_4ee5_b89a_daad4d940a10.slice.
Aug 13 09:03:31.152424 kubelet[2681]: I0813 09:03:31.152210 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/77e25716-a0e7-4ee5-b89a-daad4d940a10-var-run-calico\") pod \"calico-node-4z2mc\" (UID: \"77e25716-a0e7-4ee5-b89a-daad4d940a10\") " pod="calico-system/calico-node-4z2mc"
Aug 13 09:03:31.152424 kubelet[2681]: I0813 09:03:31.152281 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/77e25716-a0e7-4ee5-b89a-daad4d940a10-cni-net-dir\") pod \"calico-node-4z2mc\" (UID: \"77e25716-a0e7-4ee5-b89a-daad4d940a10\") " pod="calico-system/calico-node-4z2mc"
Aug 13 09:03:31.152424 kubelet[2681]: I0813 09:03:31.152317 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/77e25716-a0e7-4ee5-b89a-daad4d940a10-policysync\") pod \"calico-node-4z2mc\" (UID: \"77e25716-a0e7-4ee5-b89a-daad4d940a10\") " pod="calico-system/calico-node-4z2mc"
Aug 13 09:03:31.152424 kubelet[2681]: I0813 09:03:31.152347 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/77e25716-a0e7-4ee5-b89a-daad4d940a10-tigera-ca-bundle\") pod \"calico-node-4z2mc\" (UID: \"77e25716-a0e7-4ee5-b89a-daad4d940a10\") " pod="calico-system/calico-node-4z2mc"
Aug 13 09:03:31.152424 kubelet[2681]: I0813 09:03:31.152381 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/77e25716-a0e7-4ee5-b89a-daad4d940a10-flexvol-driver-host\") pod \"calico-node-4z2mc\" (UID: \"77e25716-a0e7-4ee5-b89a-daad4d940a10\") " pod="calico-system/calico-node-4z2mc"
Aug 13 09:03:31.154179 kubelet[2681]: I0813 09:03:31.152412 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77e25716-a0e7-4ee5-b89a-daad4d940a10-xtables-lock\") pod \"calico-node-4z2mc\" (UID: \"77e25716-a0e7-4ee5-b89a-daad4d940a10\") " pod="calico-system/calico-node-4z2mc"
Aug 13 09:03:31.154179 kubelet[2681]: I0813 09:03:31.152443 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/77e25716-a0e7-4ee5-b89a-daad4d940a10-cni-log-dir\") pod \"calico-node-4z2mc\" (UID: \"77e25716-a0e7-4ee5-b89a-daad4d940a10\") " pod="calico-system/calico-node-4z2mc"
Aug 13 09:03:31.154179 kubelet[2681]: I0813 09:03:31.152470 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/77e25716-a0e7-4ee5-b89a-daad4d940a10-cni-bin-dir\") pod \"calico-node-4z2mc\" (UID: \"77e25716-a0e7-4ee5-b89a-daad4d940a10\") " pod="calico-system/calico-node-4z2mc"
Aug 13 09:03:31.154179 kubelet[2681]: I0813 09:03:31.152510 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77e25716-a0e7-4ee5-b89a-daad4d940a10-lib-modules\") pod \"calico-node-4z2mc\" (UID: \"77e25716-a0e7-4ee5-b89a-daad4d940a10\") " pod="calico-system/calico-node-4z2mc"
Aug 13 09:03:31.154179 kubelet[2681]: I0813 09:03:31.152540 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/77e25716-a0e7-4ee5-b89a-daad4d940a10-node-certs\") pod \"calico-node-4z2mc\" (UID: \"77e25716-a0e7-4ee5-b89a-daad4d940a10\") " pod="calico-system/calico-node-4z2mc"
Aug 13 09:03:31.154427 kubelet[2681]: I0813 09:03:31.152565 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/77e25716-a0e7-4ee5-b89a-daad4d940a10-var-lib-calico\") pod \"calico-node-4z2mc\" (UID: \"77e25716-a0e7-4ee5-b89a-daad4d940a10\") " pod="calico-system/calico-node-4z2mc"
Aug 13 09:03:31.154427 kubelet[2681]: I0813 09:03:31.152764 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fbs6\" (UniqueName: \"kubernetes.io/projected/77e25716-a0e7-4ee5-b89a-daad4d940a10-kube-api-access-2fbs6\") pod \"calico-node-4z2mc\" (UID: \"77e25716-a0e7-4ee5-b89a-daad4d940a10\") " pod="calico-system/calico-node-4z2mc"
Aug 13 09:03:31.178570 containerd[1509]: time="2025-08-13T09:03:31.178462806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-97bd97c76-497pb,Uid:a61a7cf5-4d9d-4b12-a44f-5acff18d6cd2,Namespace:calico-system,Attempt:0,}"
Aug 13 09:03:31.283153 kubelet[2681]: E0813 09:03:31.280922 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.283153 kubelet[2681]: W0813 09:03:31.280970 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.285852 kubelet[2681]: E0813 09:03:31.285619 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.287006 kubelet[2681]: E0813 09:03:31.286974 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.287006 kubelet[2681]: W0813 09:03:31.287000 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.287955 kubelet[2681]: E0813 09:03:31.287022 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.293105 kubelet[2681]: E0813 09:03:31.288465 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.293105 kubelet[2681]: W0813 09:03:31.288487 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.293105 kubelet[2681]: E0813 09:03:31.288701 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.329762 containerd[1509]: time="2025-08-13T09:03:31.329260676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 09:03:31.330115 containerd[1509]: time="2025-08-13T09:03:31.329808518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 09:03:31.330115 containerd[1509]: time="2025-08-13T09:03:31.329835882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 09:03:31.331213 containerd[1509]: time="2025-08-13T09:03:31.330290557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 09:03:31.405148 containerd[1509]: time="2025-08-13T09:03:31.403669845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4z2mc,Uid:77e25716-a0e7-4ee5-b89a-daad4d940a10,Namespace:calico-system,Attempt:0,}"
Aug 13 09:03:31.412961 kubelet[2681]: E0813 09:03:31.412871 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g96xv" podUID="acc500b1-7473-42bd-b48d-00d555107b78"
Aug 13 09:03:31.452368 kubelet[2681]: E0813 09:03:31.450733 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.452368 kubelet[2681]: W0813 09:03:31.450810 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.452368 kubelet[2681]: E0813 09:03:31.450846 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.452368 kubelet[2681]: E0813 09:03:31.451271 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.452368 kubelet[2681]: W0813 09:03:31.451286 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.452368 kubelet[2681]: E0813 09:03:31.451343 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.452368 kubelet[2681]: E0813 09:03:31.452025 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.452368 kubelet[2681]: W0813 09:03:31.452041 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.452368 kubelet[2681]: E0813 09:03:31.452058 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.460270 kubelet[2681]: E0813 09:03:31.457921 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.460270 kubelet[2681]: W0813 09:03:31.460141 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.460361 systemd[1]: Started cri-containerd-b683b182cc35fd8f3a4e5217acd8d1391ece885ec0193282eb08516cbbc6870e.scope - libcontainer container b683b182cc35fd8f3a4e5217acd8d1391ece885ec0193282eb08516cbbc6870e.
Aug 13 09:03:31.462258 kubelet[2681]: E0813 09:03:31.461389 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.464840 kubelet[2681]: E0813 09:03:31.464649 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.465267 kubelet[2681]: W0813 09:03:31.464664 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.465267 kubelet[2681]: E0813 09:03:31.465121 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.468101 kubelet[2681]: E0813 09:03:31.467656 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.468101 kubelet[2681]: W0813 09:03:31.467698 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.468101 kubelet[2681]: E0813 09:03:31.467721 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.469486 kubelet[2681]: E0813 09:03:31.469454 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.469627 kubelet[2681]: W0813 09:03:31.469603 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.469843 kubelet[2681]: E0813 09:03:31.469772 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.472109 kubelet[2681]: E0813 09:03:31.471747 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.472109 kubelet[2681]: W0813 09:03:31.471982 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.472109 kubelet[2681]: E0813 09:03:31.472002 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.477061 kubelet[2681]: E0813 09:03:31.476278 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.477061 kubelet[2681]: W0813 09:03:31.476935 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.477061 kubelet[2681]: E0813 09:03:31.476962 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.491035 kubelet[2681]: E0813 09:03:31.488592 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.491035 kubelet[2681]: W0813 09:03:31.488633 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.491035 kubelet[2681]: E0813 09:03:31.488688 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.491035 kubelet[2681]: E0813 09:03:31.489558 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.491035 kubelet[2681]: W0813 09:03:31.489576 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.491035 kubelet[2681]: E0813 09:03:31.489592 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.491035 kubelet[2681]: E0813 09:03:31.490437 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.491035 kubelet[2681]: W0813 09:03:31.490454 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.491035 kubelet[2681]: E0813 09:03:31.490470 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.491637 kubelet[2681]: E0813 09:03:31.491607 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.491637 kubelet[2681]: W0813 09:03:31.491622 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.491753 kubelet[2681]: E0813 09:03:31.491639 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.492885 kubelet[2681]: E0813 09:03:31.492439 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.492885 kubelet[2681]: W0813 09:03:31.492463 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.492885 kubelet[2681]: E0813 09:03:31.492480 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.494874 kubelet[2681]: E0813 09:03:31.494272 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.494874 kubelet[2681]: W0813 09:03:31.494294 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.494874 kubelet[2681]: E0813 09:03:31.494311 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.494874 kubelet[2681]: E0813 09:03:31.494566 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.494874 kubelet[2681]: W0813 09:03:31.494580 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.494874 kubelet[2681]: E0813 09:03:31.494595 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.496582 kubelet[2681]: E0813 09:03:31.495445 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.496582 kubelet[2681]: W0813 09:03:31.495463 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.496582 kubelet[2681]: E0813 09:03:31.495480 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.496582 kubelet[2681]: E0813 09:03:31.496421 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.496582 kubelet[2681]: W0813 09:03:31.496437 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.496582 kubelet[2681]: E0813 09:03:31.496455 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.498139 kubelet[2681]: E0813 09:03:31.497286 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.498139 kubelet[2681]: W0813 09:03:31.497308 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.498139 kubelet[2681]: E0813 09:03:31.497325 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.498355 kubelet[2681]: E0813 09:03:31.498266 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.498355 kubelet[2681]: W0813 09:03:31.498281 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.498355 kubelet[2681]: E0813 09:03:31.498297 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.501199 kubelet[2681]: E0813 09:03:31.500235 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.501199 kubelet[2681]: W0813 09:03:31.500258 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.501199 kubelet[2681]: E0813 09:03:31.500276 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.501199 kubelet[2681]: I0813 09:03:31.500305 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/acc500b1-7473-42bd-b48d-00d555107b78-registration-dir\") pod \"csi-node-driver-g96xv\" (UID: \"acc500b1-7473-42bd-b48d-00d555107b78\") " pod="calico-system/csi-node-driver-g96xv"
Aug 13 09:03:31.505266 kubelet[2681]: E0813 09:03:31.505229 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.505266 kubelet[2681]: W0813 09:03:31.505261 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.505476 kubelet[2681]: E0813 09:03:31.505288 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.505476 kubelet[2681]: I0813 09:03:31.505324 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/acc500b1-7473-42bd-b48d-00d555107b78-socket-dir\") pod \"csi-node-driver-g96xv\" (UID: \"acc500b1-7473-42bd-b48d-00d555107b78\") " pod="calico-system/csi-node-driver-g96xv"
Aug 13 09:03:31.506252 kubelet[2681]: E0813 09:03:31.505831 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.506252 kubelet[2681]: W0813 09:03:31.505853 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.506252 kubelet[2681]: E0813 09:03:31.505872 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.506252 kubelet[2681]: I0813 09:03:31.505900 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/acc500b1-7473-42bd-b48d-00d555107b78-kubelet-dir\") pod \"csi-node-driver-g96xv\" (UID: \"acc500b1-7473-42bd-b48d-00d555107b78\") " pod="calico-system/csi-node-driver-g96xv"
Aug 13 09:03:31.506252 kubelet[2681]: E0813 09:03:31.506202 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.506252 kubelet[2681]: W0813 09:03:31.506220 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.506252 kubelet[2681]: E0813 09:03:31.506252 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.506681 kubelet[2681]: I0813 09:03:31.506279 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/acc500b1-7473-42bd-b48d-00d555107b78-varrun\") pod \"csi-node-driver-g96xv\" (UID: \"acc500b1-7473-42bd-b48d-00d555107b78\") " pod="calico-system/csi-node-driver-g96xv"
Aug 13 09:03:31.508319 kubelet[2681]: E0813 09:03:31.507706 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.508319 kubelet[2681]: W0813 09:03:31.507729 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.508319 kubelet[2681]: E0813 09:03:31.508220 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.508319 kubelet[2681]: I0813 09:03:31.508259 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmld5\" (UniqueName: \"kubernetes.io/projected/acc500b1-7473-42bd-b48d-00d555107b78-kube-api-access-cmld5\") pod \"csi-node-driver-g96xv\" (UID: \"acc500b1-7473-42bd-b48d-00d555107b78\") " pod="calico-system/csi-node-driver-g96xv"
Aug 13 09:03:31.509295 kubelet[2681]: E0813 09:03:31.508776 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.509295 kubelet[2681]: W0813 09:03:31.508797 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.509295 kubelet[2681]: E0813 09:03:31.508837 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.510265 kubelet[2681]: E0813 09:03:31.510242 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.510265 kubelet[2681]: W0813 09:03:31.510262 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.510478 kubelet[2681]: E0813 09:03:31.510299 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.510546 kubelet[2681]: E0813 09:03:31.510532 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.510612 kubelet[2681]: W0813 09:03:31.510545 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.511427 kubelet[2681]: E0813 09:03:31.511019 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.511427 kubelet[2681]: E0813 09:03:31.511253 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.511427 kubelet[2681]: W0813 09:03:31.511267 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.511427 kubelet[2681]: E0813 09:03:31.511358 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.513556 kubelet[2681]: E0813 09:03:31.513253 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.513556 kubelet[2681]: W0813 09:03:31.513273 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.513556 kubelet[2681]: E0813 09:03:31.513501 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:31.513556 kubelet[2681]: E0813 09:03:31.513540 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:31.513556 kubelet[2681]: W0813 09:03:31.513554 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:31.514234 kubelet[2681]: E0813 09:03:31.513570 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Aug 13 09:03:31.515123 kubelet[2681]: E0813 09:03:31.514816 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 09:03:31.515123 kubelet[2681]: W0813 09:03:31.514838 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 09:03:31.515123 kubelet[2681]: E0813 09:03:31.514855 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 09:03:31.515636 kubelet[2681]: E0813 09:03:31.515183 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 09:03:31.515636 kubelet[2681]: W0813 09:03:31.515197 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 09:03:31.515636 kubelet[2681]: E0813 09:03:31.515213 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 09:03:31.516501 kubelet[2681]: E0813 09:03:31.516218 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 09:03:31.516501 kubelet[2681]: W0813 09:03:31.516239 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 09:03:31.516501 kubelet[2681]: E0813 09:03:31.516256 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 09:03:31.518413 kubelet[2681]: E0813 09:03:31.518236 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 09:03:31.518413 kubelet[2681]: W0813 09:03:31.518258 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 09:03:31.518413 kubelet[2681]: E0813 09:03:31.518276 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 09:03:31.530052 containerd[1509]: time="2025-08-13T09:03:31.527440848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 09:03:31.530052 containerd[1509]: time="2025-08-13T09:03:31.529350275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 09:03:31.530052 containerd[1509]: time="2025-08-13T09:03:31.529376232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 09:03:31.530052 containerd[1509]: time="2025-08-13T09:03:31.529530148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 09:03:31.593389 systemd[1]: Started cri-containerd-e74c0e508aa16e1164b36a3b5a4ae537006283e4fae7596d5ce4cf3058bef4cb.scope - libcontainer container e74c0e508aa16e1164b36a3b5a4ae537006283e4fae7596d5ce4cf3058bef4cb. Aug 13 09:03:31.609996 kubelet[2681]: E0813 09:03:31.609955 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 09:03:31.609996 kubelet[2681]: W0813 09:03:31.609985 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 09:03:31.610222 kubelet[2681]: E0813 09:03:31.610011 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 09:03:31.613491 kubelet[2681]: E0813 09:03:31.613463 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 09:03:31.613491 kubelet[2681]: W0813 09:03:31.613486 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 09:03:31.613635 kubelet[2681]: E0813 09:03:31.613504 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 09:03:31.614529 kubelet[2681]: E0813 09:03:31.614503 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 09:03:31.614529 kubelet[2681]: W0813 09:03:31.614527 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 09:03:31.614656 kubelet[2681]: E0813 09:03:31.614544 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 09:03:31.619847 kubelet[2681]: E0813 09:03:31.619749 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 09:03:31.621143 kubelet[2681]: W0813 09:03:31.621105 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 09:03:31.621229 kubelet[2681]: E0813 09:03:31.621148 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 09:03:31.621724 kubelet[2681]: E0813 09:03:31.621698 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 09:03:31.621724 kubelet[2681]: W0813 09:03:31.621720 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 09:03:31.621855 kubelet[2681]: E0813 09:03:31.621737 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 09:03:31.623675 kubelet[2681]: E0813 09:03:31.623650 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 09:03:31.623675 kubelet[2681]: W0813 09:03:31.623671 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 09:03:31.624004 kubelet[2681]: E0813 09:03:31.623840 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 09:03:31.624278 kubelet[2681]: E0813 09:03:31.624255 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 09:03:31.624278 kubelet[2681]: W0813 09:03:31.624277 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 09:03:31.624499 kubelet[2681]: E0813 09:03:31.624455 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 09:03:31.624987 kubelet[2681]: E0813 09:03:31.624959 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 09:03:31.625741 kubelet[2681]: W0813 09:03:31.625549 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 09:03:31.625741 kubelet[2681]: E0813 09:03:31.625584 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 09:03:31.626473 kubelet[2681]: E0813 09:03:31.626445 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 09:03:31.626473 kubelet[2681]: W0813 09:03:31.626468 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 09:03:31.626615 kubelet[2681]: E0813 09:03:31.626486 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 09:03:31.628018 kubelet[2681]: E0813 09:03:31.627984 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 09:03:31.628018 kubelet[2681]: W0813 09:03:31.628010 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 09:03:31.628186 kubelet[2681]: E0813 09:03:31.628030 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 09:03:31.630193 kubelet[2681]: E0813 09:03:31.630158 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 09:03:31.630193 kubelet[2681]: W0813 09:03:31.630184 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 09:03:31.630583 kubelet[2681]: E0813 09:03:31.630397 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 09:03:31.631104 kubelet[2681]: E0813 09:03:31.631057 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 09:03:31.631104 kubelet[2681]: W0813 09:03:31.631102 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 09:03:31.631327 kubelet[2681]: E0813 09:03:31.631208 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 09:03:31.632174 kubelet[2681]: E0813 09:03:31.632151 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 09:03:31.632174 kubelet[2681]: W0813 09:03:31.632171 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 09:03:31.632537 kubelet[2681]: E0813 09:03:31.632209 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 09:03:31.633188 kubelet[2681]: E0813 09:03:31.633165 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 09:03:31.633188 kubelet[2681]: W0813 09:03:31.633185 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 09:03:31.633512 kubelet[2681]: E0813 09:03:31.633276 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 09:03:31.634359 kubelet[2681]: E0813 09:03:31.634336 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 09:03:31.634359 kubelet[2681]: W0813 09:03:31.634356 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 09:03:31.634603 kubelet[2681]: E0813 09:03:31.634442 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 09:03:31.635137 kubelet[2681]: E0813 09:03:31.635115 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 09:03:31.635137 kubelet[2681]: W0813 09:03:31.635135 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 09:03:31.635348 kubelet[2681]: E0813 09:03:31.635219 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 09:03:31.635919 kubelet[2681]: E0813 09:03:31.635893 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 09:03:31.635919 kubelet[2681]: W0813 09:03:31.635913 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 09:03:31.636452 kubelet[2681]: E0813 09:03:31.635999 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 09:03:31.637411 kubelet[2681]: E0813 09:03:31.637387 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 09:03:31.637411 kubelet[2681]: W0813 09:03:31.637408 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 09:03:31.638165 kubelet[2681]: E0813 09:03:31.637493 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 09:03:31.638684 kubelet[2681]: E0813 09:03:31.638547 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 09:03:31.638684 kubelet[2681]: W0813 09:03:31.638568 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 09:03:31.638684 kubelet[2681]: E0813 09:03:31.638603 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 09:03:31.639476 kubelet[2681]: E0813 09:03:31.639453 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 09:03:31.639476 kubelet[2681]: W0813 09:03:31.639473 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 09:03:31.639689 kubelet[2681]: E0813 09:03:31.639556 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 09:03:31.641235 kubelet[2681]: E0813 09:03:31.641211 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 09:03:31.641235 kubelet[2681]: W0813 09:03:31.641233 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 09:03:31.642402 kubelet[2681]: E0813 09:03:31.642112 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 09:03:31.642923 kubelet[2681]: E0813 09:03:31.642900 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 09:03:31.642923 kubelet[2681]: W0813 09:03:31.642923 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 09:03:31.643145 kubelet[2681]: E0813 09:03:31.643039 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 09:03:31.644273 kubelet[2681]: E0813 09:03:31.644222 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 09:03:31.644273 kubelet[2681]: W0813 09:03:31.644272 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 09:03:31.644805 kubelet[2681]: E0813 09:03:31.644384 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 09:03:31.645249 kubelet[2681]: E0813 09:03:31.645225 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 09:03:31.645249 kubelet[2681]: W0813 09:03:31.645247 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 09:03:31.645464 kubelet[2681]: E0813 09:03:31.645352 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 09:03:31.645731 kubelet[2681]: E0813 09:03:31.645708 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 09:03:31.645731 kubelet[2681]: W0813 09:03:31.645729 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 09:03:31.645868 kubelet[2681]: E0813 09:03:31.645746 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 09:03:31.677395 kubelet[2681]: E0813 09:03:31.677251 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 09:03:31.677688 kubelet[2681]: W0813 09:03:31.677572 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 09:03:31.677688 kubelet[2681]: E0813 09:03:31.677609 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 09:03:31.742185 containerd[1509]: time="2025-08-13T09:03:31.741673996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-97bd97c76-497pb,Uid:a61a7cf5-4d9d-4b12-a44f-5acff18d6cd2,Namespace:calico-system,Attempt:0,} returns sandbox id \"b683b182cc35fd8f3a4e5217acd8d1391ece885ec0193282eb08516cbbc6870e\"" Aug 13 09:03:31.764686 containerd[1509]: time="2025-08-13T09:03:31.764489552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 09:03:31.787219 containerd[1509]: time="2025-08-13T09:03:31.786724566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4z2mc,Uid:77e25716-a0e7-4ee5-b89a-daad4d940a10,Namespace:calico-system,Attempt:0,} returns sandbox id \"e74c0e508aa16e1164b36a3b5a4ae537006283e4fae7596d5ce4cf3058bef4cb\"" Aug 13 09:03:33.238702 kubelet[2681]: E0813 09:03:33.238479 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g96xv" podUID="acc500b1-7473-42bd-b48d-00d555107b78" Aug 13 09:03:33.694677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1313163318.mount: Deactivated successfully. 
Aug 13 09:03:35.178757 containerd[1509]: time="2025-08-13T09:03:35.178676770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 09:03:35.180222 containerd[1509]: time="2025-08-13T09:03:35.179920898Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Aug 13 09:03:35.181003 containerd[1509]: time="2025-08-13T09:03:35.180930060Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 09:03:35.184550 containerd[1509]: time="2025-08-13T09:03:35.184463482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 09:03:35.188500 containerd[1509]: time="2025-08-13T09:03:35.188455099Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 3.42389352s" Aug 13 09:03:35.188588 containerd[1509]: time="2025-08-13T09:03:35.188514244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Aug 13 09:03:35.193947 containerd[1509]: time="2025-08-13T09:03:35.191659334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 09:03:35.214007 containerd[1509]: time="2025-08-13T09:03:35.213955256Z" level=info msg="CreateContainer within sandbox \"b683b182cc35fd8f3a4e5217acd8d1391ece885ec0193282eb08516cbbc6870e\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 09:03:35.233757 kubelet[2681]: E0813 09:03:35.233578 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g96xv" podUID="acc500b1-7473-42bd-b48d-00d555107b78" Aug 13 09:03:35.248770 containerd[1509]: time="2025-08-13T09:03:35.247320599Z" level=info msg="CreateContainer within sandbox \"b683b182cc35fd8f3a4e5217acd8d1391ece885ec0193282eb08516cbbc6870e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c325b3041f4454bf5acd09c844586083c94f25cb412f2cf312448cccf8aafbae\"" Aug 13 09:03:35.248770 containerd[1509]: time="2025-08-13T09:03:35.247991665Z" level=info msg="StartContainer for \"c325b3041f4454bf5acd09c844586083c94f25cb412f2cf312448cccf8aafbae\"" Aug 13 09:03:35.338999 systemd[1]: Started cri-containerd-c325b3041f4454bf5acd09c844586083c94f25cb412f2cf312448cccf8aafbae.scope - libcontainer container c325b3041f4454bf5acd09c844586083c94f25cb412f2cf312448cccf8aafbae. 
Aug 13 09:03:35.418206 containerd[1509]: time="2025-08-13T09:03:35.418144673Z" level=info msg="StartContainer for \"c325b3041f4454bf5acd09c844586083c94f25cb412f2cf312448cccf8aafbae\" returns successfully" Aug 13 09:03:36.438276 kubelet[2681]: I0813 09:03:36.438166 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-97bd97c76-497pb" podStartSLOduration=3.002274753 podStartE2EDuration="6.438094155s" podCreationTimestamp="2025-08-13 09:03:30 +0000 UTC" firstStartedPulling="2025-08-13 09:03:31.75510128 +0000 UTC m=+24.745116677" lastFinishedPulling="2025-08-13 09:03:35.190920674 +0000 UTC m=+28.180936079" observedRunningTime="2025-08-13 09:03:36.437578644 +0000 UTC m=+29.427594053" watchObservedRunningTime="2025-08-13 09:03:36.438094155 +0000 UTC m=+29.428109560" Aug 13 09:03:36.444473 kubelet[2681]: E0813 09:03:36.444248 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 09:03:36.444473 kubelet[2681]: W0813 09:03:36.444295 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 09:03:36.444473 kubelet[2681]: E0813 09:03:36.444343 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Aug 13 09:03:36.444977 kubelet[2681]: E0813 09:03:36.444758 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.444977 kubelet[2681]: W0813 09:03:36.444779 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.444977 kubelet[2681]: E0813 09:03:36.444796 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:36.445392 kubelet[2681]: E0813 09:03:36.445371 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.445770 kubelet[2681]: W0813 09:03:36.445499 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.445770 kubelet[2681]: E0813 09:03:36.445538 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:36.446117 kubelet[2681]: E0813 09:03:36.446096 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.446371 kubelet[2681]: W0813 09:03:36.446215 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.446371 kubelet[2681]: E0813 09:03:36.446240 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:36.446831 kubelet[2681]: E0813 09:03:36.446681 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.446831 kubelet[2681]: W0813 09:03:36.446700 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.446831 kubelet[2681]: E0813 09:03:36.446718 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:36.447380 kubelet[2681]: E0813 09:03:36.447187 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.447380 kubelet[2681]: W0813 09:03:36.447201 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.447380 kubelet[2681]: E0813 09:03:36.447218 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:36.447952 kubelet[2681]: E0813 09:03:36.447754 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.447952 kubelet[2681]: W0813 09:03:36.447773 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.447952 kubelet[2681]: E0813 09:03:36.447825 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:36.448377 kubelet[2681]: E0813 09:03:36.448216 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.448377 kubelet[2681]: W0813 09:03:36.448234 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.448377 kubelet[2681]: E0813 09:03:36.448251 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:36.448646 kubelet[2681]: E0813 09:03:36.448626 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.448746 kubelet[2681]: W0813 09:03:36.448725 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.448857 kubelet[2681]: E0813 09:03:36.448836 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:36.449392 kubelet[2681]: E0813 09:03:36.449239 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.449392 kubelet[2681]: W0813 09:03:36.449258 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.449392 kubelet[2681]: E0813 09:03:36.449274 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:36.450323 kubelet[2681]: E0813 09:03:36.449619 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.450323 kubelet[2681]: W0813 09:03:36.449634 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.450323 kubelet[2681]: E0813 09:03:36.449649 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:36.450323 kubelet[2681]: E0813 09:03:36.449989 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.450323 kubelet[2681]: W0813 09:03:36.450004 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.450323 kubelet[2681]: E0813 09:03:36.450019 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:36.451164 kubelet[2681]: E0813 09:03:36.450330 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.451164 kubelet[2681]: W0813 09:03:36.450345 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.451164 kubelet[2681]: E0813 09:03:36.450360 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:36.451164 kubelet[2681]: E0813 09:03:36.450657 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.451164 kubelet[2681]: W0813 09:03:36.450673 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.451164 kubelet[2681]: E0813 09:03:36.450688 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:36.451164 kubelet[2681]: E0813 09:03:36.450988 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.451164 kubelet[2681]: W0813 09:03:36.451002 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.451164 kubelet[2681]: E0813 09:03:36.451017 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:36.458653 kubelet[2681]: E0813 09:03:36.458617 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.458870 kubelet[2681]: W0813 09:03:36.458747 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.458870 kubelet[2681]: E0813 09:03:36.458773 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:36.459552 kubelet[2681]: E0813 09:03:36.459383 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.459552 kubelet[2681]: W0813 09:03:36.459402 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.459552 kubelet[2681]: E0813 09:03:36.459441 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:36.460062 kubelet[2681]: E0813 09:03:36.460038 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.460170 kubelet[2681]: W0813 09:03:36.460062 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.460170 kubelet[2681]: E0813 09:03:36.460116 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:36.460473 kubelet[2681]: E0813 09:03:36.460453 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.460473 kubelet[2681]: W0813 09:03:36.460473 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.460704 kubelet[2681]: E0813 09:03:36.460560 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:36.460976 kubelet[2681]: E0813 09:03:36.460956 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.460976 kubelet[2681]: W0813 09:03:36.460975 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.461358 kubelet[2681]: E0813 09:03:36.461056 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:36.461358 kubelet[2681]: E0813 09:03:36.461258 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.461358 kubelet[2681]: W0813 09:03:36.461272 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.461572 kubelet[2681]: E0813 09:03:36.461399 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:36.461865 kubelet[2681]: E0813 09:03:36.461842 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.461953 kubelet[2681]: W0813 09:03:36.461865 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.461953 kubelet[2681]: E0813 09:03:36.461903 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:36.462238 kubelet[2681]: E0813 09:03:36.462219 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.462299 kubelet[2681]: W0813 09:03:36.462239 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.462299 kubelet[2681]: E0813 09:03:36.462263 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:36.462632 kubelet[2681]: E0813 09:03:36.462612 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.462632 kubelet[2681]: W0813 09:03:36.462632 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.463205 kubelet[2681]: E0813 09:03:36.462711 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:36.463205 kubelet[2681]: E0813 09:03:36.462938 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.463205 kubelet[2681]: W0813 09:03:36.462951 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.463338 kubelet[2681]: E0813 09:03:36.463223 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.463338 kubelet[2681]: W0813 09:03:36.463237 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.463338 kubelet[2681]: E0813 09:03:36.463252 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:36.463720 kubelet[2681]: E0813 09:03:36.463695 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.463720 kubelet[2681]: W0813 09:03:36.463716 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.463832 kubelet[2681]: E0813 09:03:36.463736 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:36.464680 kubelet[2681]: E0813 09:03:36.464113 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.464680 kubelet[2681]: W0813 09:03:36.464136 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.464680 kubelet[2681]: E0813 09:03:36.464153 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:36.464864 kubelet[2681]: E0813 09:03:36.464742 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.464864 kubelet[2681]: W0813 09:03:36.464787 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.464864 kubelet[2681]: E0813 09:03:36.464807 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:36.465044 kubelet[2681]: E0813 09:03:36.464900 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:36.465292 kubelet[2681]: E0813 09:03:36.465272 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.465292 kubelet[2681]: W0813 09:03:36.465292 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.465616 kubelet[2681]: E0813 09:03:36.465316 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:36.465666 kubelet[2681]: E0813 09:03:36.465624 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.465666 kubelet[2681]: W0813 09:03:36.465651 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.465768 kubelet[2681]: E0813 09:03:36.465667 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:36.466453 kubelet[2681]: E0813 09:03:36.466282 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.466453 kubelet[2681]: W0813 09:03:36.466303 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.466453 kubelet[2681]: E0813 09:03:36.466328 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:36.466912 kubelet[2681]: E0813 09:03:36.466831 2681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 09:03:36.466912 kubelet[2681]: W0813 09:03:36.466850 2681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 09:03:36.466912 kubelet[2681]: E0813 09:03:36.466868 2681 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 09:03:37.123126 containerd[1509]: time="2025-08-13T09:03:37.121979785Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:03:37.125048 containerd[1509]: time="2025-08-13T09:03:37.124974225Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956"
Aug 13 09:03:37.125404 containerd[1509]: time="2025-08-13T09:03:37.125346198Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:03:37.132789 containerd[1509]: time="2025-08-13T09:03:37.132528984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:03:37.134123 containerd[1509]: time="2025-08-13T09:03:37.133696780Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.941973531s"
Aug 13 09:03:37.134123 containerd[1509]: time="2025-08-13T09:03:37.133754977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\""
Aug 13 09:03:37.138341 containerd[1509]: time="2025-08-13T09:03:37.138278247Z" level=info msg="CreateContainer within sandbox \"e74c0e508aa16e1164b36a3b5a4ae537006283e4fae7596d5ce4cf3058bef4cb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Aug 13 09:03:37.168393 containerd[1509]: time="2025-08-13T09:03:37.168343493Z" level=info msg="CreateContainer within sandbox \"e74c0e508aa16e1164b36a3b5a4ae537006283e4fae7596d5ce4cf3058bef4cb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e87499915ac0efec6c2f3136843d46e1f1dac41ba7e31af1bfc393f6dac2caa2\""
Aug 13 09:03:37.169848 containerd[1509]: time="2025-08-13T09:03:37.169793559Z" level=info msg="StartContainer for \"e87499915ac0efec6c2f3136843d46e1f1dac41ba7e31af1bfc393f6dac2caa2\""
Aug 13 09:03:37.227628 kubelet[2681]: E0813 09:03:37.227257 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g96xv" podUID="acc500b1-7473-42bd-b48d-00d555107b78"
Aug 13 09:03:37.239371 systemd[1]: Started cri-containerd-e87499915ac0efec6c2f3136843d46e1f1dac41ba7e31af1bfc393f6dac2caa2.scope - libcontainer container e87499915ac0efec6c2f3136843d46e1f1dac41ba7e31af1bfc393f6dac2caa2.
Aug 13 09:03:37.299119 containerd[1509]: time="2025-08-13T09:03:37.298492138Z" level=info msg="StartContainer for \"e87499915ac0efec6c2f3136843d46e1f1dac41ba7e31af1bfc393f6dac2caa2\" returns successfully"
Aug 13 09:03:37.317023 systemd[1]: cri-containerd-e87499915ac0efec6c2f3136843d46e1f1dac41ba7e31af1bfc393f6dac2caa2.scope: Deactivated successfully.
Aug 13 09:03:37.355483 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e87499915ac0efec6c2f3136843d46e1f1dac41ba7e31af1bfc393f6dac2caa2-rootfs.mount: Deactivated successfully.
Aug 13 09:03:37.399627 containerd[1509]: time="2025-08-13T09:03:37.374220866Z" level=info msg="shim disconnected" id=e87499915ac0efec6c2f3136843d46e1f1dac41ba7e31af1bfc393f6dac2caa2 namespace=k8s.io
Aug 13 09:03:37.399627 containerd[1509]: time="2025-08-13T09:03:37.399243906Z" level=warning msg="cleaning up after shim disconnected" id=e87499915ac0efec6c2f3136843d46e1f1dac41ba7e31af1bfc393f6dac2caa2 namespace=k8s.io
Aug 13 09:03:37.399627 containerd[1509]: time="2025-08-13T09:03:37.399274119Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 09:03:37.406690 kubelet[2681]: I0813 09:03:37.406650 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 09:03:38.415927 containerd[1509]: time="2025-08-13T09:03:38.415371965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Aug 13 09:03:39.227295 kubelet[2681]: E0813 09:03:39.226354 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g96xv" podUID="acc500b1-7473-42bd-b48d-00d555107b78"
Aug 13 09:03:41.227760 kubelet[2681]: E0813 09:03:41.226384 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g96xv" podUID="acc500b1-7473-42bd-b48d-00d555107b78"
Aug 13 09:03:43.075487 systemd[1]: Started sshd@9-10.230.18.154:22-121.127.231.238:54822.service - OpenSSH per-connection server daemon (121.127.231.238:54822).
Aug 13 09:03:43.230594 kubelet[2681]: E0813 09:03:43.230526 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g96xv" podUID="acc500b1-7473-42bd-b48d-00d555107b78"
Aug 13 09:03:43.485578 containerd[1509]: time="2025-08-13T09:03:43.485487222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:03:43.486878 containerd[1509]: time="2025-08-13T09:03:43.486811755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221"
Aug 13 09:03:43.487846 containerd[1509]: time="2025-08-13T09:03:43.487747689Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:03:43.496261 containerd[1509]: time="2025-08-13T09:03:43.496192791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:03:43.497886 containerd[1509]: time="2025-08-13T09:03:43.497702015Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 5.081998682s"
Aug 13 09:03:43.497886 containerd[1509]: time="2025-08-13T09:03:43.497759146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\""
Aug 13 09:03:43.502694 containerd[1509]: time="2025-08-13T09:03:43.502207011Z" level=info msg="CreateContainer within sandbox \"e74c0e508aa16e1164b36a3b5a4ae537006283e4fae7596d5ce4cf3058bef4cb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Aug 13 09:03:43.525582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2017032516.mount: Deactivated successfully.
Aug 13 09:03:43.541565 containerd[1509]: time="2025-08-13T09:03:43.541354825Z" level=info msg="CreateContainer within sandbox \"e74c0e508aa16e1164b36a3b5a4ae537006283e4fae7596d5ce4cf3058bef4cb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c563d2e7ad0b30a4be8d2189c408404827556929c307ff8b8fc597e1cee9b7f9\""
Aug 13 09:03:43.544268 containerd[1509]: time="2025-08-13T09:03:43.542421192Z" level=info msg="StartContainer for \"c563d2e7ad0b30a4be8d2189c408404827556929c307ff8b8fc597e1cee9b7f9\""
Aug 13 09:03:43.612401 systemd[1]: Started cri-containerd-c563d2e7ad0b30a4be8d2189c408404827556929c307ff8b8fc597e1cee9b7f9.scope - libcontainer container c563d2e7ad0b30a4be8d2189c408404827556929c307ff8b8fc597e1cee9b7f9.
Aug 13 09:03:43.750504 containerd[1509]: time="2025-08-13T09:03:43.750169696Z" level=info msg="StartContainer for \"c563d2e7ad0b30a4be8d2189c408404827556929c307ff8b8fc597e1cee9b7f9\" returns successfully"
Aug 13 09:03:44.644990 sshd[3440]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=121.127.231.238 user=root
Aug 13 09:03:44.939851 systemd[1]: cri-containerd-c563d2e7ad0b30a4be8d2189c408404827556929c307ff8b8fc597e1cee9b7f9.scope: Deactivated successfully.
Aug 13 09:03:44.995356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c563d2e7ad0b30a4be8d2189c408404827556929c307ff8b8fc597e1cee9b7f9-rootfs.mount: Deactivated successfully.
Aug 13 09:03:45.011808 containerd[1509]: time="2025-08-13T09:03:45.011530481Z" level=info msg="shim disconnected" id=c563d2e7ad0b30a4be8d2189c408404827556929c307ff8b8fc597e1cee9b7f9 namespace=k8s.io
Aug 13 09:03:45.011808 containerd[1509]: time="2025-08-13T09:03:45.011747442Z" level=warning msg="cleaning up after shim disconnected" id=c563d2e7ad0b30a4be8d2189c408404827556929c307ff8b8fc597e1cee9b7f9 namespace=k8s.io
Aug 13 09:03:45.011808 containerd[1509]: time="2025-08-13T09:03:45.011775796Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 09:03:45.041863 kubelet[2681]: I0813 09:03:45.041354 2681 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Aug 13 09:03:45.106669 systemd[1]: Created slice kubepods-burstable-pod7136f5b0_24ac_4259_bcbd_579709672a99.slice - libcontainer container kubepods-burstable-pod7136f5b0_24ac_4259_bcbd_579709672a99.slice.
Aug 13 09:03:45.120133 systemd[1]: Created slice kubepods-burstable-pod58f120ab_dea2_40e9_9eb7_22d71f6b8425.slice - libcontainer container kubepods-burstable-pod58f120ab_dea2_40e9_9eb7_22d71f6b8425.slice.
Aug 13 09:03:45.133708 kubelet[2681]: W0813 09:03:45.132815 2681 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-cz57v.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'srv-cz57v.gb1.brightbox.com' and this object
Aug 13 09:03:45.135202 kubelet[2681]: E0813 09:03:45.135116 2681 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:srv-cz57v.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'srv-cz57v.gb1.brightbox.com' and this object" logger="UnhandledError"
Aug 13 09:03:45.135202 kubelet[2681]: W0813 09:03:45.135184 2681 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:srv-cz57v.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'srv-cz57v.gb1.brightbox.com' and this object
Aug 13 09:03:45.135372 kubelet[2681]: E0813 09:03:45.135216 2681 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:srv-cz57v.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'srv-cz57v.gb1.brightbox.com' and this object" logger="UnhandledError"
Aug 13 09:03:45.147206 systemd[1]: Created slice kubepods-besteffort-podc09c3a84_7d45_4d14_9d0c_95a02288ec6b.slice - libcontainer container kubepods-besteffort-podc09c3a84_7d45_4d14_9d0c_95a02288ec6b.slice.
Aug 13 09:03:45.153130 kubelet[2681]: W0813 09:03:45.152256 2681 reflector.go:569] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:srv-cz57v.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'srv-cz57v.gb1.brightbox.com' and this object
Aug 13 09:03:45.153130 kubelet[2681]: E0813 09:03:45.152307 2681 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:srv-cz57v.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'srv-cz57v.gb1.brightbox.com' and this object" logger="UnhandledError"
Aug 13 09:03:45.162950 systemd[1]: Created slice kubepods-besteffort-pod4fef4dc0_3721_4011_811d_4e894024c3f2.slice - libcontainer container kubepods-besteffort-pod4fef4dc0_3721_4011_811d_4e894024c3f2.slice.
Aug 13 09:03:45.179856 systemd[1]: Created slice kubepods-besteffort-pod038dba4d_abe8_4783_8541_5e36b5853cd7.slice - libcontainer container kubepods-besteffort-pod038dba4d_abe8_4783_8541_5e36b5853cd7.slice.
Aug 13 09:03:45.193678 systemd[1]: Created slice kubepods-besteffort-pod5548a3fa_5234_4c05_aa6f_4b5c715b74b3.slice - libcontainer container kubepods-besteffort-pod5548a3fa_5234_4c05_aa6f_4b5c715b74b3.slice.
Aug 13 09:03:45.207567 systemd[1]: Created slice kubepods-besteffort-pod16bc31e7_bfd7_43e8_b302_ea6efb7b3ff4.slice - libcontainer container kubepods-besteffort-pod16bc31e7_bfd7_43e8_b302_ea6efb7b3ff4.slice.
Aug 13 09:03:45.223892 kubelet[2681]: I0813 09:03:45.223835 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-572pd\" (UniqueName: \"kubernetes.io/projected/7136f5b0-24ac-4259-bcbd-579709672a99-kube-api-access-572pd\") pod \"coredns-668d6bf9bc-54ftd\" (UID: \"7136f5b0-24ac-4259-bcbd-579709672a99\") " pod="kube-system/coredns-668d6bf9bc-54ftd"
Aug 13 09:03:45.224110 kubelet[2681]: I0813 09:03:45.223915 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/038dba4d-abe8-4783-8541-5e36b5853cd7-config\") pod \"goldmane-768f4c5c69-v76x2\" (UID: \"038dba4d-abe8-4783-8541-5e36b5853cd7\") " pod="calico-system/goldmane-768f4c5c69-v76x2"
Aug 13 09:03:45.224110 kubelet[2681]: I0813 09:03:45.223958 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgf4b\" (UniqueName: \"kubernetes.io/projected/4fef4dc0-3721-4011-811d-4e894024c3f2-kube-api-access-bgf4b\") pod \"calico-kube-controllers-588668cdb7-gwwpt\" (UID: \"4fef4dc0-3721-4011-811d-4e894024c3f2\") " pod="calico-system/calico-kube-controllers-588668cdb7-gwwpt"
Aug 13 09:03:45.224110 kubelet[2681]: I0813 09:03:45.223992 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7136f5b0-24ac-4259-bcbd-579709672a99-config-volume\") pod \"coredns-668d6bf9bc-54ftd\" (UID: \"7136f5b0-24ac-4259-bcbd-579709672a99\") " pod="kube-system/coredns-668d6bf9bc-54ftd"
Aug 13 09:03:45.224110 kubelet[2681]: I0813 09:03:45.224039 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/038dba4d-abe8-4783-8541-5e36b5853cd7-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-v76x2\" (UID: \"038dba4d-abe8-4783-8541-5e36b5853cd7\") " pod="calico-system/goldmane-768f4c5c69-v76x2"
Aug 13 09:03:45.224110 kubelet[2681]: I0813 09:03:45.224092 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58f120ab-dea2-40e9-9eb7-22d71f6b8425-config-volume\") pod \"coredns-668d6bf9bc-wpdgx\" (UID: \"58f120ab-dea2-40e9-9eb7-22d71f6b8425\") " pod="kube-system/coredns-668d6bf9bc-wpdgx"
Aug 13 09:03:45.224404 kubelet[2681]: I0813 09:03:45.224123 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rd5b\" (UniqueName: \"kubernetes.io/projected/58f120ab-dea2-40e9-9eb7-22d71f6b8425-kube-api-access-5rd5b\") pod \"coredns-668d6bf9bc-wpdgx\" (UID: \"58f120ab-dea2-40e9-9eb7-22d71f6b8425\") " pod="kube-system/coredns-668d6bf9bc-wpdgx"
Aug 13 09:03:45.224404 kubelet[2681]: I0813 09:03:45.224156 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c09c3a84-7d45-4d14-9d0c-95a02288ec6b-calico-apiserver-certs\") pod \"calico-apiserver-5c795f975d-rzm28\" (UID: \"c09c3a84-7d45-4d14-9d0c-95a02288ec6b\") " pod="calico-apiserver/calico-apiserver-5c795f975d-rzm28"
Aug 13 09:03:45.224404 kubelet[2681]: I0813 09:03:45.224184 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf7dx\" (UniqueName: \"kubernetes.io/projected/c09c3a84-7d45-4d14-9d0c-95a02288ec6b-kube-api-access-gf7dx\") pod \"calico-apiserver-5c795f975d-rzm28\" (UID: \"c09c3a84-7d45-4d14-9d0c-95a02288ec6b\") " pod="calico-apiserver/calico-apiserver-5c795f975d-rzm28"
Aug 13 09:03:45.224404 kubelet[2681]: I0813 09:03:45.224210 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fef4dc0-3721-4011-811d-4e894024c3f2-tigera-ca-bundle\") pod \"calico-kube-controllers-588668cdb7-gwwpt\" (UID: \"4fef4dc0-3721-4011-811d-4e894024c3f2\") " pod="calico-system/calico-kube-controllers-588668cdb7-gwwpt"
Aug 13 09:03:45.224404 kubelet[2681]: I0813 09:03:45.224235 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/038dba4d-abe8-4783-8541-5e36b5853cd7-goldmane-key-pair\") pod \"goldmane-768f4c5c69-v76x2\" (UID: \"038dba4d-abe8-4783-8541-5e36b5853cd7\") " pod="calico-system/goldmane-768f4c5c69-v76x2"
Aug 13 09:03:45.224662 kubelet[2681]: I0813 09:03:45.224323 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4h7p\" (UniqueName: \"kubernetes.io/projected/038dba4d-abe8-4783-8541-5e36b5853cd7-kube-api-access-x4h7p\") pod \"goldmane-768f4c5c69-v76x2\" (UID: \"038dba4d-abe8-4783-8541-5e36b5853cd7\") " pod="calico-system/goldmane-768f4c5c69-v76x2"
Aug 13 09:03:45.224662 kubelet[2681]: I0813 09:03:45.224367 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4f5z\" (UniqueName: \"kubernetes.io/projected/5548a3fa-5234-4c05-aa6f-4b5c715b74b3-kube-api-access-w4f5z\") pod \"whisker-6c4cd9987c-7l522\" (UID: \"5548a3fa-5234-4c05-aa6f-4b5c715b74b3\") " pod="calico-system/whisker-6c4cd9987c-7l522"
Aug 13 09:03:45.224662 kubelet[2681]: I0813 09:03:45.224418 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5548a3fa-5234-4c05-aa6f-4b5c715b74b3-whisker-backend-key-pair\") pod \"whisker-6c4cd9987c-7l522\" (UID: \"5548a3fa-5234-4c05-aa6f-4b5c715b74b3\") " pod="calico-system/whisker-6c4cd9987c-7l522"
Aug 13 09:03:45.224662 kubelet[2681]: I0813 09:03:45.224449 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5548a3fa-5234-4c05-aa6f-4b5c715b74b3-whisker-ca-bundle\") pod \"whisker-6c4cd9987c-7l522\" (UID: \"5548a3fa-5234-4c05-aa6f-4b5c715b74b3\") " pod="calico-system/whisker-6c4cd9987c-7l522"
Aug 13 09:03:45.245259 systemd[1]: Created slice kubepods-besteffort-podacc500b1_7473_42bd_b48d_00d555107b78.slice - libcontainer container kubepods-besteffort-podacc500b1_7473_42bd_b48d_00d555107b78.slice.
Aug 13 09:03:45.258836 containerd[1509]: time="2025-08-13T09:03:45.258715663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g96xv,Uid:acc500b1-7473-42bd-b48d-00d555107b78,Namespace:calico-system,Attempt:0,}"
Aug 13 09:03:45.325291 kubelet[2681]: I0813 09:03:45.325242 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/16bc31e7-bfd7-43e8-b302-ea6efb7b3ff4-calico-apiserver-certs\") pod \"calico-apiserver-5c795f975d-z5l2b\" (UID: \"16bc31e7-bfd7-43e8-b302-ea6efb7b3ff4\") " pod="calico-apiserver/calico-apiserver-5c795f975d-z5l2b"
Aug 13 09:03:45.326655 kubelet[2681]: I0813 09:03:45.326353 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq65v\" (UniqueName: \"kubernetes.io/projected/16bc31e7-bfd7-43e8-b302-ea6efb7b3ff4-kube-api-access-bq65v\") pod \"calico-apiserver-5c795f975d-z5l2b\" (UID: \"16bc31e7-bfd7-43e8-b302-ea6efb7b3ff4\") " pod="calico-apiserver/calico-apiserver-5c795f975d-z5l2b"
Aug 13 09:03:45.416138 containerd[1509]: time="2025-08-13T09:03:45.415785288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-54ftd,Uid:7136f5b0-24ac-4259-bcbd-579709672a99,Namespace:kube-system,Attempt:0,}"
Aug 13 09:03:45.441360 containerd[1509]: time="2025-08-13T09:03:45.441310565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wpdgx,Uid:58f120ab-dea2-40e9-9eb7-22d71f6b8425,Namespace:kube-system,Attempt:0,}"
Aug 13 09:03:45.453424 containerd[1509]: time="2025-08-13T09:03:45.452307091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Aug 13 09:03:45.475725 containerd[1509]: time="2025-08-13T09:03:45.475472517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-588668cdb7-gwwpt,Uid:4fef4dc0-3721-4011-811d-4e894024c3f2,Namespace:calico-system,Attempt:0,}"
Aug 13 09:03:45.490579 containerd[1509]: time="2025-08-13T09:03:45.490473628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-v76x2,Uid:038dba4d-abe8-4783-8541-5e36b5853cd7,Namespace:calico-system,Attempt:0,}"
Aug 13 09:03:45.716733 containerd[1509]: time="2025-08-13T09:03:45.716223989Z" level=error msg="Failed to destroy network for sandbox \"f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 09:03:45.718463 containerd[1509]: time="2025-08-13T09:03:45.718206950Z" level=error msg="Failed to destroy network for sandbox \"76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 09:03:45.727090 containerd[1509]: time="2025-08-13T09:03:45.727022299Z" level=error msg="encountered an error cleaning up failed sandbox \"f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 09:03:45.728424 containerd[1509]: time="2025-08-13T09:03:45.728359413Z" level=error msg="encountered an error cleaning up failed sandbox \"76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 09:03:45.728504 containerd[1509]: time="2025-08-13T09:03:45.728457117Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-54ftd,Uid:7136f5b0-24ac-4259-bcbd-579709672a99,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 09:03:45.728960 containerd[1509]: time="2025-08-13T09:03:45.728579086Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wpdgx,Uid:58f120ab-dea2-40e9-9eb7-22d71f6b8425,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 09:03:45.735649 containerd[1509]: time="2025-08-13T09:03:45.735607505Z" level=error msg="Failed to destroy network for sandbox \"c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 09:03:45.738523 containerd[1509]: time="2025-08-13T09:03:45.738473510Z" level=error msg="encountered an error cleaning up failed sandbox \"c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 09:03:45.738693 containerd[1509]: time="2025-08-13T09:03:45.738657024Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g96xv,Uid:acc500b1-7473-42bd-b48d-00d555107b78,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 09:03:45.740902 kubelet[2681]: E0813 09:03:45.740063 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 09:03:45.740902 kubelet[2681]: E0813 09:03:45.740223 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wpdgx"
Aug 13 09:03:45.740902 kubelet[2681]: E0813 09:03:45.740225 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 09:03:45.740902 kubelet[2681]: E0813 09:03:45.740271 2681 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wpdgx"
Aug 13 09:03:45.741176 kubelet[2681]: E0813 09:03:45.740304 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-54ftd"
Aug 13 09:03:45.741176 kubelet[2681]: E0813 09:03:45.740360 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wpdgx_kube-system(58f120ab-dea2-40e9-9eb7-22d71f6b8425)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wpdgx_kube-system(58f120ab-dea2-40e9-9eb7-22d71f6b8425)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wpdgx" podUID="58f120ab-dea2-40e9-9eb7-22d71f6b8425"
Aug 13 09:03:45.741176 kubelet[2681]: E0813 09:03:45.740335 2681 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-54ftd"
Aug 13 09:03:45.741481 kubelet[2681]: E0813 09:03:45.740724 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-54ftd_kube-system(7136f5b0-24ac-4259-bcbd-579709672a99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-54ftd_kube-system(7136f5b0-24ac-4259-bcbd-579709672a99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-54ftd" podUID="7136f5b0-24ac-4259-bcbd-579709672a99"
Aug 13 09:03:45.743575 kubelet[2681]: E0813 09:03:45.740840 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 09:03:45.743675 kubelet[2681]: E0813 09:03:45.743625 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g96xv"
Aug 13 09:03:45.743675 kubelet[2681]: E0813 09:03:45.743660 2681 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g96xv"
Aug 13 09:03:45.743847 kubelet[2681]: E0813 09:03:45.743701 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-g96xv_calico-system(acc500b1-7473-42bd-b48d-00d555107b78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-g96xv_calico-system(acc500b1-7473-42bd-b48d-00d555107b78)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g96xv" podUID="acc500b1-7473-42bd-b48d-00d555107b78"
Aug 13 09:03:45.758189 containerd[1509]: time="2025-08-13T09:03:45.757347684Z" level=error msg="Failed to destroy network for sandbox \"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 09:03:45.758189 containerd[1509]: time="2025-08-13T09:03:45.757991162Z" level=error msg="encountered an error cleaning up failed sandbox \"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 09:03:45.758189 containerd[1509]: time="2025-08-13T09:03:45.758050514Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-588668cdb7-gwwpt,Uid:4fef4dc0-3721-4011-811d-4e894024c3f2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 09:03:45.759472 kubelet[2681]: E0813 09:03:45.758980 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 09:03:45.759472 kubelet[2681]: E0813 09:03:45.759051 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-588668cdb7-gwwpt"
Aug 13 09:03:45.759472 kubelet[2681]: E0813 09:03:45.759101 2681 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-588668cdb7-gwwpt"
Aug 13 09:03:45.760249 kubelet[2681]: E0813 09:03:45.759443 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-588668cdb7-gwwpt_calico-system(4fef4dc0-3721-4011-811d-4e894024c3f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-588668cdb7-gwwpt_calico-system(4fef4dc0-3721-4011-811d-4e894024c3f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-588668cdb7-gwwpt" podUID="4fef4dc0-3721-4011-811d-4e894024c3f2"
Aug 13 09:03:45.765954 containerd[1509]: time="2025-08-13T09:03:45.765910657Z" level=error msg="Failed to destroy network for sandbox \"6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 09:03:45.766785 containerd[1509]: time="2025-08-13T09:03:45.766599531Z" level=error msg="encountered an error cleaning up failed sandbox \"6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 09:03:45.766785 containerd[1509]: time="2025-08-13T09:03:45.766678475Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-v76x2,Uid:038dba4d-abe8-4783-8541-5e36b5853cd7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 09:03:45.768109 kubelet[2681]: E0813 09:03:45.767194 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 09:03:45.768109 kubelet[2681]: E0813 09:03:45.767252 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-v76x2"
Aug 13 09:03:45.768109 kubelet[2681]: E0813 09:03:45.767291 2681 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-v76x2"
Aug 13 09:03:45.768320 kubelet[2681]: E0813 09:03:45.767364 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-v76x2_calico-system(038dba4d-abe8-4783-8541-5e36b5853cd7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-v76x2_calico-system(038dba4d-abe8-4783-8541-5e36b5853cd7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-v76x2" podUID="038dba4d-abe8-4783-8541-5e36b5853cd7"
Aug 13 09:03:46.006471 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d-shm.mount: Deactivated successfully.
Aug 13 09:03:46.332431 kubelet[2681]: E0813 09:03:46.332210 2681 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Aug 13 09:03:46.333116 kubelet[2681]: E0813 09:03:46.332432 2681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5548a3fa-5234-4c05-aa6f-4b5c715b74b3-whisker-ca-bundle podName:5548a3fa-5234-4c05-aa6f-4b5c715b74b3 nodeName:}" failed. No retries permitted until 2025-08-13 09:03:46.832381855 +0000 UTC m=+39.822397258 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/5548a3fa-5234-4c05-aa6f-4b5c715b74b3-whisker-ca-bundle") pod "whisker-6c4cd9987c-7l522" (UID: "5548a3fa-5234-4c05-aa6f-4b5c715b74b3") : failed to sync configmap cache: timed out waiting for the condition
Aug 13 09:03:46.368693 sshd[3401]: PAM: Permission denied for root from 121.127.231.238
Aug 13 09:03:46.370446 kubelet[2681]: E0813 09:03:46.370394 2681 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Aug 13 09:03:46.370553 kubelet[2681]: E0813 09:03:46.370464 2681 projected.go:194] Error preparing data for projected volume kube-api-access-gf7dx for pod calico-apiserver/calico-apiserver-5c795f975d-rzm28: failed to sync configmap cache: timed out waiting for the condition
Aug 13 09:03:46.370641 kubelet[2681]: E0813 09:03:46.370572 2681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c09c3a84-7d45-4d14-9d0c-95a02288ec6b-kube-api-access-gf7dx podName:c09c3a84-7d45-4d14-9d0c-95a02288ec6b nodeName:}" failed. No retries permitted until 2025-08-13 09:03:46.870545679 +0000 UTC m=+39.860561076 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gf7dx" (UniqueName: "kubernetes.io/projected/c09c3a84-7d45-4d14-9d0c-95a02288ec6b-kube-api-access-gf7dx") pod "calico-apiserver-5c795f975d-rzm28" (UID: "c09c3a84-7d45-4d14-9d0c-95a02288ec6b") : failed to sync configmap cache: timed out waiting for the condition
Aug 13 09:03:46.413713 containerd[1509]: time="2025-08-13T09:03:46.413334121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c795f975d-z5l2b,Uid:16bc31e7-bfd7-43e8-b302-ea6efb7b3ff4,Namespace:calico-apiserver,Attempt:0,}"
Aug 13 09:03:46.454419 kubelet[2681]: I0813 09:03:46.453727 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267"
Aug 13 09:03:46.460015 kubelet[2681]: I0813 09:03:46.459007 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12"
Aug 13 09:03:46.481267 kubelet[2681]: I0813 09:03:46.477278 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6"
Aug 13 09:03:46.481267 kubelet[2681]: I0813 09:03:46.479473 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d"
Aug 13 09:03:46.481267 kubelet[2681]: I0813 09:03:46.480974 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244"
Aug 13 09:03:46.496078 containerd[1509]: time="2025-08-13T09:03:46.495993045Z" level=info msg="StopPodSandbox for \"6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244\""
Aug 13 09:03:46.498218 containerd[1509]: time="2025-08-13T09:03:46.497358596Z" level=info msg="StopPodSandbox for \"f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267\""
Aug 13 09:03:46.499205 containerd[1509]: time="2025-08-13T09:03:46.498560004Z" level=info msg="Ensure that sandbox 6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244 in task-service has been cleanup successfully"
Aug 13 09:03:46.499205 containerd[1509]: time="2025-08-13T09:03:46.498911676Z" level=info msg="Ensure that sandbox f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267 in task-service has been cleanup successfully"
Aug 13 09:03:46.501784 containerd[1509]: time="2025-08-13T09:03:46.501271565Z" level=info msg="StopPodSandbox for \"76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12\""
Aug 13 09:03:46.501784 containerd[1509]: time="2025-08-13T09:03:46.501517070Z" level=info msg="Ensure that sandbox 76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12 in task-service has been cleanup successfully"
Aug 13 09:03:46.502169 containerd[1509]: time="2025-08-13T09:03:46.502127608Z" level=info msg="StopPodSandbox for \"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6\""
Aug 13 09:03:46.502364 containerd[1509]: time="2025-08-13T09:03:46.502328660Z" level=info msg="Ensure that sandbox b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6 in task-service has been cleanup successfully"
Aug 13 09:03:46.505059 containerd[1509]: time="2025-08-13T09:03:46.505009191Z" level=info msg="StopPodSandbox for \"c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d\""
Aug 13 09:03:46.507100 containerd[1509]: time="2025-08-13T09:03:46.506030818Z" level=info msg="Ensure that sandbox c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d in task-service has been cleanup successfully"
Aug 13 09:03:46.624411 containerd[1509]: time="2025-08-13T09:03:46.624207731Z" level=error msg="Failed to destroy network for sandbox \"5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 09:03:46.625199 containerd[1509]: time="2025-08-13T09:03:46.624871333Z" level=error msg="encountered an error cleaning up failed sandbox \"5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 09:03:46.625199 containerd[1509]: time="2025-08-13T09:03:46.624944064Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c795f975d-z5l2b,Uid:16bc31e7-bfd7-43e8-b302-ea6efb7b3ff4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 09:03:46.627452 kubelet[2681]: E0813 09:03:46.627382 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 09:03:46.627611 kubelet[2681]: E0813 09:03:46.627495 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c795f975d-z5l2b"
Aug 13 09:03:46.627611 kubelet[2681]: E0813 09:03:46.627539 2681 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c795f975d-z5l2b"
Aug 13 09:03:46.627884 kubelet[2681]: E0813 09:03:46.627627 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c795f975d-z5l2b_calico-apiserver(16bc31e7-bfd7-43e8-b302-ea6efb7b3ff4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c795f975d-z5l2b_calico-apiserver(16bc31e7-bfd7-43e8-b302-ea6efb7b3ff4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c795f975d-z5l2b" podUID="16bc31e7-bfd7-43e8-b302-ea6efb7b3ff4"
Aug 13 09:03:46.643920 containerd[1509]: time="2025-08-13T09:03:46.643815384Z" level=error msg="StopPodSandbox for \"6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244\" failed" error="failed to destroy network for sandbox \"6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 09:03:46.644442 kubelet[2681]: E0813 09:03:46.644174 2681
log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244" Aug 13 09:03:46.644442 kubelet[2681]: E0813 09:03:46.644274 2681 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244"} Aug 13 09:03:46.644442 kubelet[2681]: E0813 09:03:46.644391 2681 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"038dba4d-abe8-4783-8541-5e36b5853cd7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 09:03:46.644442 kubelet[2681]: E0813 09:03:46.644427 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"038dba4d-abe8-4783-8541-5e36b5853cd7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-v76x2" podUID="038dba4d-abe8-4783-8541-5e36b5853cd7" Aug 13 09:03:46.656305 containerd[1509]: time="2025-08-13T09:03:46.656246290Z" level=error 
msg="StopPodSandbox for \"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6\" failed" error="failed to destroy network for sandbox \"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 09:03:46.657299 kubelet[2681]: E0813 09:03:46.657236 2681 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6" Aug 13 09:03:46.657572 kubelet[2681]: E0813 09:03:46.657310 2681 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6"} Aug 13 09:03:46.657572 kubelet[2681]: E0813 09:03:46.657357 2681 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4fef4dc0-3721-4011-811d-4e894024c3f2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 09:03:46.657572 kubelet[2681]: E0813 09:03:46.657387 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4fef4dc0-3721-4011-811d-4e894024c3f2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-588668cdb7-gwwpt" podUID="4fef4dc0-3721-4011-811d-4e894024c3f2" Aug 13 09:03:46.660397 containerd[1509]: time="2025-08-13T09:03:46.660310192Z" level=error msg="StopPodSandbox for \"76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12\" failed" error="failed to destroy network for sandbox \"76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 09:03:46.663012 kubelet[2681]: E0813 09:03:46.662819 2681 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" Aug 13 09:03:46.663012 kubelet[2681]: E0813 09:03:46.662888 2681 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12"} Aug 13 09:03:46.663012 kubelet[2681]: E0813 09:03:46.662926 2681 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7136f5b0-24ac-4259-bcbd-579709672a99\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 09:03:46.663012 kubelet[2681]: E0813 09:03:46.662957 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7136f5b0-24ac-4259-bcbd-579709672a99\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-54ftd" podUID="7136f5b0-24ac-4259-bcbd-579709672a99" Aug 13 09:03:46.666191 containerd[1509]: time="2025-08-13T09:03:46.665990576Z" level=error msg="StopPodSandbox for \"c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d\" failed" error="failed to destroy network for sandbox \"c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 09:03:46.666400 kubelet[2681]: E0813 09:03:46.666181 2681 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" Aug 13 09:03:46.666400 kubelet[2681]: E0813 09:03:46.666230 2681 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d"} Aug 13 09:03:46.666400 kubelet[2681]: E0813 09:03:46.666270 2681 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"acc500b1-7473-42bd-b48d-00d555107b78\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 09:03:46.666400 kubelet[2681]: E0813 09:03:46.666301 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"acc500b1-7473-42bd-b48d-00d555107b78\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g96xv" podUID="acc500b1-7473-42bd-b48d-00d555107b78" Aug 13 09:03:46.673031 containerd[1509]: time="2025-08-13T09:03:46.672989151Z" level=error msg="StopPodSandbox for \"f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267\" failed" error="failed to destroy network for sandbox \"f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 09:03:46.673326 kubelet[2681]: E0813 09:03:46.673261 2681 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" Aug 13 09:03:46.673470 kubelet[2681]: E0813 09:03:46.673336 2681 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267"} Aug 13 09:03:46.673470 kubelet[2681]: E0813 09:03:46.673373 2681 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"58f120ab-dea2-40e9-9eb7-22d71f6b8425\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 09:03:46.673470 kubelet[2681]: E0813 09:03:46.673401 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"58f120ab-dea2-40e9-9eb7-22d71f6b8425\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wpdgx" podUID="58f120ab-dea2-40e9-9eb7-22d71f6b8425" Aug 13 09:03:46.787654 sshd[3700]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=121.127.231.238 user=root Aug 13 09:03:46.961620 containerd[1509]: time="2025-08-13T09:03:46.960752801Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c795f975d-rzm28,Uid:c09c3a84-7d45-4d14-9d0c-95a02288ec6b,Namespace:calico-apiserver,Attempt:0,}" Aug 13 09:03:47.003118 containerd[1509]: time="2025-08-13T09:03:47.002507924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c4cd9987c-7l522,Uid:5548a3fa-5234-4c05-aa6f-4b5c715b74b3,Namespace:calico-system,Attempt:0,}" Aug 13 09:03:47.005787 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256-shm.mount: Deactivated successfully. Aug 13 09:03:47.100972 containerd[1509]: time="2025-08-13T09:03:47.100855077Z" level=error msg="Failed to destroy network for sandbox \"effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 09:03:47.101776 containerd[1509]: time="2025-08-13T09:03:47.101381801Z" level=error msg="encountered an error cleaning up failed sandbox \"effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 09:03:47.101776 containerd[1509]: time="2025-08-13T09:03:47.101445286Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c795f975d-rzm28,Uid:c09c3a84-7d45-4d14-9d0c-95a02288ec6b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 
13 09:03:47.108125 kubelet[2681]: E0813 09:03:47.104302 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 09:03:47.108125 kubelet[2681]: E0813 09:03:47.104388 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c795f975d-rzm28" Aug 13 09:03:47.108125 kubelet[2681]: E0813 09:03:47.104427 2681 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c795f975d-rzm28" Aug 13 09:03:47.112051 kubelet[2681]: E0813 09:03:47.104516 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c795f975d-rzm28_calico-apiserver(c09c3a84-7d45-4d14-9d0c-95a02288ec6b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c795f975d-rzm28_calico-apiserver(c09c3a84-7d45-4d14-9d0c-95a02288ec6b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c795f975d-rzm28" podUID="c09c3a84-7d45-4d14-9d0c-95a02288ec6b" Aug 13 09:03:47.110544 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef-shm.mount: Deactivated successfully. Aug 13 09:03:47.119673 containerd[1509]: time="2025-08-13T09:03:47.119473905Z" level=error msg="Failed to destroy network for sandbox \"84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 09:03:47.125577 containerd[1509]: time="2025-08-13T09:03:47.120102366Z" level=error msg="encountered an error cleaning up failed sandbox \"84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 09:03:47.125577 containerd[1509]: time="2025-08-13T09:03:47.120166166Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c4cd9987c-7l522,Uid:5548a3fa-5234-4c05-aa6f-4b5c715b74b3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 09:03:47.124777 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff-shm.mount: Deactivated successfully. Aug 13 09:03:47.126623 kubelet[2681]: E0813 09:03:47.120435 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 09:03:47.126623 kubelet[2681]: E0813 09:03:47.120508 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c4cd9987c-7l522" Aug 13 09:03:47.126623 kubelet[2681]: E0813 09:03:47.120544 2681 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c4cd9987c-7l522" Aug 13 09:03:47.126817 kubelet[2681]: E0813 09:03:47.120604 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c4cd9987c-7l522_calico-system(5548a3fa-5234-4c05-aa6f-4b5c715b74b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c4cd9987c-7l522_calico-system(5548a3fa-5234-4c05-aa6f-4b5c715b74b3)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c4cd9987c-7l522" podUID="5548a3fa-5234-4c05-aa6f-4b5c715b74b3" Aug 13 09:03:47.486289 kubelet[2681]: I0813 09:03:47.484627 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" Aug 13 09:03:47.487363 containerd[1509]: time="2025-08-13T09:03:47.485923535Z" level=info msg="StopPodSandbox for \"5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256\"" Aug 13 09:03:47.487363 containerd[1509]: time="2025-08-13T09:03:47.486182514Z" level=info msg="Ensure that sandbox 5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256 in task-service has been cleanup successfully" Aug 13 09:03:47.491353 kubelet[2681]: I0813 09:03:47.490285 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" Aug 13 09:03:47.492895 containerd[1509]: time="2025-08-13T09:03:47.492375605Z" level=info msg="StopPodSandbox for \"84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff\"" Aug 13 09:03:47.492895 containerd[1509]: time="2025-08-13T09:03:47.492580306Z" level=info msg="Ensure that sandbox 84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff in task-service has been cleanup successfully" Aug 13 09:03:47.496354 kubelet[2681]: I0813 09:03:47.496325 2681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef" Aug 13 09:03:47.498728 containerd[1509]: time="2025-08-13T09:03:47.498690667Z" level=info msg="StopPodSandbox for 
\"effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef\"" Aug 13 09:03:47.499153 containerd[1509]: time="2025-08-13T09:03:47.499099576Z" level=info msg="Ensure that sandbox effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef in task-service has been cleanup successfully" Aug 13 09:03:47.555919 containerd[1509]: time="2025-08-13T09:03:47.555853378Z" level=error msg="StopPodSandbox for \"effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef\" failed" error="failed to destroy network for sandbox \"effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 09:03:47.556557 kubelet[2681]: E0813 09:03:47.556358 2681 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef" Aug 13 09:03:47.556557 kubelet[2681]: E0813 09:03:47.556419 2681 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef"} Aug 13 09:03:47.556557 kubelet[2681]: E0813 09:03:47.556474 2681 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c09c3a84-7d45-4d14-9d0c-95a02288ec6b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 09:03:47.556557 kubelet[2681]: E0813 09:03:47.556511 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c09c3a84-7d45-4d14-9d0c-95a02288ec6b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c795f975d-rzm28" podUID="c09c3a84-7d45-4d14-9d0c-95a02288ec6b" Aug 13 09:03:47.561102 containerd[1509]: time="2025-08-13T09:03:47.560511106Z" level=error msg="StopPodSandbox for \"84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff\" failed" error="failed to destroy network for sandbox \"84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 09:03:47.561195 kubelet[2681]: E0813 09:03:47.560705 2681 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" Aug 13 09:03:47.561195 kubelet[2681]: E0813 09:03:47.560781 2681 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff"} 
Aug 13 09:03:47.561195 kubelet[2681]: E0813 09:03:47.560860 2681 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5548a3fa-5234-4c05-aa6f-4b5c715b74b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 09:03:47.561195 kubelet[2681]: E0813 09:03:47.560895 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5548a3fa-5234-4c05-aa6f-4b5c715b74b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c4cd9987c-7l522" podUID="5548a3fa-5234-4c05-aa6f-4b5c715b74b3" Aug 13 09:03:47.569476 containerd[1509]: time="2025-08-13T09:03:47.569428411Z" level=error msg="StopPodSandbox for \"5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256\" failed" error="failed to destroy network for sandbox \"5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 09:03:47.570376 kubelet[2681]: E0813 09:03:47.570340 2681 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" Aug 13 09:03:47.570493 kubelet[2681]: E0813 09:03:47.570387 2681 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256"} Aug 13 09:03:47.570493 kubelet[2681]: E0813 09:03:47.570428 2681 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"16bc31e7-bfd7-43e8-b302-ea6efb7b3ff4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 09:03:47.570649 kubelet[2681]: E0813 09:03:47.570485 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"16bc31e7-bfd7-43e8-b302-ea6efb7b3ff4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c795f975d-z5l2b" podUID="16bc31e7-bfd7-43e8-b302-ea6efb7b3ff4" Aug 13 09:03:47.632765 kubelet[2681]: I0813 09:03:47.632709 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 09:03:49.119251 sshd[3401]: PAM: Permission denied for root from 121.127.231.238 Aug 13 09:03:49.535664 sshd[3808]: pam_unix(sshd:auth): authentication failure; logname= uid=0 
euid=0 tty=ssh ruser= rhost=121.127.231.238 user=root Aug 13 09:03:51.280584 sshd[3401]: PAM: Permission denied for root from 121.127.231.238 Aug 13 09:03:51.488227 sshd[3401]: Received disconnect from 121.127.231.238 port 54822:11: [preauth] Aug 13 09:03:51.488227 sshd[3401]: Disconnected from authenticating user root 121.127.231.238 port 54822 [preauth] Aug 13 09:03:51.496972 systemd[1]: sshd@9-10.230.18.154:22-121.127.231.238:54822.service: Deactivated successfully. Aug 13 09:03:51.702588 systemd[1]: Started sshd@10-10.230.18.154:22-121.127.231.238:12104.service - OpenSSH per-connection server daemon (121.127.231.238:12104). Aug 13 09:03:53.259692 sshd[3818]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=121.127.231.238 user=root Aug 13 09:03:55.888412 sshd[3812]: PAM: Permission denied for root from 121.127.231.238 Aug 13 09:03:56.283683 sshd[3819]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=121.127.231.238 user=root Aug 13 09:03:57.233225 containerd[1509]: time="2025-08-13T09:03:57.230906248Z" level=info msg="StopPodSandbox for \"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6\"" Aug 13 09:03:57.369329 containerd[1509]: time="2025-08-13T09:03:57.367899550Z" level=error msg="StopPodSandbox for \"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6\" failed" error="failed to destroy network for sandbox \"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 09:03:57.370695 kubelet[2681]: E0813 09:03:57.370023 2681 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6\": plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6" Aug 13 09:03:57.373131 kubelet[2681]: E0813 09:03:57.370793 2681 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6"} Aug 13 09:03:57.373131 kubelet[2681]: E0813 09:03:57.371412 2681 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4fef4dc0-3721-4011-811d-4e894024c3f2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 09:03:57.373131 kubelet[2681]: E0813 09:03:57.371477 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4fef4dc0-3721-4011-811d-4e894024c3f2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-588668cdb7-gwwpt" podUID="4fef4dc0-3721-4011-811d-4e894024c3f2" Aug 13 09:03:57.858968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1454174514.mount: Deactivated successfully. 
Aug 13 09:03:57.956951 containerd[1509]: time="2025-08-13T09:03:57.954883156Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 09:03:57.967196 containerd[1509]: time="2025-08-13T09:03:57.966960318Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 12.504641498s" Aug 13 09:03:57.967196 containerd[1509]: time="2025-08-13T09:03:57.967031336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 09:03:57.977820 containerd[1509]: time="2025-08-13T09:03:57.977597561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 09:03:57.988799 sshd[3812]: PAM: Permission denied for root from 121.127.231.238 Aug 13 09:03:58.050637 containerd[1509]: time="2025-08-13T09:03:58.050263417Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 09:03:58.051423 containerd[1509]: time="2025-08-13T09:03:58.051138455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 09:03:58.063987 containerd[1509]: time="2025-08-13T09:03:58.063493149Z" level=info msg="CreateContainer within sandbox \"e74c0e508aa16e1164b36a3b5a4ae537006283e4fae7596d5ce4cf3058bef4cb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 
09:03:58.139163 containerd[1509]: time="2025-08-13T09:03:58.138471832Z" level=info msg="CreateContainer within sandbox \"e74c0e508aa16e1164b36a3b5a4ae537006283e4fae7596d5ce4cf3058bef4cb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5ee41ee7555ec48211abdce23f9a6f8f3236520d8a871e592a4a55eada14838d\"" Aug 13 09:03:58.145599 containerd[1509]: time="2025-08-13T09:03:58.143648875Z" level=info msg="StartContainer for \"5ee41ee7555ec48211abdce23f9a6f8f3236520d8a871e592a4a55eada14838d\"" Aug 13 09:03:58.328425 systemd[1]: Started cri-containerd-5ee41ee7555ec48211abdce23f9a6f8f3236520d8a871e592a4a55eada14838d.scope - libcontainer container 5ee41ee7555ec48211abdce23f9a6f8f3236520d8a871e592a4a55eada14838d. Aug 13 09:03:58.384432 sshd[3838]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=121.127.231.238 user=root Aug 13 09:03:58.421721 containerd[1509]: time="2025-08-13T09:03:58.420995211Z" level=info msg="StartContainer for \"5ee41ee7555ec48211abdce23f9a6f8f3236520d8a871e592a4a55eada14838d\" returns successfully" Aug 13 09:03:58.675537 kubelet[2681]: I0813 09:03:58.597777 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4z2mc" podStartSLOduration=1.395497832 podStartE2EDuration="27.573395325s" podCreationTimestamp="2025-08-13 09:03:31 +0000 UTC" firstStartedPulling="2025-08-13 09:03:31.790929094 +0000 UTC m=+24.780944490" lastFinishedPulling="2025-08-13 09:03:57.968826592 +0000 UTC m=+50.958841983" observedRunningTime="2025-08-13 09:03:58.570138129 +0000 UTC m=+51.560153540" watchObservedRunningTime="2025-08-13 09:03:58.573395325 +0000 UTC m=+51.563410730" Aug 13 09:03:58.902200 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 09:03:58.902976 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Aug 13 09:03:59.147723 containerd[1509]: time="2025-08-13T09:03:59.147639306Z" level=info msg="StopPodSandbox for \"84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff\"" Aug 13 09:03:59.227379 containerd[1509]: time="2025-08-13T09:03:59.227325234Z" level=info msg="StopPodSandbox for \"6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244\"" Aug 13 09:03:59.746526 containerd[1509]: 2025-08-13 09:03:59.358 [INFO][3937] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244" Aug 13 09:03:59.746526 containerd[1509]: 2025-08-13 09:03:59.358 [INFO][3937] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244" iface="eth0" netns="/var/run/netns/cni-f558f266-ffe4-f70e-b54c-d0e8c2772456" Aug 13 09:03:59.746526 containerd[1509]: 2025-08-13 09:03:59.359 [INFO][3937] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244" iface="eth0" netns="/var/run/netns/cni-f558f266-ffe4-f70e-b54c-d0e8c2772456" Aug 13 09:03:59.746526 containerd[1509]: 2025-08-13 09:03:59.360 [INFO][3937] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244" iface="eth0" netns="/var/run/netns/cni-f558f266-ffe4-f70e-b54c-d0e8c2772456" Aug 13 09:03:59.746526 containerd[1509]: 2025-08-13 09:03:59.360 [INFO][3937] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244" Aug 13 09:03:59.746526 containerd[1509]: 2025-08-13 09:03:59.360 [INFO][3937] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244" Aug 13 09:03:59.746526 containerd[1509]: 2025-08-13 09:03:59.696 [INFO][3947] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244" HandleID="k8s-pod-network.6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244" Workload="srv--cz57v.gb1.brightbox.com-k8s-goldmane--768f4c5c69--v76x2-eth0" Aug 13 09:03:59.746526 containerd[1509]: 2025-08-13 09:03:59.698 [INFO][3947] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 09:03:59.746526 containerd[1509]: 2025-08-13 09:03:59.699 [INFO][3947] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 09:03:59.746526 containerd[1509]: 2025-08-13 09:03:59.724 [WARNING][3947] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244" HandleID="k8s-pod-network.6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244" Workload="srv--cz57v.gb1.brightbox.com-k8s-goldmane--768f4c5c69--v76x2-eth0" Aug 13 09:03:59.746526 containerd[1509]: 2025-08-13 09:03:59.724 [INFO][3947] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244" HandleID="k8s-pod-network.6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244" Workload="srv--cz57v.gb1.brightbox.com-k8s-goldmane--768f4c5c69--v76x2-eth0" Aug 13 09:03:59.746526 containerd[1509]: 2025-08-13 09:03:59.732 [INFO][3947] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 09:03:59.746526 containerd[1509]: 2025-08-13 09:03:59.738 [INFO][3937] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244" Aug 13 09:03:59.746526 containerd[1509]: time="2025-08-13T09:03:59.746475439Z" level=info msg="TearDown network for sandbox \"6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244\" successfully" Aug 13 09:03:59.746526 containerd[1509]: time="2025-08-13T09:03:59.746539123Z" level=info msg="StopPodSandbox for \"6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244\" returns successfully" Aug 13 09:03:59.763311 containerd[1509]: time="2025-08-13T09:03:59.756882643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-v76x2,Uid:038dba4d-abe8-4783-8541-5e36b5853cd7,Namespace:calico-system,Attempt:1,}" Aug 13 09:03:59.755992 systemd[1]: run-netns-cni\x2df558f266\x2dffe4\x2df70e\x2db54c\x2dd0e8c2772456.mount: Deactivated successfully. 
Aug 13 09:03:59.772834 containerd[1509]: 2025-08-13 09:03:59.314 [INFO][3922] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" Aug 13 09:03:59.772834 containerd[1509]: 2025-08-13 09:03:59.316 [INFO][3922] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" iface="eth0" netns="/var/run/netns/cni-985b4966-c25f-a30f-3069-b911a99a1f7d" Aug 13 09:03:59.772834 containerd[1509]: 2025-08-13 09:03:59.317 [INFO][3922] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" iface="eth0" netns="/var/run/netns/cni-985b4966-c25f-a30f-3069-b911a99a1f7d" Aug 13 09:03:59.772834 containerd[1509]: 2025-08-13 09:03:59.319 [INFO][3922] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" iface="eth0" netns="/var/run/netns/cni-985b4966-c25f-a30f-3069-b911a99a1f7d" Aug 13 09:03:59.772834 containerd[1509]: 2025-08-13 09:03:59.320 [INFO][3922] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" Aug 13 09:03:59.772834 containerd[1509]: 2025-08-13 09:03:59.320 [INFO][3922] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" Aug 13 09:03:59.772834 containerd[1509]: 2025-08-13 09:03:59.695 [INFO][3944] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" HandleID="k8s-pod-network.84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" Workload="srv--cz57v.gb1.brightbox.com-k8s-whisker--6c4cd9987c--7l522-eth0" Aug 13 09:03:59.772834 containerd[1509]: 2025-08-13 09:03:59.699 
[INFO][3944] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 09:03:59.772834 containerd[1509]: 2025-08-13 09:03:59.732 [INFO][3944] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 09:03:59.772834 containerd[1509]: 2025-08-13 09:03:59.760 [WARNING][3944] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" HandleID="k8s-pod-network.84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" Workload="srv--cz57v.gb1.brightbox.com-k8s-whisker--6c4cd9987c--7l522-eth0" Aug 13 09:03:59.772834 containerd[1509]: 2025-08-13 09:03:59.760 [INFO][3944] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" HandleID="k8s-pod-network.84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" Workload="srv--cz57v.gb1.brightbox.com-k8s-whisker--6c4cd9987c--7l522-eth0" Aug 13 09:03:59.772834 containerd[1509]: 2025-08-13 09:03:59.766 [INFO][3944] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 09:03:59.772834 containerd[1509]: 2025-08-13 09:03:59.770 [INFO][3922] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" Aug 13 09:03:59.785233 containerd[1509]: time="2025-08-13T09:03:59.785163365Z" level=info msg="TearDown network for sandbox \"84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff\" successfully" Aug 13 09:03:59.785448 containerd[1509]: time="2025-08-13T09:03:59.785417555Z" level=info msg="StopPodSandbox for \"84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff\" returns successfully" Aug 13 09:03:59.786226 systemd[1]: run-netns-cni\x2d985b4966\x2dc25f\x2da30f\x2d3069\x2db911a99a1f7d.mount: Deactivated successfully. 
Aug 13 09:03:59.983272 kubelet[2681]: I0813 09:03:59.983222 2681 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5548a3fa-5234-4c05-aa6f-4b5c715b74b3-whisker-ca-bundle\") pod \"5548a3fa-5234-4c05-aa6f-4b5c715b74b3\" (UID: \"5548a3fa-5234-4c05-aa6f-4b5c715b74b3\") " Aug 13 09:03:59.984008 kubelet[2681]: I0813 09:03:59.983301 2681 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4f5z\" (UniqueName: \"kubernetes.io/projected/5548a3fa-5234-4c05-aa6f-4b5c715b74b3-kube-api-access-w4f5z\") pod \"5548a3fa-5234-4c05-aa6f-4b5c715b74b3\" (UID: \"5548a3fa-5234-4c05-aa6f-4b5c715b74b3\") " Aug 13 09:03:59.984008 kubelet[2681]: I0813 09:03:59.983343 2681 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5548a3fa-5234-4c05-aa6f-4b5c715b74b3-whisker-backend-key-pair\") pod \"5548a3fa-5234-4c05-aa6f-4b5c715b74b3\" (UID: \"5548a3fa-5234-4c05-aa6f-4b5c715b74b3\") " Aug 13 09:03:59.998737 kubelet[2681]: I0813 09:03:59.996970 2681 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5548a3fa-5234-4c05-aa6f-4b5c715b74b3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "5548a3fa-5234-4c05-aa6f-4b5c715b74b3" (UID: "5548a3fa-5234-4c05-aa6f-4b5c715b74b3"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 09:04:00.011100 kubelet[2681]: I0813 09:04:00.010585 2681 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5548a3fa-5234-4c05-aa6f-4b5c715b74b3-kube-api-access-w4f5z" (OuterVolumeSpecName: "kube-api-access-w4f5z") pod "5548a3fa-5234-4c05-aa6f-4b5c715b74b3" (UID: "5548a3fa-5234-4c05-aa6f-4b5c715b74b3"). InnerVolumeSpecName "kube-api-access-w4f5z". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 09:04:00.011986 systemd[1]: var-lib-kubelet-pods-5548a3fa\x2d5234\x2d4c05\x2daa6f\x2d4b5c715b74b3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw4f5z.mount: Deactivated successfully. Aug 13 09:04:00.017135 kubelet[2681]: I0813 09:04:00.015492 2681 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5548a3fa-5234-4c05-aa6f-4b5c715b74b3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "5548a3fa-5234-4c05-aa6f-4b5c715b74b3" (UID: "5548a3fa-5234-4c05-aa6f-4b5c715b74b3"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 09:04:00.019377 systemd[1]: var-lib-kubelet-pods-5548a3fa\x2d5234\x2d4c05\x2daa6f\x2d4b5c715b74b3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 09:04:00.087598 kubelet[2681]: I0813 09:04:00.087450 2681 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5548a3fa-5234-4c05-aa6f-4b5c715b74b3-whisker-backend-key-pair\") on node \"srv-cz57v.gb1.brightbox.com\" DevicePath \"\"" Aug 13 09:04:00.087598 kubelet[2681]: I0813 09:04:00.087523 2681 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5548a3fa-5234-4c05-aa6f-4b5c715b74b3-whisker-ca-bundle\") on node \"srv-cz57v.gb1.brightbox.com\" DevicePath \"\"" Aug 13 09:04:00.087598 kubelet[2681]: I0813 09:04:00.087547 2681 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w4f5z\" (UniqueName: \"kubernetes.io/projected/5548a3fa-5234-4c05-aa6f-4b5c715b74b3-kube-api-access-w4f5z\") on node \"srv-cz57v.gb1.brightbox.com\" DevicePath \"\"" Aug 13 09:04:00.215049 systemd-networkd[1428]: cali2a7eb558e09: Link UP Aug 13 09:04:00.215581 systemd-networkd[1428]: cali2a7eb558e09: Gained carrier Aug 13 
09:04:00.230420 containerd[1509]: time="2025-08-13T09:04:00.230336137Z" level=info msg="StopPodSandbox for \"effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef\"" Aug 13 09:04:00.235047 containerd[1509]: time="2025-08-13T09:04:00.234998489Z" level=info msg="StopPodSandbox for \"f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267\"" Aug 13 09:04:00.244372 containerd[1509]: time="2025-08-13T09:04:00.244048095Z" level=info msg="StopPodSandbox for \"c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d\"" Aug 13 09:04:00.277154 containerd[1509]: 2025-08-13 09:03:59.930 [INFO][3979] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 09:04:00.277154 containerd[1509]: 2025-08-13 09:03:59.968 [INFO][3979] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--cz57v.gb1.brightbox.com-k8s-goldmane--768f4c5c69--v76x2-eth0 goldmane-768f4c5c69- calico-system 038dba4d-abe8-4783-8541-5e36b5853cd7 922 0 2025-08-13 09:03:30 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-cz57v.gb1.brightbox.com goldmane-768f4c5c69-v76x2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2a7eb558e09 [] [] }} ContainerID="658eac59474d3f7c4bb0ce3d858d9cefc34199370ed0a50067a34a4698f7d7c9" Namespace="calico-system" Pod="goldmane-768f4c5c69-v76x2" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-goldmane--768f4c5c69--v76x2-" Aug 13 09:04:00.277154 containerd[1509]: 2025-08-13 09:03:59.968 [INFO][3979] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="658eac59474d3f7c4bb0ce3d858d9cefc34199370ed0a50067a34a4698f7d7c9" Namespace="calico-system" Pod="goldmane-768f4c5c69-v76x2" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-goldmane--768f4c5c69--v76x2-eth0" Aug 13 
09:04:00.277154 containerd[1509]: 2025-08-13 09:04:00.050 [INFO][3993] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="658eac59474d3f7c4bb0ce3d858d9cefc34199370ed0a50067a34a4698f7d7c9" HandleID="k8s-pod-network.658eac59474d3f7c4bb0ce3d858d9cefc34199370ed0a50067a34a4698f7d7c9" Workload="srv--cz57v.gb1.brightbox.com-k8s-goldmane--768f4c5c69--v76x2-eth0" Aug 13 09:04:00.277154 containerd[1509]: 2025-08-13 09:04:00.051 [INFO][3993] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="658eac59474d3f7c4bb0ce3d858d9cefc34199370ed0a50067a34a4698f7d7c9" HandleID="k8s-pod-network.658eac59474d3f7c4bb0ce3d858d9cefc34199370ed0a50067a34a4698f7d7c9" Workload="srv--cz57v.gb1.brightbox.com-k8s-goldmane--768f4c5c69--v76x2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-cz57v.gb1.brightbox.com", "pod":"goldmane-768f4c5c69-v76x2", "timestamp":"2025-08-13 09:04:00.050750671 +0000 UTC"}, Hostname:"srv-cz57v.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 09:04:00.277154 containerd[1509]: 2025-08-13 09:04:00.051 [INFO][3993] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 09:04:00.277154 containerd[1509]: 2025-08-13 09:04:00.051 [INFO][3993] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 09:04:00.277154 containerd[1509]: 2025-08-13 09:04:00.051 [INFO][3993] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-cz57v.gb1.brightbox.com' Aug 13 09:04:00.277154 containerd[1509]: 2025-08-13 09:04:00.063 [INFO][3993] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.658eac59474d3f7c4bb0ce3d858d9cefc34199370ed0a50067a34a4698f7d7c9" host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:00.277154 containerd[1509]: 2025-08-13 09:04:00.078 [INFO][3993] ipam/ipam.go 394: Looking up existing affinities for host host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:00.277154 containerd[1509]: 2025-08-13 09:04:00.114 [INFO][3993] ipam/ipam.go 511: Trying affinity for 192.168.114.192/26 host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:00.277154 containerd[1509]: 2025-08-13 09:04:00.119 [INFO][3993] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.192/26 host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:00.277154 containerd[1509]: 2025-08-13 09:04:00.122 [INFO][3993] ipam/ipam.go 163: The referenced block doesn't exist, trying to create it cidr=192.168.114.192/26 host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:00.277154 containerd[1509]: 2025-08-13 09:04:00.128 [INFO][3993] ipam/ipam.go 170: Wrote affinity as pending cidr=192.168.114.192/26 host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:00.277154 containerd[1509]: 2025-08-13 09:04:00.137 [INFO][3993] ipam/ipam.go 179: Attempting to claim the block cidr=192.168.114.192/26 host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:00.277154 containerd[1509]: 2025-08-13 09:04:00.137 [INFO][3993] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="srv-cz57v.gb1.brightbox.com" subnet=192.168.114.192/26 Aug 13 09:04:00.277154 containerd[1509]: 2025-08-13 09:04:00.145 [INFO][3993] ipam/ipam_block_reader_writer.go 231: The block already exists, getting it from data store affinityType="host" host="srv-cz57v.gb1.brightbox.com" 
subnet=192.168.114.192/26 Aug 13 09:04:00.277154 containerd[1509]: 2025-08-13 09:04:00.149 [INFO][3993] ipam/ipam_block_reader_writer.go 247: Block is already claimed by this host, confirm the affinity affinityType="host" host="srv-cz57v.gb1.brightbox.com" subnet=192.168.114.192/26 Aug 13 09:04:00.277154 containerd[1509]: 2025-08-13 09:04:00.150 [INFO][3993] ipam/ipam_block_reader_writer.go 283: Confirming affinity host="srv-cz57v.gb1.brightbox.com" subnet=192.168.114.192/26 Aug 13 09:04:00.277154 containerd[1509]: 2025-08-13 09:04:00.154 [ERROR][3993] ipam/customresource.go 184: Error updating resource Key=BlockAffinity(srv-cz57v.gb1.brightbox.com-192-168-114-192-26) Name="srv-cz57v.gb1.brightbox.com-192-168-114-192-26" Resource="BlockAffinities" Value=&v3.BlockAffinity{TypeMeta:v1.TypeMeta{Kind:"BlockAffinity", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"srv-cz57v.gb1.brightbox.com-192-168-114-192-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.BlockAffinitySpec{State:"confirmed", Node:"srv-cz57v.gb1.brightbox.com", Type:"host", CIDR:"192.168.114.192/26", Deleted:"false"}} error=Operation cannot be fulfilled on blockaffinities.crd.projectcalico.org "srv-cz57v.gb1.brightbox.com-192-168-114-192-26": the object has been modified; please apply your changes to the latest version and try again Aug 13 09:04:00.277154 containerd[1509]: 2025-08-13 09:04:00.158 [INFO][3993] ipam/ipam_block_reader_writer.go 292: Affinity is already confirmed host="srv-cz57v.gb1.brightbox.com" subnet=192.168.114.192/26 Aug 13 09:04:00.281827 containerd[1509]: 2025-08-13 09:04:00.158 [INFO][3993] 
ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.114.192/26 handle="k8s-pod-network.658eac59474d3f7c4bb0ce3d858d9cefc34199370ed0a50067a34a4698f7d7c9" host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:00.281827 containerd[1509]: 2025-08-13 09:04:00.163 [INFO][3993] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.658eac59474d3f7c4bb0ce3d858d9cefc34199370ed0a50067a34a4698f7d7c9 Aug 13 09:04:00.281827 containerd[1509]: 2025-08-13 09:04:00.173 [INFO][3993] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.114.192/26 handle="k8s-pod-network.658eac59474d3f7c4bb0ce3d858d9cefc34199370ed0a50067a34a4698f7d7c9" host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:00.281827 containerd[1509]: 2025-08-13 09:04:00.181 [INFO][3993] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.114.192/26] block=192.168.114.192/26 handle="k8s-pod-network.658eac59474d3f7c4bb0ce3d858d9cefc34199370ed0a50067a34a4698f7d7c9" host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:00.281827 containerd[1509]: 2025-08-13 09:04:00.181 [INFO][3993] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.192/26] handle="k8s-pod-network.658eac59474d3f7c4bb0ce3d858d9cefc34199370ed0a50067a34a4698f7d7c9" host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:00.281827 containerd[1509]: 2025-08-13 09:04:00.181 [INFO][3993] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 09:04:00.281827 containerd[1509]: 2025-08-13 09:04:00.181 [INFO][3993] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.192/26] IPv6=[] ContainerID="658eac59474d3f7c4bb0ce3d858d9cefc34199370ed0a50067a34a4698f7d7c9" HandleID="k8s-pod-network.658eac59474d3f7c4bb0ce3d858d9cefc34199370ed0a50067a34a4698f7d7c9" Workload="srv--cz57v.gb1.brightbox.com-k8s-goldmane--768f4c5c69--v76x2-eth0"
Aug 13 09:04:00.281827 containerd[1509]: 2025-08-13 09:04:00.185 [INFO][3979] cni-plugin/k8s.go 418: Populated endpoint ContainerID="658eac59474d3f7c4bb0ce3d858d9cefc34199370ed0a50067a34a4698f7d7c9" Namespace="calico-system" Pod="goldmane-768f4c5c69-v76x2" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-goldmane--768f4c5c69--v76x2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cz57v.gb1.brightbox.com-k8s-goldmane--768f4c5c69--v76x2-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"038dba4d-abe8-4783-8541-5e36b5853cd7", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 9, 3, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cz57v.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-768f4c5c69-v76x2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.192/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2a7eb558e09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 09:04:00.281827 containerd[1509]: 2025-08-13 09:04:00.185 [INFO][3979] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.192/32] ContainerID="658eac59474d3f7c4bb0ce3d858d9cefc34199370ed0a50067a34a4698f7d7c9" Namespace="calico-system" Pod="goldmane-768f4c5c69-v76x2" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-goldmane--768f4c5c69--v76x2-eth0"
Aug 13 09:04:00.281827 containerd[1509]: 2025-08-13 09:04:00.185 [INFO][3979] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2a7eb558e09 ContainerID="658eac59474d3f7c4bb0ce3d858d9cefc34199370ed0a50067a34a4698f7d7c9" Namespace="calico-system" Pod="goldmane-768f4c5c69-v76x2" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-goldmane--768f4c5c69--v76x2-eth0"
Aug 13 09:04:00.281827 containerd[1509]: 2025-08-13 09:04:00.210 [INFO][3979] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="658eac59474d3f7c4bb0ce3d858d9cefc34199370ed0a50067a34a4698f7d7c9" Namespace="calico-system" Pod="goldmane-768f4c5c69-v76x2" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-goldmane--768f4c5c69--v76x2-eth0"
Aug 13 09:04:00.282796 containerd[1509]: 2025-08-13 09:04:00.212 [INFO][3979] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="658eac59474d3f7c4bb0ce3d858d9cefc34199370ed0a50067a34a4698f7d7c9" Namespace="calico-system" Pod="goldmane-768f4c5c69-v76x2" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-goldmane--768f4c5c69--v76x2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cz57v.gb1.brightbox.com-k8s-goldmane--768f4c5c69--v76x2-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"038dba4d-abe8-4783-8541-5e36b5853cd7", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 9, 3, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cz57v.gb1.brightbox.com", ContainerID:"658eac59474d3f7c4bb0ce3d858d9cefc34199370ed0a50067a34a4698f7d7c9", Pod:"goldmane-768f4c5c69-v76x2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.192/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2a7eb558e09", MAC:"46:1d:ee:4a:a0:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 09:04:00.282796 containerd[1509]: 2025-08-13 09:04:00.263 [INFO][3979] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="658eac59474d3f7c4bb0ce3d858d9cefc34199370ed0a50067a34a4698f7d7c9" Namespace="calico-system" Pod="goldmane-768f4c5c69-v76x2" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-goldmane--768f4c5c69--v76x2-eth0"
Aug 13 09:04:00.400633 containerd[1509]: time="2025-08-13T09:04:00.400263130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 09:04:00.400633 containerd[1509]: time="2025-08-13T09:04:00.400370883Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 09:04:00.400633 containerd[1509]: time="2025-08-13T09:04:00.400395068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 09:04:00.400892 containerd[1509]: time="2025-08-13T09:04:00.400561724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 09:04:00.523343 systemd[1]: Started cri-containerd-658eac59474d3f7c4bb0ce3d858d9cefc34199370ed0a50067a34a4698f7d7c9.scope - libcontainer container 658eac59474d3f7c4bb0ce3d858d9cefc34199370ed0a50067a34a4698f7d7c9.
Aug 13 09:04:00.589146 systemd[1]: Removed slice kubepods-besteffort-pod5548a3fa_5234_4c05_aa6f_4b5c715b74b3.slice - libcontainer container kubepods-besteffort-pod5548a3fa_5234_4c05_aa6f_4b5c715b74b3.slice.
Aug 13 09:04:00.696622 sshd[3812]: PAM: Permission denied for root from 121.127.231.238
Aug 13 09:04:00.722666 containerd[1509]: 2025-08-13 09:04:00.508 [INFO][4041] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d"
Aug 13 09:04:00.722666 containerd[1509]: 2025-08-13 09:04:00.511 [INFO][4041] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" iface="eth0" netns="/var/run/netns/cni-7118d8f6-2d69-898d-5a61-e7a677e84739"
Aug 13 09:04:00.722666 containerd[1509]: 2025-08-13 09:04:00.513 [INFO][4041] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" iface="eth0" netns="/var/run/netns/cni-7118d8f6-2d69-898d-5a61-e7a677e84739"
Aug 13 09:04:00.722666 containerd[1509]: 2025-08-13 09:04:00.517 [INFO][4041] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" iface="eth0" netns="/var/run/netns/cni-7118d8f6-2d69-898d-5a61-e7a677e84739"
Aug 13 09:04:00.722666 containerd[1509]: 2025-08-13 09:04:00.517 [INFO][4041] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d"
Aug 13 09:04:00.722666 containerd[1509]: 2025-08-13 09:04:00.517 [INFO][4041] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d"
Aug 13 09:04:00.722666 containerd[1509]: 2025-08-13 09:04:00.662 [INFO][4099] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" HandleID="k8s-pod-network.c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" Workload="srv--cz57v.gb1.brightbox.com-k8s-csi--node--driver--g96xv-eth0"
Aug 13 09:04:00.722666 containerd[1509]: 2025-08-13 09:04:00.663 [INFO][4099] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 09:04:00.722666 containerd[1509]: 2025-08-13 09:04:00.663 [INFO][4099] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 09:04:00.722666 containerd[1509]: 2025-08-13 09:04:00.691 [WARNING][4099] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" HandleID="k8s-pod-network.c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" Workload="srv--cz57v.gb1.brightbox.com-k8s-csi--node--driver--g96xv-eth0"
Aug 13 09:04:00.722666 containerd[1509]: 2025-08-13 09:04:00.692 [INFO][4099] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" HandleID="k8s-pod-network.c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" Workload="srv--cz57v.gb1.brightbox.com-k8s-csi--node--driver--g96xv-eth0"
Aug 13 09:04:00.722666 containerd[1509]: 2025-08-13 09:04:00.698 [INFO][4099] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 09:04:00.722666 containerd[1509]: 2025-08-13 09:04:00.710 [INFO][4041] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d"
Aug 13 09:04:00.728529 containerd[1509]: time="2025-08-13T09:04:00.727463663Z" level=info msg="TearDown network for sandbox \"c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d\" successfully"
Aug 13 09:04:00.730048 containerd[1509]: time="2025-08-13T09:04:00.729967016Z" level=info msg="StopPodSandbox for \"c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d\" returns successfully"
Aug 13 09:04:00.735368 containerd[1509]: time="2025-08-13T09:04:00.734749957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g96xv,Uid:acc500b1-7473-42bd-b48d-00d555107b78,Namespace:calico-system,Attempt:1,}"
Aug 13 09:04:00.749668 containerd[1509]: 2025-08-13 09:04:00.479 [INFO][4024] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267"
Aug 13 09:04:00.749668 containerd[1509]: 2025-08-13 09:04:00.484 [INFO][4024] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" iface="eth0" netns="/var/run/netns/cni-7c66e8c3-6079-d2d7-6adc-d93d8fd8ce1b"
Aug 13 09:04:00.749668 containerd[1509]: 2025-08-13 09:04:00.485 [INFO][4024] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" iface="eth0" netns="/var/run/netns/cni-7c66e8c3-6079-d2d7-6adc-d93d8fd8ce1b"
Aug 13 09:04:00.749668 containerd[1509]: 2025-08-13 09:04:00.487 [INFO][4024] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" iface="eth0" netns="/var/run/netns/cni-7c66e8c3-6079-d2d7-6adc-d93d8fd8ce1b"
Aug 13 09:04:00.749668 containerd[1509]: 2025-08-13 09:04:00.487 [INFO][4024] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267"
Aug 13 09:04:00.749668 containerd[1509]: 2025-08-13 09:04:00.487 [INFO][4024] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267"
Aug 13 09:04:00.749668 containerd[1509]: 2025-08-13 09:04:00.705 [INFO][4092] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" HandleID="k8s-pod-network.f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" Workload="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wpdgx-eth0"
Aug 13 09:04:00.749668 containerd[1509]: 2025-08-13 09:04:00.707 [INFO][4092] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 09:04:00.749668 containerd[1509]: 2025-08-13 09:04:00.707 [INFO][4092] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 09:04:00.749668 containerd[1509]: 2025-08-13 09:04:00.728 [WARNING][4092] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" HandleID="k8s-pod-network.f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" Workload="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wpdgx-eth0"
Aug 13 09:04:00.749668 containerd[1509]: 2025-08-13 09:04:00.728 [INFO][4092] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" HandleID="k8s-pod-network.f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" Workload="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wpdgx-eth0"
Aug 13 09:04:00.749668 containerd[1509]: 2025-08-13 09:04:00.733 [INFO][4092] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 09:04:00.749668 containerd[1509]: 2025-08-13 09:04:00.738 [INFO][4024] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267"
Aug 13 09:04:00.752035 containerd[1509]: time="2025-08-13T09:04:00.749775963Z" level=info msg="TearDown network for sandbox \"f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267\" successfully"
Aug 13 09:04:00.752035 containerd[1509]: time="2025-08-13T09:04:00.749823503Z" level=info msg="StopPodSandbox for \"f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267\" returns successfully"
Aug 13 09:04:00.752035 containerd[1509]: time="2025-08-13T09:04:00.751449362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wpdgx,Uid:58f120ab-dea2-40e9-9eb7-22d71f6b8425,Namespace:kube-system,Attempt:1,}"
Aug 13 09:04:00.818614 systemd[1]: Created slice kubepods-besteffort-pod403c2164_00d2_4b03_ab77_8e5ebd57bd87.slice - libcontainer container kubepods-besteffort-pod403c2164_00d2_4b03_ab77_8e5ebd57bd87.slice.
Aug 13 09:04:00.863966 containerd[1509]: 2025-08-13 09:04:00.546 [INFO][4037] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef"
Aug 13 09:04:00.863966 containerd[1509]: 2025-08-13 09:04:00.547 [INFO][4037] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef" iface="eth0" netns="/var/run/netns/cni-6252a306-8daf-1f39-1c9d-d98f9c7e1d1a"
Aug 13 09:04:00.863966 containerd[1509]: 2025-08-13 09:04:00.547 [INFO][4037] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef" iface="eth0" netns="/var/run/netns/cni-6252a306-8daf-1f39-1c9d-d98f9c7e1d1a"
Aug 13 09:04:00.863966 containerd[1509]: 2025-08-13 09:04:00.553 [INFO][4037] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef" iface="eth0" netns="/var/run/netns/cni-6252a306-8daf-1f39-1c9d-d98f9c7e1d1a"
Aug 13 09:04:00.863966 containerd[1509]: 2025-08-13 09:04:00.553 [INFO][4037] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef"
Aug 13 09:04:00.863966 containerd[1509]: 2025-08-13 09:04:00.553 [INFO][4037] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef"
Aug 13 09:04:00.863966 containerd[1509]: 2025-08-13 09:04:00.763 [INFO][4112] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef" HandleID="k8s-pod-network.effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--rzm28-eth0"
Aug 13 09:04:00.863966 containerd[1509]: 2025-08-13 09:04:00.764 [INFO][4112] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 09:04:00.863966 containerd[1509]: 2025-08-13 09:04:00.764 [INFO][4112] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 09:04:00.863966 containerd[1509]: 2025-08-13 09:04:00.835 [WARNING][4112] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef" HandleID="k8s-pod-network.effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--rzm28-eth0"
Aug 13 09:04:00.863966 containerd[1509]: 2025-08-13 09:04:00.835 [INFO][4112] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef" HandleID="k8s-pod-network.effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--rzm28-eth0"
Aug 13 09:04:00.863966 containerd[1509]: 2025-08-13 09:04:00.848 [INFO][4112] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 09:04:00.863966 containerd[1509]: 2025-08-13 09:04:00.855 [INFO][4037] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef"
Aug 13 09:04:00.865369 containerd[1509]: time="2025-08-13T09:04:00.865159735Z" level=info msg="TearDown network for sandbox \"effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef\" successfully"
Aug 13 09:04:00.867131 containerd[1509]: time="2025-08-13T09:04:00.865359571Z" level=info msg="StopPodSandbox for \"effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef\" returns successfully"
Aug 13 09:04:00.866194 systemd[1]: run-containerd-runc-k8s.io-658eac59474d3f7c4bb0ce3d858d9cefc34199370ed0a50067a34a4698f7d7c9-runc.3GEr5U.mount: Deactivated successfully.
Aug 13 09:04:00.866340 systemd[1]: run-netns-cni\x2d7c66e8c3\x2d6079\x2dd2d7\x2d6adc\x2dd93d8fd8ce1b.mount: Deactivated successfully.
Aug 13 09:04:00.866490 systemd[1]: run-netns-cni\x2d7118d8f6\x2d2d69\x2d898d\x2d5a61\x2de7a677e84739.mount: Deactivated successfully.
Aug 13 09:04:00.878599 containerd[1509]: time="2025-08-13T09:04:00.872161007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c795f975d-rzm28,Uid:c09c3a84-7d45-4d14-9d0c-95a02288ec6b,Namespace:calico-apiserver,Attempt:1,}"
Aug 13 09:04:00.879318 systemd[1]: run-netns-cni\x2d6252a306\x2d8daf\x2d1f39\x2d1c9d\x2dd98f9c7e1d1a.mount: Deactivated successfully.
Aug 13 09:04:00.896794 sshd[3812]: Received disconnect from 121.127.231.238 port 12104:11: [preauth]
Aug 13 09:04:00.900287 sshd[3812]: Disconnected from authenticating user root 121.127.231.238 port 12104 [preauth]
Aug 13 09:04:00.920472 kubelet[2681]: I0813 09:04:00.920399 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/403c2164-00d2-4b03-ab77-8e5ebd57bd87-whisker-ca-bundle\") pod \"whisker-6f659df8cf-rs4mw\" (UID: \"403c2164-00d2-4b03-ab77-8e5ebd57bd87\") " pod="calico-system/whisker-6f659df8cf-rs4mw"
Aug 13 09:04:00.920634 kubelet[2681]: I0813 09:04:00.920493 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/403c2164-00d2-4b03-ab77-8e5ebd57bd87-whisker-backend-key-pair\") pod \"whisker-6f659df8cf-rs4mw\" (UID: \"403c2164-00d2-4b03-ab77-8e5ebd57bd87\") " pod="calico-system/whisker-6f659df8cf-rs4mw"
Aug 13 09:04:00.920634 kubelet[2681]: I0813 09:04:00.920539 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvt5n\" (UniqueName: \"kubernetes.io/projected/403c2164-00d2-4b03-ab77-8e5ebd57bd87-kube-api-access-mvt5n\") pod \"whisker-6f659df8cf-rs4mw\" (UID: \"403c2164-00d2-4b03-ab77-8e5ebd57bd87\") " pod="calico-system/whisker-6f659df8cf-rs4mw"
Aug 13 09:04:00.925002 systemd[1]: sshd@10-10.230.18.154:22-121.127.231.238:12104.service: Deactivated successfully.
Aug 13 09:04:01.064874 containerd[1509]: time="2025-08-13T09:04:01.064134095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-v76x2,Uid:038dba4d-abe8-4783-8541-5e36b5853cd7,Namespace:calico-system,Attempt:1,} returns sandbox id \"658eac59474d3f7c4bb0ce3d858d9cefc34199370ed0a50067a34a4698f7d7c9\""
Aug 13 09:04:01.074789 containerd[1509]: time="2025-08-13T09:04:01.074739751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\""
Aug 13 09:04:01.121611 systemd[1]: Started sshd@11-10.230.18.154:22-121.127.231.238:17982.service - OpenSSH per-connection server daemon (121.127.231.238:17982).
Aug 13 09:04:01.132170 containerd[1509]: time="2025-08-13T09:04:01.132105629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f659df8cf-rs4mw,Uid:403c2164-00d2-4b03-ab77-8e5ebd57bd87,Namespace:calico-system,Attempt:0,}"
Aug 13 09:04:01.229850 containerd[1509]: time="2025-08-13T09:04:01.229796164Z" level=info msg="StopPodSandbox for \"5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256\""
Aug 13 09:04:01.231151 containerd[1509]: time="2025-08-13T09:04:01.231119919Z" level=info msg="StopPodSandbox for \"76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12\""
Aug 13 09:04:01.245792 kubelet[2681]: I0813 09:04:01.245055 2681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5548a3fa-5234-4c05-aa6f-4b5c715b74b3" path="/var/lib/kubelet/pods/5548a3fa-5234-4c05-aa6f-4b5c715b74b3/volumes"
Aug 13 09:04:01.328632 systemd-networkd[1428]: calib5f22b124a8: Link UP
Aug 13 09:04:01.337758 systemd-networkd[1428]: calib5f22b124a8: Gained carrier
Aug 13 09:04:01.388493 containerd[1509]: 2025-08-13 09:04:00.860 [INFO][4141] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Aug 13 09:04:01.388493 containerd[1509]: 2025-08-13 09:04:00.936 [INFO][4141] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wpdgx-eth0 coredns-668d6bf9bc- kube-system 58f120ab-dea2-40e9-9eb7-22d71f6b8425 943 0 2025-08-13 09:03:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-cz57v.gb1.brightbox.com coredns-668d6bf9bc-wpdgx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib5f22b124a8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1fad944df7dd5af59af9a60a15321833039d994fe1c4dc80d48d1e1ad1f444c1" Namespace="kube-system" Pod="coredns-668d6bf9bc-wpdgx" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wpdgx-"
Aug 13 09:04:01.388493 containerd[1509]: 2025-08-13 09:04:00.936 [INFO][4141] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1fad944df7dd5af59af9a60a15321833039d994fe1c4dc80d48d1e1ad1f444c1" Namespace="kube-system" Pod="coredns-668d6bf9bc-wpdgx" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wpdgx-eth0"
Aug 13 09:04:01.388493 containerd[1509]: 2025-08-13 09:04:01.168 [INFO][4177] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1fad944df7dd5af59af9a60a15321833039d994fe1c4dc80d48d1e1ad1f444c1" HandleID="k8s-pod-network.1fad944df7dd5af59af9a60a15321833039d994fe1c4dc80d48d1e1ad1f444c1" Workload="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wpdgx-eth0"
Aug 13 09:04:01.388493 containerd[1509]: 2025-08-13 09:04:01.168 [INFO][4177] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1fad944df7dd5af59af9a60a15321833039d994fe1c4dc80d48d1e1ad1f444c1" HandleID="k8s-pod-network.1fad944df7dd5af59af9a60a15321833039d994fe1c4dc80d48d1e1ad1f444c1" Workload="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wpdgx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a04a0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-cz57v.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-wpdgx", "timestamp":"2025-08-13 09:04:01.168499677 +0000 UTC"}, Hostname:"srv-cz57v.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 13 09:04:01.388493 containerd[1509]: 2025-08-13 09:04:01.168 [INFO][4177] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 09:04:01.388493 containerd[1509]: 2025-08-13 09:04:01.168 [INFO][4177] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 09:04:01.388493 containerd[1509]: 2025-08-13 09:04:01.169 [INFO][4177] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-cz57v.gb1.brightbox.com'
Aug 13 09:04:01.388493 containerd[1509]: 2025-08-13 09:04:01.183 [INFO][4177] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1fad944df7dd5af59af9a60a15321833039d994fe1c4dc80d48d1e1ad1f444c1" host="srv-cz57v.gb1.brightbox.com"
Aug 13 09:04:01.388493 containerd[1509]: 2025-08-13 09:04:01.196 [INFO][4177] ipam/ipam.go 394: Looking up existing affinities for host host="srv-cz57v.gb1.brightbox.com"
Aug 13 09:04:01.388493 containerd[1509]: 2025-08-13 09:04:01.222 [INFO][4177] ipam/ipam.go 511: Trying affinity for 192.168.114.192/26 host="srv-cz57v.gb1.brightbox.com"
Aug 13 09:04:01.388493 containerd[1509]: 2025-08-13 09:04:01.231 [INFO][4177] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.192/26 host="srv-cz57v.gb1.brightbox.com"
Aug 13 09:04:01.388493 containerd[1509]: 2025-08-13 09:04:01.239 [INFO][4177] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.192/26 host="srv-cz57v.gb1.brightbox.com"
Aug 13 09:04:01.388493 containerd[1509]: 2025-08-13 09:04:01.239 [INFO][4177] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.114.192/26 handle="k8s-pod-network.1fad944df7dd5af59af9a60a15321833039d994fe1c4dc80d48d1e1ad1f444c1" host="srv-cz57v.gb1.brightbox.com"
Aug 13 09:04:01.388493 containerd[1509]: 2025-08-13 09:04:01.247 [INFO][4177] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1fad944df7dd5af59af9a60a15321833039d994fe1c4dc80d48d1e1ad1f444c1
Aug 13 09:04:01.388493 containerd[1509]: 2025-08-13 09:04:01.260 [INFO][4177] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.114.192/26 handle="k8s-pod-network.1fad944df7dd5af59af9a60a15321833039d994fe1c4dc80d48d1e1ad1f444c1" host="srv-cz57v.gb1.brightbox.com"
Aug 13 09:04:01.388493 containerd[1509]: 2025-08-13 09:04:01.273 [INFO][4177] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.114.194/26] block=192.168.114.192/26 handle="k8s-pod-network.1fad944df7dd5af59af9a60a15321833039d994fe1c4dc80d48d1e1ad1f444c1" host="srv-cz57v.gb1.brightbox.com"
Aug 13 09:04:01.388493 containerd[1509]: 2025-08-13 09:04:01.273 [INFO][4177] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.194/26] handle="k8s-pod-network.1fad944df7dd5af59af9a60a15321833039d994fe1c4dc80d48d1e1ad1f444c1" host="srv-cz57v.gb1.brightbox.com"
Aug 13 09:04:01.388493 containerd[1509]: 2025-08-13 09:04:01.274 [INFO][4177] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
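[Editor's aside, not part of the captured log: the IPAM entries above show Calico's block-affinity flow. The host holds an affinity for the /26 block 192.168.114.192/26 and hands out individual addresses from it, here 192.168.114.194 for the coredns pod, after 192.168.114.192 went to the goldmane pod. A minimal sketch of that block arithmetic, using only Python's stdlib `ipaddress` module rather than any Calico code:]

```python
import ipaddress

# The host-affine block and the two addresses assigned in the log above.
block = ipaddress.ip_network("192.168.114.192/26")
assigned = [ipaddress.ip_address("192.168.114.192"),  # goldmane-768f4c5c69-v76x2
            ipaddress.ip_address("192.168.114.194")]  # coredns-668d6bf9bc-wpdgx

# A /26 block spans 64 addresses: .192 through .255.
print(block.num_addresses)                  # 64
print(all(ip in block for ip in assigned))  # True: both fall inside the block
```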
Aug 13 09:04:01.388493 containerd[1509]: 2025-08-13 09:04:01.274 [INFO][4177] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.194/26] IPv6=[] ContainerID="1fad944df7dd5af59af9a60a15321833039d994fe1c4dc80d48d1e1ad1f444c1" HandleID="k8s-pod-network.1fad944df7dd5af59af9a60a15321833039d994fe1c4dc80d48d1e1ad1f444c1" Workload="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wpdgx-eth0"
Aug 13 09:04:01.392900 containerd[1509]: 2025-08-13 09:04:01.294 [INFO][4141] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1fad944df7dd5af59af9a60a15321833039d994fe1c4dc80d48d1e1ad1f444c1" Namespace="kube-system" Pod="coredns-668d6bf9bc-wpdgx" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wpdgx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wpdgx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"58f120ab-dea2-40e9-9eb7-22d71f6b8425", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 9, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cz57v.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-wpdgx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib5f22b124a8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 09:04:01.392900 containerd[1509]: 2025-08-13 09:04:01.295 [INFO][4141] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.194/32] ContainerID="1fad944df7dd5af59af9a60a15321833039d994fe1c4dc80d48d1e1ad1f444c1" Namespace="kube-system" Pod="coredns-668d6bf9bc-wpdgx" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wpdgx-eth0"
Aug 13 09:04:01.392900 containerd[1509]: 2025-08-13 09:04:01.295 [INFO][4141] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib5f22b124a8 ContainerID="1fad944df7dd5af59af9a60a15321833039d994fe1c4dc80d48d1e1ad1f444c1" Namespace="kube-system" Pod="coredns-668d6bf9bc-wpdgx" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wpdgx-eth0"
Aug 13 09:04:01.392900 containerd[1509]: 2025-08-13 09:04:01.340 [INFO][4141] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1fad944df7dd5af59af9a60a15321833039d994fe1c4dc80d48d1e1ad1f444c1" Namespace="kube-system" Pod="coredns-668d6bf9bc-wpdgx" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wpdgx-eth0"
Aug 13 09:04:01.392900 containerd[1509]: 2025-08-13 09:04:01.344 [INFO][4141] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1fad944df7dd5af59af9a60a15321833039d994fe1c4dc80d48d1e1ad1f444c1" Namespace="kube-system" Pod="coredns-668d6bf9bc-wpdgx" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wpdgx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wpdgx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"58f120ab-dea2-40e9-9eb7-22d71f6b8425", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 9, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cz57v.gb1.brightbox.com", ContainerID:"1fad944df7dd5af59af9a60a15321833039d994fe1c4dc80d48d1e1ad1f444c1", Pod:"coredns-668d6bf9bc-wpdgx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib5f22b124a8", MAC:"92:02:c4:aa:d8:50", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 09:04:01.392900 containerd[1509]: 2025-08-13 09:04:01.372 [INFO][4141] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1fad944df7dd5af59af9a60a15321833039d994fe1c4dc80d48d1e1ad1f444c1" Namespace="kube-system" Pod="coredns-668d6bf9bc-wpdgx" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wpdgx-eth0"
Aug 13 09:04:01.498036 containerd[1509]: time="2025-08-13T09:04:01.493235233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 09:04:01.498036 containerd[1509]: time="2025-08-13T09:04:01.496425151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 09:04:01.498036 containerd[1509]: time="2025-08-13T09:04:01.496451352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 09:04:01.498036 containerd[1509]: time="2025-08-13T09:04:01.496624650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 09:04:01.544782 systemd-networkd[1428]: cali33a101dbcc4: Link UP
Aug 13 09:04:01.547295 systemd-networkd[1428]: cali33a101dbcc4: Gained carrier
Aug 13 09:04:01.617348 systemd[1]: Started cri-containerd-1fad944df7dd5af59af9a60a15321833039d994fe1c4dc80d48d1e1ad1f444c1.scope - libcontainer container 1fad944df7dd5af59af9a60a15321833039d994fe1c4dc80d48d1e1ad1f444c1.
Aug 13 09:04:01.636093 containerd[1509]: 2025-08-13 09:04:00.995 [INFO][4135] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 09:04:01.636093 containerd[1509]: 2025-08-13 09:04:01.055 [INFO][4135] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--cz57v.gb1.brightbox.com-k8s-csi--node--driver--g96xv-eth0 csi-node-driver- calico-system acc500b1-7473-42bd-b48d-00d555107b78 944 0 2025-08-13 09:03:31 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-cz57v.gb1.brightbox.com csi-node-driver-g96xv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali33a101dbcc4 [] [] }} ContainerID="e7f9fe99ba8a34beeb245cd141a04b916aee35da76e81c386c277e31fba90ee9" Namespace="calico-system" Pod="csi-node-driver-g96xv" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-csi--node--driver--g96xv-" Aug 13 09:04:01.636093 containerd[1509]: 2025-08-13 09:04:01.060 [INFO][4135] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e7f9fe99ba8a34beeb245cd141a04b916aee35da76e81c386c277e31fba90ee9" Namespace="calico-system" Pod="csi-node-driver-g96xv" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-csi--node--driver--g96xv-eth0" Aug 13 09:04:01.636093 containerd[1509]: 2025-08-13 09:04:01.376 [INFO][4199] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e7f9fe99ba8a34beeb245cd141a04b916aee35da76e81c386c277e31fba90ee9" HandleID="k8s-pod-network.e7f9fe99ba8a34beeb245cd141a04b916aee35da76e81c386c277e31fba90ee9" Workload="srv--cz57v.gb1.brightbox.com-k8s-csi--node--driver--g96xv-eth0" Aug 13 09:04:01.636093 containerd[1509]: 2025-08-13 09:04:01.377 [INFO][4199] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="e7f9fe99ba8a34beeb245cd141a04b916aee35da76e81c386c277e31fba90ee9" HandleID="k8s-pod-network.e7f9fe99ba8a34beeb245cd141a04b916aee35da76e81c386c277e31fba90ee9" Workload="srv--cz57v.gb1.brightbox.com-k8s-csi--node--driver--g96xv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032aac0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-cz57v.gb1.brightbox.com", "pod":"csi-node-driver-g96xv", "timestamp":"2025-08-13 09:04:01.37675013 +0000 UTC"}, Hostname:"srv-cz57v.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 09:04:01.636093 containerd[1509]: 2025-08-13 09:04:01.377 [INFO][4199] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 09:04:01.636093 containerd[1509]: 2025-08-13 09:04:01.377 [INFO][4199] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 09:04:01.636093 containerd[1509]: 2025-08-13 09:04:01.377 [INFO][4199] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-cz57v.gb1.brightbox.com' Aug 13 09:04:01.636093 containerd[1509]: 2025-08-13 09:04:01.408 [INFO][4199] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e7f9fe99ba8a34beeb245cd141a04b916aee35da76e81c386c277e31fba90ee9" host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:01.636093 containerd[1509]: 2025-08-13 09:04:01.432 [INFO][4199] ipam/ipam.go 394: Looking up existing affinities for host host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:01.636093 containerd[1509]: 2025-08-13 09:04:01.446 [INFO][4199] ipam/ipam.go 511: Trying affinity for 192.168.114.192/26 host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:01.636093 containerd[1509]: 2025-08-13 09:04:01.461 [INFO][4199] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.192/26 host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:01.636093 containerd[1509]: 2025-08-13 09:04:01.470 [INFO][4199] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.192/26 host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:01.636093 containerd[1509]: 2025-08-13 09:04:01.470 [INFO][4199] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.114.192/26 handle="k8s-pod-network.e7f9fe99ba8a34beeb245cd141a04b916aee35da76e81c386c277e31fba90ee9" host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:01.636093 containerd[1509]: 2025-08-13 09:04:01.473 [INFO][4199] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e7f9fe99ba8a34beeb245cd141a04b916aee35da76e81c386c277e31fba90ee9 Aug 13 09:04:01.636093 containerd[1509]: 2025-08-13 09:04:01.488 [INFO][4199] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.114.192/26 handle="k8s-pod-network.e7f9fe99ba8a34beeb245cd141a04b916aee35da76e81c386c277e31fba90ee9" host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:01.636093 containerd[1509]: 2025-08-13 09:04:01.515 
[INFO][4199] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.114.195/26] block=192.168.114.192/26 handle="k8s-pod-network.e7f9fe99ba8a34beeb245cd141a04b916aee35da76e81c386c277e31fba90ee9" host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:01.636093 containerd[1509]: 2025-08-13 09:04:01.515 [INFO][4199] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.195/26] handle="k8s-pod-network.e7f9fe99ba8a34beeb245cd141a04b916aee35da76e81c386c277e31fba90ee9" host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:01.636093 containerd[1509]: 2025-08-13 09:04:01.515 [INFO][4199] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 09:04:01.636093 containerd[1509]: 2025-08-13 09:04:01.515 [INFO][4199] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.195/26] IPv6=[] ContainerID="e7f9fe99ba8a34beeb245cd141a04b916aee35da76e81c386c277e31fba90ee9" HandleID="k8s-pod-network.e7f9fe99ba8a34beeb245cd141a04b916aee35da76e81c386c277e31fba90ee9" Workload="srv--cz57v.gb1.brightbox.com-k8s-csi--node--driver--g96xv-eth0" Aug 13 09:04:01.639845 containerd[1509]: 2025-08-13 09:04:01.525 [INFO][4135] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e7f9fe99ba8a34beeb245cd141a04b916aee35da76e81c386c277e31fba90ee9" Namespace="calico-system" Pod="csi-node-driver-g96xv" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-csi--node--driver--g96xv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cz57v.gb1.brightbox.com-k8s-csi--node--driver--g96xv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"acc500b1-7473-42bd-b48d-00d555107b78", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 9, 3, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", 
"k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cz57v.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-g96xv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali33a101dbcc4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 09:04:01.639845 containerd[1509]: 2025-08-13 09:04:01.526 [INFO][4135] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.195/32] ContainerID="e7f9fe99ba8a34beeb245cd141a04b916aee35da76e81c386c277e31fba90ee9" Namespace="calico-system" Pod="csi-node-driver-g96xv" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-csi--node--driver--g96xv-eth0" Aug 13 09:04:01.639845 containerd[1509]: 2025-08-13 09:04:01.526 [INFO][4135] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali33a101dbcc4 ContainerID="e7f9fe99ba8a34beeb245cd141a04b916aee35da76e81c386c277e31fba90ee9" Namespace="calico-system" Pod="csi-node-driver-g96xv" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-csi--node--driver--g96xv-eth0" Aug 13 09:04:01.639845 containerd[1509]: 2025-08-13 09:04:01.551 [INFO][4135] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e7f9fe99ba8a34beeb245cd141a04b916aee35da76e81c386c277e31fba90ee9" Namespace="calico-system" Pod="csi-node-driver-g96xv" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-csi--node--driver--g96xv-eth0" 
Aug 13 09:04:01.639845 containerd[1509]: 2025-08-13 09:04:01.558 [INFO][4135] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e7f9fe99ba8a34beeb245cd141a04b916aee35da76e81c386c277e31fba90ee9" Namespace="calico-system" Pod="csi-node-driver-g96xv" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-csi--node--driver--g96xv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cz57v.gb1.brightbox.com-k8s-csi--node--driver--g96xv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"acc500b1-7473-42bd-b48d-00d555107b78", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 9, 3, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cz57v.gb1.brightbox.com", ContainerID:"e7f9fe99ba8a34beeb245cd141a04b916aee35da76e81c386c277e31fba90ee9", Pod:"csi-node-driver-g96xv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali33a101dbcc4", MAC:"7e:89:99:be:35:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 
09:04:01.639845 containerd[1509]: 2025-08-13 09:04:01.632 [INFO][4135] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e7f9fe99ba8a34beeb245cd141a04b916aee35da76e81c386c277e31fba90ee9" Namespace="calico-system" Pod="csi-node-driver-g96xv" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-csi--node--driver--g96xv-eth0" Aug 13 09:04:01.808263 systemd-networkd[1428]: cali38ca3265cd9: Link UP Aug 13 09:04:01.814878 systemd-networkd[1428]: cali38ca3265cd9: Gained carrier Aug 13 09:04:01.893171 containerd[1509]: time="2025-08-13T09:04:01.892345454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wpdgx,Uid:58f120ab-dea2-40e9-9eb7-22d71f6b8425,Namespace:kube-system,Attempt:1,} returns sandbox id \"1fad944df7dd5af59af9a60a15321833039d994fe1c4dc80d48d1e1ad1f444c1\"" Aug 13 09:04:01.902938 containerd[1509]: 2025-08-13 09:04:01.083 [INFO][4167] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 09:04:01.902938 containerd[1509]: 2025-08-13 09:04:01.143 [INFO][4167] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--rzm28-eth0 calico-apiserver-5c795f975d- calico-apiserver c09c3a84-7d45-4d14-9d0c-95a02288ec6b 945 0 2025-08-13 09:03:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c795f975d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-cz57v.gb1.brightbox.com calico-apiserver-5c795f975d-rzm28 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali38ca3265cd9 [] [] }} ContainerID="a3cd6447829e40c03a9e69ef6acb4046862c03aa19db91e409cf6129e2e21210" Namespace="calico-apiserver" Pod="calico-apiserver-5c795f975d-rzm28" 
WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--rzm28-" Aug 13 09:04:01.902938 containerd[1509]: 2025-08-13 09:04:01.144 [INFO][4167] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a3cd6447829e40c03a9e69ef6acb4046862c03aa19db91e409cf6129e2e21210" Namespace="calico-apiserver" Pod="calico-apiserver-5c795f975d-rzm28" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--rzm28-eth0" Aug 13 09:04:01.902938 containerd[1509]: 2025-08-13 09:04:01.541 [INFO][4217] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a3cd6447829e40c03a9e69ef6acb4046862c03aa19db91e409cf6129e2e21210" HandleID="k8s-pod-network.a3cd6447829e40c03a9e69ef6acb4046862c03aa19db91e409cf6129e2e21210" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--rzm28-eth0" Aug 13 09:04:01.902938 containerd[1509]: 2025-08-13 09:04:01.541 [INFO][4217] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a3cd6447829e40c03a9e69ef6acb4046862c03aa19db91e409cf6129e2e21210" HandleID="k8s-pod-network.a3cd6447829e40c03a9e69ef6acb4046862c03aa19db91e409cf6129e2e21210" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--rzm28-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001231a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-cz57v.gb1.brightbox.com", "pod":"calico-apiserver-5c795f975d-rzm28", "timestamp":"2025-08-13 09:04:01.541046446 +0000 UTC"}, Hostname:"srv-cz57v.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 09:04:01.902938 containerd[1509]: 2025-08-13 09:04:01.541 [INFO][4217] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 09:04:01.902938 containerd[1509]: 2025-08-13 09:04:01.541 [INFO][4217] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 09:04:01.902938 containerd[1509]: 2025-08-13 09:04:01.541 [INFO][4217] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-cz57v.gb1.brightbox.com' Aug 13 09:04:01.902938 containerd[1509]: 2025-08-13 09:04:01.614 [INFO][4217] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a3cd6447829e40c03a9e69ef6acb4046862c03aa19db91e409cf6129e2e21210" host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:01.902938 containerd[1509]: 2025-08-13 09:04:01.654 [INFO][4217] ipam/ipam.go 394: Looking up existing affinities for host host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:01.902938 containerd[1509]: 2025-08-13 09:04:01.684 [INFO][4217] ipam/ipam.go 511: Trying affinity for 192.168.114.192/26 host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:01.902938 containerd[1509]: 2025-08-13 09:04:01.700 [INFO][4217] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.192/26 host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:01.902938 containerd[1509]: 2025-08-13 09:04:01.719 [INFO][4217] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.192/26 host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:01.902938 containerd[1509]: 2025-08-13 09:04:01.721 [INFO][4217] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.114.192/26 handle="k8s-pod-network.a3cd6447829e40c03a9e69ef6acb4046862c03aa19db91e409cf6129e2e21210" host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:01.902938 containerd[1509]: 2025-08-13 09:04:01.728 [INFO][4217] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a3cd6447829e40c03a9e69ef6acb4046862c03aa19db91e409cf6129e2e21210 Aug 13 09:04:01.902938 containerd[1509]: 2025-08-13 09:04:01.744 [INFO][4217] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.114.192/26 
handle="k8s-pod-network.a3cd6447829e40c03a9e69ef6acb4046862c03aa19db91e409cf6129e2e21210" host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:01.902938 containerd[1509]: 2025-08-13 09:04:01.771 [INFO][4217] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.114.196/26] block=192.168.114.192/26 handle="k8s-pod-network.a3cd6447829e40c03a9e69ef6acb4046862c03aa19db91e409cf6129e2e21210" host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:01.902938 containerd[1509]: 2025-08-13 09:04:01.771 [INFO][4217] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.196/26] handle="k8s-pod-network.a3cd6447829e40c03a9e69ef6acb4046862c03aa19db91e409cf6129e2e21210" host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:01.902938 containerd[1509]: 2025-08-13 09:04:01.771 [INFO][4217] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 09:04:01.902938 containerd[1509]: 2025-08-13 09:04:01.771 [INFO][4217] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.196/26] IPv6=[] ContainerID="a3cd6447829e40c03a9e69ef6acb4046862c03aa19db91e409cf6129e2e21210" HandleID="k8s-pod-network.a3cd6447829e40c03a9e69ef6acb4046862c03aa19db91e409cf6129e2e21210" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--rzm28-eth0" Aug 13 09:04:01.908429 containerd[1509]: 2025-08-13 09:04:01.784 [INFO][4167] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a3cd6447829e40c03a9e69ef6acb4046862c03aa19db91e409cf6129e2e21210" Namespace="calico-apiserver" Pod="calico-apiserver-5c795f975d-rzm28" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--rzm28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--rzm28-eth0", GenerateName:"calico-apiserver-5c795f975d-", Namespace:"calico-apiserver", SelfLink:"", UID:"c09c3a84-7d45-4d14-9d0c-95a02288ec6b", ResourceVersion:"945", 
Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 9, 3, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c795f975d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cz57v.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-5c795f975d-rzm28", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali38ca3265cd9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 09:04:01.908429 containerd[1509]: 2025-08-13 09:04:01.785 [INFO][4167] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.196/32] ContainerID="a3cd6447829e40c03a9e69ef6acb4046862c03aa19db91e409cf6129e2e21210" Namespace="calico-apiserver" Pod="calico-apiserver-5c795f975d-rzm28" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--rzm28-eth0" Aug 13 09:04:01.908429 containerd[1509]: 2025-08-13 09:04:01.785 [INFO][4167] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38ca3265cd9 ContainerID="a3cd6447829e40c03a9e69ef6acb4046862c03aa19db91e409cf6129e2e21210" Namespace="calico-apiserver" Pod="calico-apiserver-5c795f975d-rzm28" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--rzm28-eth0" Aug 13 09:04:01.908429 containerd[1509]: 2025-08-13 
09:04:01.824 [INFO][4167] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a3cd6447829e40c03a9e69ef6acb4046862c03aa19db91e409cf6129e2e21210" Namespace="calico-apiserver" Pod="calico-apiserver-5c795f975d-rzm28" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--rzm28-eth0" Aug 13 09:04:01.908429 containerd[1509]: 2025-08-13 09:04:01.829 [INFO][4167] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a3cd6447829e40c03a9e69ef6acb4046862c03aa19db91e409cf6129e2e21210" Namespace="calico-apiserver" Pod="calico-apiserver-5c795f975d-rzm28" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--rzm28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--rzm28-eth0", GenerateName:"calico-apiserver-5c795f975d-", Namespace:"calico-apiserver", SelfLink:"", UID:"c09c3a84-7d45-4d14-9d0c-95a02288ec6b", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 9, 3, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c795f975d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cz57v.gb1.brightbox.com", ContainerID:"a3cd6447829e40c03a9e69ef6acb4046862c03aa19db91e409cf6129e2e21210", Pod:"calico-apiserver-5c795f975d-rzm28", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.114.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali38ca3265cd9", MAC:"a6:96:17:dd:bb:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 09:04:01.908429 containerd[1509]: 2025-08-13 09:04:01.884 [INFO][4167] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a3cd6447829e40c03a9e69ef6acb4046862c03aa19db91e409cf6129e2e21210" Namespace="calico-apiserver" Pod="calico-apiserver-5c795f975d-rzm28" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--rzm28-eth0" Aug 13 09:04:01.921299 containerd[1509]: time="2025-08-13T09:04:01.909330324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 09:04:01.921299 containerd[1509]: time="2025-08-13T09:04:01.909485997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 09:04:01.921299 containerd[1509]: time="2025-08-13T09:04:01.909512955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 09:04:01.921299 containerd[1509]: time="2025-08-13T09:04:01.911728651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 09:04:01.944796 containerd[1509]: time="2025-08-13T09:04:01.944708152Z" level=info msg="CreateContainer within sandbox \"1fad944df7dd5af59af9a60a15321833039d994fe1c4dc80d48d1e1ad1f444c1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 09:04:02.033670 systemd-networkd[1428]: cali2a7eb558e09: Gained IPv6LL Aug 13 09:04:02.044394 systemd[1]: Started cri-containerd-e7f9fe99ba8a34beeb245cd141a04b916aee35da76e81c386c277e31fba90ee9.scope - libcontainer container e7f9fe99ba8a34beeb245cd141a04b916aee35da76e81c386c277e31fba90ee9. Aug 13 09:04:02.050787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2256936908.mount: Deactivated successfully. Aug 13 09:04:02.067835 containerd[1509]: time="2025-08-13T09:04:02.067350575Z" level=info msg="CreateContainer within sandbox \"1fad944df7dd5af59af9a60a15321833039d994fe1c4dc80d48d1e1ad1f444c1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"702dce6a5b1c473901fe01c8d76af1e19943a67f0ec4336b37ab5b90eb343a2b\"" Aug 13 09:04:02.072895 containerd[1509]: time="2025-08-13T09:04:02.071642093Z" level=info msg="StartContainer for \"702dce6a5b1c473901fe01c8d76af1e19943a67f0ec4336b37ab5b90eb343a2b\"" Aug 13 09:04:02.090321 systemd-networkd[1428]: cali9805613377e: Link UP Aug 13 09:04:02.092203 systemd-networkd[1428]: cali9805613377e: Gained carrier Aug 13 09:04:02.121118 containerd[1509]: time="2025-08-13T09:04:02.119118406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 09:04:02.121118 containerd[1509]: time="2025-08-13T09:04:02.119326313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 09:04:02.121118 containerd[1509]: time="2025-08-13T09:04:02.119407066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 09:04:02.121118 containerd[1509]: time="2025-08-13T09:04:02.120295325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 09:04:02.189104 containerd[1509]: 2025-08-13 09:04:01.385 [INFO][4209] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 09:04:02.189104 containerd[1509]: 2025-08-13 09:04:01.443 [INFO][4209] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--cz57v.gb1.brightbox.com-k8s-whisker--6f659df8cf--rs4mw-eth0 whisker-6f659df8cf- calico-system 403c2164-00d2-4b03-ab77-8e5ebd57bd87 958 0 2025-08-13 09:04:00 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6f659df8cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-cz57v.gb1.brightbox.com whisker-6f659df8cf-rs4mw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9805613377e [] [] }} ContainerID="ae42512f600a2bd81dc5e6e74da7f5ab47fe62ea9f04c089f4efbc2a6f7c1091" Namespace="calico-system" Pod="whisker-6f659df8cf-rs4mw" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-whisker--6f659df8cf--rs4mw-" Aug 13 09:04:02.189104 containerd[1509]: 2025-08-13 09:04:01.443 [INFO][4209] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ae42512f600a2bd81dc5e6e74da7f5ab47fe62ea9f04c089f4efbc2a6f7c1091" Namespace="calico-system" Pod="whisker-6f659df8cf-rs4mw" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-whisker--6f659df8cf--rs4mw-eth0" Aug 13 09:04:02.189104 containerd[1509]: 2025-08-13 09:04:01.743 [INFO][4277] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae42512f600a2bd81dc5e6e74da7f5ab47fe62ea9f04c089f4efbc2a6f7c1091" 
HandleID="k8s-pod-network.ae42512f600a2bd81dc5e6e74da7f5ab47fe62ea9f04c089f4efbc2a6f7c1091" Workload="srv--cz57v.gb1.brightbox.com-k8s-whisker--6f659df8cf--rs4mw-eth0" Aug 13 09:04:02.189104 containerd[1509]: 2025-08-13 09:04:01.749 [INFO][4277] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ae42512f600a2bd81dc5e6e74da7f5ab47fe62ea9f04c089f4efbc2a6f7c1091" HandleID="k8s-pod-network.ae42512f600a2bd81dc5e6e74da7f5ab47fe62ea9f04c089f4efbc2a6f7c1091" Workload="srv--cz57v.gb1.brightbox.com-k8s-whisker--6f659df8cf--rs4mw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332ac0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-cz57v.gb1.brightbox.com", "pod":"whisker-6f659df8cf-rs4mw", "timestamp":"2025-08-13 09:04:01.743839369 +0000 UTC"}, Hostname:"srv-cz57v.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 09:04:02.189104 containerd[1509]: 2025-08-13 09:04:01.749 [INFO][4277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 09:04:02.189104 containerd[1509]: 2025-08-13 09:04:01.771 [INFO][4277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 09:04:02.189104 containerd[1509]: 2025-08-13 09:04:01.775 [INFO][4277] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-cz57v.gb1.brightbox.com' Aug 13 09:04:02.189104 containerd[1509]: 2025-08-13 09:04:01.841 [INFO][4277] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ae42512f600a2bd81dc5e6e74da7f5ab47fe62ea9f04c089f4efbc2a6f7c1091" host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:02.189104 containerd[1509]: 2025-08-13 09:04:01.897 [INFO][4277] ipam/ipam.go 394: Looking up existing affinities for host host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:02.189104 containerd[1509]: 2025-08-13 09:04:01.941 [INFO][4277] ipam/ipam.go 511: Trying affinity for 192.168.114.192/26 host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:02.189104 containerd[1509]: 2025-08-13 09:04:01.958 [INFO][4277] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.192/26 host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:02.189104 containerd[1509]: 2025-08-13 09:04:01.977 [INFO][4277] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.192/26 host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:02.189104 containerd[1509]: 2025-08-13 09:04:01.977 [INFO][4277] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.114.192/26 handle="k8s-pod-network.ae42512f600a2bd81dc5e6e74da7f5ab47fe62ea9f04c089f4efbc2a6f7c1091" host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:02.189104 containerd[1509]: 2025-08-13 09:04:01.990 [INFO][4277] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ae42512f600a2bd81dc5e6e74da7f5ab47fe62ea9f04c089f4efbc2a6f7c1091 Aug 13 09:04:02.189104 containerd[1509]: 2025-08-13 09:04:02.024 [INFO][4277] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.114.192/26 handle="k8s-pod-network.ae42512f600a2bd81dc5e6e74da7f5ab47fe62ea9f04c089f4efbc2a6f7c1091" host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:02.189104 containerd[1509]: 2025-08-13 09:04:02.070 
[INFO][4277] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.114.197/26] block=192.168.114.192/26 handle="k8s-pod-network.ae42512f600a2bd81dc5e6e74da7f5ab47fe62ea9f04c089f4efbc2a6f7c1091" host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:02.189104 containerd[1509]: 2025-08-13 09:04:02.070 [INFO][4277] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.197/26] handle="k8s-pod-network.ae42512f600a2bd81dc5e6e74da7f5ab47fe62ea9f04c089f4efbc2a6f7c1091" host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:02.189104 containerd[1509]: 2025-08-13 09:04:02.071 [INFO][4277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 09:04:02.189104 containerd[1509]: 2025-08-13 09:04:02.071 [INFO][4277] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.197/26] IPv6=[] ContainerID="ae42512f600a2bd81dc5e6e74da7f5ab47fe62ea9f04c089f4efbc2a6f7c1091" HandleID="k8s-pod-network.ae42512f600a2bd81dc5e6e74da7f5ab47fe62ea9f04c089f4efbc2a6f7c1091" Workload="srv--cz57v.gb1.brightbox.com-k8s-whisker--6f659df8cf--rs4mw-eth0" Aug 13 09:04:02.194965 containerd[1509]: 2025-08-13 09:04:02.079 [INFO][4209] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ae42512f600a2bd81dc5e6e74da7f5ab47fe62ea9f04c089f4efbc2a6f7c1091" Namespace="calico-system" Pod="whisker-6f659df8cf-rs4mw" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-whisker--6f659df8cf--rs4mw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cz57v.gb1.brightbox.com-k8s-whisker--6f659df8cf--rs4mw-eth0", GenerateName:"whisker-6f659df8cf-", Namespace:"calico-system", SelfLink:"", UID:"403c2164-00d2-4b03-ab77-8e5ebd57bd87", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 9, 4, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", 
"pod-template-hash":"6f659df8cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cz57v.gb1.brightbox.com", ContainerID:"", Pod:"whisker-6f659df8cf-rs4mw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.114.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9805613377e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 09:04:02.194965 containerd[1509]: 2025-08-13 09:04:02.079 [INFO][4209] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.197/32] ContainerID="ae42512f600a2bd81dc5e6e74da7f5ab47fe62ea9f04c089f4efbc2a6f7c1091" Namespace="calico-system" Pod="whisker-6f659df8cf-rs4mw" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-whisker--6f659df8cf--rs4mw-eth0" Aug 13 09:04:02.194965 containerd[1509]: 2025-08-13 09:04:02.079 [INFO][4209] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9805613377e ContainerID="ae42512f600a2bd81dc5e6e74da7f5ab47fe62ea9f04c089f4efbc2a6f7c1091" Namespace="calico-system" Pod="whisker-6f659df8cf-rs4mw" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-whisker--6f659df8cf--rs4mw-eth0" Aug 13 09:04:02.194965 containerd[1509]: 2025-08-13 09:04:02.092 [INFO][4209] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ae42512f600a2bd81dc5e6e74da7f5ab47fe62ea9f04c089f4efbc2a6f7c1091" Namespace="calico-system" Pod="whisker-6f659df8cf-rs4mw" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-whisker--6f659df8cf--rs4mw-eth0" Aug 13 09:04:02.194965 containerd[1509]: 2025-08-13 
09:04:02.124 [INFO][4209] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ae42512f600a2bd81dc5e6e74da7f5ab47fe62ea9f04c089f4efbc2a6f7c1091" Namespace="calico-system" Pod="whisker-6f659df8cf-rs4mw" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-whisker--6f659df8cf--rs4mw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cz57v.gb1.brightbox.com-k8s-whisker--6f659df8cf--rs4mw-eth0", GenerateName:"whisker-6f659df8cf-", Namespace:"calico-system", SelfLink:"", UID:"403c2164-00d2-4b03-ab77-8e5ebd57bd87", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 9, 4, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f659df8cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cz57v.gb1.brightbox.com", ContainerID:"ae42512f600a2bd81dc5e6e74da7f5ab47fe62ea9f04c089f4efbc2a6f7c1091", Pod:"whisker-6f659df8cf-rs4mw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.114.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9805613377e", MAC:"ce:d7:a3:a3:05:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 09:04:02.194965 containerd[1509]: 2025-08-13 09:04:02.166 [INFO][4209] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="ae42512f600a2bd81dc5e6e74da7f5ab47fe62ea9f04c089f4efbc2a6f7c1091" Namespace="calico-system" Pod="whisker-6f659df8cf-rs4mw" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-whisker--6f659df8cf--rs4mw-eth0" Aug 13 09:04:02.218102 containerd[1509]: 2025-08-13 09:04:01.512 [INFO][4245] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" Aug 13 09:04:02.218102 containerd[1509]: 2025-08-13 09:04:01.512 [INFO][4245] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" iface="eth0" netns="/var/run/netns/cni-22396854-8940-a49a-927a-268a479cd203" Aug 13 09:04:02.218102 containerd[1509]: 2025-08-13 09:04:01.513 [INFO][4245] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" iface="eth0" netns="/var/run/netns/cni-22396854-8940-a49a-927a-268a479cd203" Aug 13 09:04:02.218102 containerd[1509]: 2025-08-13 09:04:01.516 [INFO][4245] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" iface="eth0" netns="/var/run/netns/cni-22396854-8940-a49a-927a-268a479cd203" Aug 13 09:04:02.218102 containerd[1509]: 2025-08-13 09:04:01.516 [INFO][4245] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" Aug 13 09:04:02.218102 containerd[1509]: 2025-08-13 09:04:01.516 [INFO][4245] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" Aug 13 09:04:02.218102 containerd[1509]: 2025-08-13 09:04:01.848 [INFO][4289] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" HandleID="k8s-pod-network.5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--z5l2b-eth0" Aug 13 09:04:02.218102 containerd[1509]: 2025-08-13 09:04:01.849 [INFO][4289] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 09:04:02.218102 containerd[1509]: 2025-08-13 09:04:02.072 [INFO][4289] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 09:04:02.218102 containerd[1509]: 2025-08-13 09:04:02.156 [WARNING][4289] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" HandleID="k8s-pod-network.5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--z5l2b-eth0" Aug 13 09:04:02.218102 containerd[1509]: 2025-08-13 09:04:02.157 [INFO][4289] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" HandleID="k8s-pod-network.5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--z5l2b-eth0" Aug 13 09:04:02.218102 containerd[1509]: 2025-08-13 09:04:02.171 [INFO][4289] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 09:04:02.218102 containerd[1509]: 2025-08-13 09:04:02.199 [INFO][4245] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" Aug 13 09:04:02.217790 systemd[1]: Started cri-containerd-a3cd6447829e40c03a9e69ef6acb4046862c03aa19db91e409cf6129e2e21210.scope - libcontainer container a3cd6447829e40c03a9e69ef6acb4046862c03aa19db91e409cf6129e2e21210. 
Aug 13 09:04:02.222873 containerd[1509]: time="2025-08-13T09:04:02.221687109Z" level=info msg="TearDown network for sandbox \"5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256\" successfully" Aug 13 09:04:02.222873 containerd[1509]: time="2025-08-13T09:04:02.221739268Z" level=info msg="StopPodSandbox for \"5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256\" returns successfully" Aug 13 09:04:02.225631 containerd[1509]: time="2025-08-13T09:04:02.224395008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c795f975d-z5l2b,Uid:16bc31e7-bfd7-43e8-b302-ea6efb7b3ff4,Namespace:calico-apiserver,Attempt:1,}" Aug 13 09:04:02.270961 containerd[1509]: 2025-08-13 09:04:01.709 [INFO][4242] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" Aug 13 09:04:02.270961 containerd[1509]: 2025-08-13 09:04:01.710 [INFO][4242] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" iface="eth0" netns="/var/run/netns/cni-b69843f1-979a-9dcb-ce57-061ed5372111" Aug 13 09:04:02.270961 containerd[1509]: 2025-08-13 09:04:01.710 [INFO][4242] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" iface="eth0" netns="/var/run/netns/cni-b69843f1-979a-9dcb-ce57-061ed5372111" Aug 13 09:04:02.270961 containerd[1509]: 2025-08-13 09:04:01.711 [INFO][4242] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" iface="eth0" netns="/var/run/netns/cni-b69843f1-979a-9dcb-ce57-061ed5372111" Aug 13 09:04:02.270961 containerd[1509]: 2025-08-13 09:04:01.711 [INFO][4242] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" Aug 13 09:04:02.270961 containerd[1509]: 2025-08-13 09:04:01.711 [INFO][4242] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" Aug 13 09:04:02.270961 containerd[1509]: 2025-08-13 09:04:02.014 [INFO][4343] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" HandleID="k8s-pod-network.76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" Workload="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--54ftd-eth0" Aug 13 09:04:02.270961 containerd[1509]: 2025-08-13 09:04:02.022 [INFO][4343] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 09:04:02.270961 containerd[1509]: 2025-08-13 09:04:02.171 [INFO][4343] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 09:04:02.270961 containerd[1509]: 2025-08-13 09:04:02.216 [WARNING][4343] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" HandleID="k8s-pod-network.76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" Workload="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--54ftd-eth0" Aug 13 09:04:02.270961 containerd[1509]: 2025-08-13 09:04:02.216 [INFO][4343] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" HandleID="k8s-pod-network.76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" Workload="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--54ftd-eth0" Aug 13 09:04:02.270961 containerd[1509]: 2025-08-13 09:04:02.238 [INFO][4343] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 09:04:02.270961 containerd[1509]: 2025-08-13 09:04:02.253 [INFO][4242] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" Aug 13 09:04:02.276656 containerd[1509]: time="2025-08-13T09:04:02.276192982Z" level=info msg="TearDown network for sandbox \"76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12\" successfully" Aug 13 09:04:02.276656 containerd[1509]: time="2025-08-13T09:04:02.276244698Z" level=info msg="StopPodSandbox for \"76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12\" returns successfully" Aug 13 09:04:02.280576 containerd[1509]: time="2025-08-13T09:04:02.280185646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-54ftd,Uid:7136f5b0-24ac-4259-bcbd-579709672a99,Namespace:kube-system,Attempt:1,}" Aug 13 09:04:02.347846 containerd[1509]: time="2025-08-13T09:04:02.346791881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 09:04:02.347846 containerd[1509]: time="2025-08-13T09:04:02.346885665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 09:04:02.347846 containerd[1509]: time="2025-08-13T09:04:02.346904251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 09:04:02.347846 containerd[1509]: time="2025-08-13T09:04:02.347059807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 09:04:02.403312 systemd[1]: Started cri-containerd-702dce6a5b1c473901fe01c8d76af1e19943a67f0ec4336b37ab5b90eb343a2b.scope - libcontainer container 702dce6a5b1c473901fe01c8d76af1e19943a67f0ec4336b37ab5b90eb343a2b. Aug 13 09:04:02.418327 containerd[1509]: time="2025-08-13T09:04:02.418280769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g96xv,Uid:acc500b1-7473-42bd-b48d-00d555107b78,Namespace:calico-system,Attempt:1,} returns sandbox id \"e7f9fe99ba8a34beeb245cd141a04b916aee35da76e81c386c277e31fba90ee9\"" Aug 13 09:04:02.465837 systemd[1]: Started cri-containerd-ae42512f600a2bd81dc5e6e74da7f5ab47fe62ea9f04c089f4efbc2a6f7c1091.scope - libcontainer container ae42512f600a2bd81dc5e6e74da7f5ab47fe62ea9f04c089f4efbc2a6f7c1091. 
Aug 13 09:04:02.545940 containerd[1509]: time="2025-08-13T09:04:02.545328891Z" level=info msg="StartContainer for \"702dce6a5b1c473901fe01c8d76af1e19943a67f0ec4336b37ab5b90eb343a2b\" returns successfully" Aug 13 09:04:02.667933 containerd[1509]: time="2025-08-13T09:04:02.667242594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c795f975d-rzm28,Uid:c09c3a84-7d45-4d14-9d0c-95a02288ec6b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a3cd6447829e40c03a9e69ef6acb4046862c03aa19db91e409cf6129e2e21210\"" Aug 13 09:04:02.673940 systemd-networkd[1428]: calib5f22b124a8: Gained IPv6LL Aug 13 09:04:02.677937 sshd[4567]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=121.127.231.238 user=root Aug 13 09:04:02.870746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4250177269.mount: Deactivated successfully. Aug 13 09:04:02.871250 systemd[1]: run-netns-cni\x2d22396854\x2d8940\x2da49a\x2d927a\x2d268a479cd203.mount: Deactivated successfully. Aug 13 09:04:02.871369 systemd[1]: run-netns-cni\x2db69843f1\x2d979a\x2d9dcb\x2dce57\x2d061ed5372111.mount: Deactivated successfully. 
Aug 13 09:04:02.930305 systemd-networkd[1428]: cali38ca3265cd9: Gained IPv6LL Aug 13 09:04:03.002252 systemd-networkd[1428]: calie0fda29b45e: Link UP Aug 13 09:04:03.006361 systemd-networkd[1428]: calie0fda29b45e: Gained carrier Aug 13 09:04:03.042105 kubelet[2681]: I0813 09:04:03.040949 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wpdgx" podStartSLOduration=50.040889519 podStartE2EDuration="50.040889519s" podCreationTimestamp="2025-08-13 09:03:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 09:04:02.752314592 +0000 UTC m=+55.742330020" watchObservedRunningTime="2025-08-13 09:04:03.040889519 +0000 UTC m=+56.030904924" Aug 13 09:04:03.051126 containerd[1509]: 2025-08-13 09:04:02.610 [INFO][4556] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 09:04:03.051126 containerd[1509]: 2025-08-13 09:04:02.671 [INFO][4556] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--54ftd-eth0 coredns-668d6bf9bc- kube-system 7136f5b0-24ac-4259-bcbd-579709672a99 974 0 2025-08-13 09:03:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-cz57v.gb1.brightbox.com coredns-668d6bf9bc-54ftd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie0fda29b45e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4ff90add781c6b67a79cd268d4298f0fecf51e166bb453fd868347747bdd6492" Namespace="kube-system" Pod="coredns-668d6bf9bc-54ftd" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--54ftd-" Aug 13 09:04:03.051126 containerd[1509]: 2025-08-13 09:04:02.671 [INFO][4556] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="4ff90add781c6b67a79cd268d4298f0fecf51e166bb453fd868347747bdd6492" Namespace="kube-system" Pod="coredns-668d6bf9bc-54ftd" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--54ftd-eth0" Aug 13 09:04:03.051126 containerd[1509]: 2025-08-13 09:04:02.833 [INFO][4601] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ff90add781c6b67a79cd268d4298f0fecf51e166bb453fd868347747bdd6492" HandleID="k8s-pod-network.4ff90add781c6b67a79cd268d4298f0fecf51e166bb453fd868347747bdd6492" Workload="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--54ftd-eth0" Aug 13 09:04:03.051126 containerd[1509]: 2025-08-13 09:04:02.833 [INFO][4601] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4ff90add781c6b67a79cd268d4298f0fecf51e166bb453fd868347747bdd6492" HandleID="k8s-pod-network.4ff90add781c6b67a79cd268d4298f0fecf51e166bb453fd868347747bdd6492" Workload="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--54ftd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003753b0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-cz57v.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-54ftd", "timestamp":"2025-08-13 09:04:02.833008574 +0000 UTC"}, Hostname:"srv-cz57v.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 09:04:03.051126 containerd[1509]: 2025-08-13 09:04:02.833 [INFO][4601] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 09:04:03.051126 containerd[1509]: 2025-08-13 09:04:02.833 [INFO][4601] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 09:04:03.051126 containerd[1509]: 2025-08-13 09:04:02.833 [INFO][4601] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-cz57v.gb1.brightbox.com' Aug 13 09:04:03.051126 containerd[1509]: 2025-08-13 09:04:02.866 [INFO][4601] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4ff90add781c6b67a79cd268d4298f0fecf51e166bb453fd868347747bdd6492" host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:03.051126 containerd[1509]: 2025-08-13 09:04:02.886 [INFO][4601] ipam/ipam.go 394: Looking up existing affinities for host host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:03.051126 containerd[1509]: 2025-08-13 09:04:02.904 [INFO][4601] ipam/ipam.go 511: Trying affinity for 192.168.114.192/26 host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:03.051126 containerd[1509]: 2025-08-13 09:04:02.909 [INFO][4601] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.192/26 host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:03.051126 containerd[1509]: 2025-08-13 09:04:02.916 [INFO][4601] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.192/26 host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:03.051126 containerd[1509]: 2025-08-13 09:04:02.917 [INFO][4601] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.114.192/26 handle="k8s-pod-network.4ff90add781c6b67a79cd268d4298f0fecf51e166bb453fd868347747bdd6492" host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:03.051126 containerd[1509]: 2025-08-13 09:04:02.922 [INFO][4601] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4ff90add781c6b67a79cd268d4298f0fecf51e166bb453fd868347747bdd6492 Aug 13 09:04:03.051126 containerd[1509]: 2025-08-13 09:04:02.943 [INFO][4601] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.114.192/26 handle="k8s-pod-network.4ff90add781c6b67a79cd268d4298f0fecf51e166bb453fd868347747bdd6492" host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:03.051126 containerd[1509]: 2025-08-13 09:04:02.979 
[INFO][4601] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.114.198/26] block=192.168.114.192/26 handle="k8s-pod-network.4ff90add781c6b67a79cd268d4298f0fecf51e166bb453fd868347747bdd6492" host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:03.051126 containerd[1509]: 2025-08-13 09:04:02.979 [INFO][4601] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.198/26] handle="k8s-pod-network.4ff90add781c6b67a79cd268d4298f0fecf51e166bb453fd868347747bdd6492" host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:03.051126 containerd[1509]: 2025-08-13 09:04:02.979 [INFO][4601] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 09:04:03.051126 containerd[1509]: 2025-08-13 09:04:02.980 [INFO][4601] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.198/26] IPv6=[] ContainerID="4ff90add781c6b67a79cd268d4298f0fecf51e166bb453fd868347747bdd6492" HandleID="k8s-pod-network.4ff90add781c6b67a79cd268d4298f0fecf51e166bb453fd868347747bdd6492" Workload="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--54ftd-eth0" Aug 13 09:04:03.055650 containerd[1509]: 2025-08-13 09:04:02.987 [INFO][4556] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4ff90add781c6b67a79cd268d4298f0fecf51e166bb453fd868347747bdd6492" Namespace="kube-system" Pod="coredns-668d6bf9bc-54ftd" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--54ftd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--54ftd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7136f5b0-24ac-4259-bcbd-579709672a99", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 9, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cz57v.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-54ftd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0fda29b45e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 09:04:03.055650 containerd[1509]: 2025-08-13 09:04:02.988 [INFO][4556] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.198/32] ContainerID="4ff90add781c6b67a79cd268d4298f0fecf51e166bb453fd868347747bdd6492" Namespace="kube-system" Pod="coredns-668d6bf9bc-54ftd" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--54ftd-eth0" Aug 13 09:04:03.055650 containerd[1509]: 2025-08-13 09:04:02.992 [INFO][4556] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie0fda29b45e ContainerID="4ff90add781c6b67a79cd268d4298f0fecf51e166bb453fd868347747bdd6492" Namespace="kube-system" Pod="coredns-668d6bf9bc-54ftd" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--54ftd-eth0" Aug 13 09:04:03.055650 containerd[1509]: 
2025-08-13 09:04:03.009 [INFO][4556] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ff90add781c6b67a79cd268d4298f0fecf51e166bb453fd868347747bdd6492" Namespace="kube-system" Pod="coredns-668d6bf9bc-54ftd" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--54ftd-eth0" Aug 13 09:04:03.055650 containerd[1509]: 2025-08-13 09:04:03.011 [INFO][4556] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4ff90add781c6b67a79cd268d4298f0fecf51e166bb453fd868347747bdd6492" Namespace="kube-system" Pod="coredns-668d6bf9bc-54ftd" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--54ftd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--54ftd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7136f5b0-24ac-4259-bcbd-579709672a99", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 9, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cz57v.gb1.brightbox.com", ContainerID:"4ff90add781c6b67a79cd268d4298f0fecf51e166bb453fd868347747bdd6492", Pod:"coredns-668d6bf9bc-54ftd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calie0fda29b45e", MAC:"7a:08:b2:d7:9c:55", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 09:04:03.055650 containerd[1509]: 2025-08-13 09:04:03.043 [INFO][4556] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4ff90add781c6b67a79cd268d4298f0fecf51e166bb453fd868347747bdd6492" Namespace="kube-system" Pod="coredns-668d6bf9bc-54ftd" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--54ftd-eth0" Aug 13 09:04:03.121400 systemd-networkd[1428]: cali33a101dbcc4: Gained IPv6LL Aug 13 09:04:03.161797 containerd[1509]: time="2025-08-13T09:04:03.161521894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f659df8cf-rs4mw,Uid:403c2164-00d2-4b03-ab77-8e5ebd57bd87,Namespace:calico-system,Attempt:0,} returns sandbox id \"ae42512f600a2bd81dc5e6e74da7f5ab47fe62ea9f04c089f4efbc2a6f7c1091\"" Aug 13 09:04:03.162379 systemd-networkd[1428]: cali2b0f5fa0500: Link UP Aug 13 09:04:03.166150 systemd-networkd[1428]: cali2b0f5fa0500: Gained carrier Aug 13 09:04:03.191877 containerd[1509]: time="2025-08-13T09:04:03.191272616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 09:04:03.191877 containerd[1509]: time="2025-08-13T09:04:03.191343954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 09:04:03.191877 containerd[1509]: time="2025-08-13T09:04:03.191362138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 09:04:03.191877 containerd[1509]: time="2025-08-13T09:04:03.191485950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 09:04:03.216358 containerd[1509]: 2025-08-13 09:04:02.580 [INFO][4514] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 09:04:03.216358 containerd[1509]: 2025-08-13 09:04:02.683 [INFO][4514] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--z5l2b-eth0 calico-apiserver-5c795f975d- calico-apiserver 16bc31e7-bfd7-43e8-b302-ea6efb7b3ff4 971 0 2025-08-13 09:03:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c795f975d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-cz57v.gb1.brightbox.com calico-apiserver-5c795f975d-z5l2b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2b0f5fa0500 [] [] }} ContainerID="197efb43b8526415e059cd00ed053d10c16d7668d8b3e2b42c49108b864ade89" Namespace="calico-apiserver" Pod="calico-apiserver-5c795f975d-z5l2b" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--z5l2b-" Aug 13 09:04:03.216358 containerd[1509]: 2025-08-13 09:04:02.683 [INFO][4514] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="197efb43b8526415e059cd00ed053d10c16d7668d8b3e2b42c49108b864ade89" Namespace="calico-apiserver" Pod="calico-apiserver-5c795f975d-z5l2b" 
WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--z5l2b-eth0" Aug 13 09:04:03.216358 containerd[1509]: 2025-08-13 09:04:02.846 [INFO][4607] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="197efb43b8526415e059cd00ed053d10c16d7668d8b3e2b42c49108b864ade89" HandleID="k8s-pod-network.197efb43b8526415e059cd00ed053d10c16d7668d8b3e2b42c49108b864ade89" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--z5l2b-eth0" Aug 13 09:04:03.216358 containerd[1509]: 2025-08-13 09:04:02.848 [INFO][4607] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="197efb43b8526415e059cd00ed053d10c16d7668d8b3e2b42c49108b864ade89" HandleID="k8s-pod-network.197efb43b8526415e059cd00ed053d10c16d7668d8b3e2b42c49108b864ade89" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--z5l2b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d3640), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-cz57v.gb1.brightbox.com", "pod":"calico-apiserver-5c795f975d-z5l2b", "timestamp":"2025-08-13 09:04:02.846820288 +0000 UTC"}, Hostname:"srv-cz57v.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 09:04:03.216358 containerd[1509]: 2025-08-13 09:04:02.848 [INFO][4607] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 09:04:03.216358 containerd[1509]: 2025-08-13 09:04:02.980 [INFO][4607] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 09:04:03.216358 containerd[1509]: 2025-08-13 09:04:02.981 [INFO][4607] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-cz57v.gb1.brightbox.com'
Aug 13 09:04:03.216358 containerd[1509]: 2025-08-13 09:04:03.007 [INFO][4607] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.197efb43b8526415e059cd00ed053d10c16d7668d8b3e2b42c49108b864ade89" host="srv-cz57v.gb1.brightbox.com"
Aug 13 09:04:03.216358 containerd[1509]: 2025-08-13 09:04:03.027 [INFO][4607] ipam/ipam.go 394: Looking up existing affinities for host host="srv-cz57v.gb1.brightbox.com"
Aug 13 09:04:03.216358 containerd[1509]: 2025-08-13 09:04:03.055 [INFO][4607] ipam/ipam.go 511: Trying affinity for 192.168.114.192/26 host="srv-cz57v.gb1.brightbox.com"
Aug 13 09:04:03.216358 containerd[1509]: 2025-08-13 09:04:03.068 [INFO][4607] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.192/26 host="srv-cz57v.gb1.brightbox.com"
Aug 13 09:04:03.216358 containerd[1509]: 2025-08-13 09:04:03.080 [INFO][4607] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.192/26 host="srv-cz57v.gb1.brightbox.com"
Aug 13 09:04:03.216358 containerd[1509]: 2025-08-13 09:04:03.081 [INFO][4607] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.114.192/26 handle="k8s-pod-network.197efb43b8526415e059cd00ed053d10c16d7668d8b3e2b42c49108b864ade89" host="srv-cz57v.gb1.brightbox.com"
Aug 13 09:04:03.216358 containerd[1509]: 2025-08-13 09:04:03.086 [INFO][4607] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.197efb43b8526415e059cd00ed053d10c16d7668d8b3e2b42c49108b864ade89
Aug 13 09:04:03.216358 containerd[1509]: 2025-08-13 09:04:03.101 [INFO][4607] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.114.192/26 handle="k8s-pod-network.197efb43b8526415e059cd00ed053d10c16d7668d8b3e2b42c49108b864ade89" host="srv-cz57v.gb1.brightbox.com"
Aug 13 09:04:03.216358 containerd[1509]: 2025-08-13 09:04:03.140 [INFO][4607] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.114.199/26] block=192.168.114.192/26 handle="k8s-pod-network.197efb43b8526415e059cd00ed053d10c16d7668d8b3e2b42c49108b864ade89" host="srv-cz57v.gb1.brightbox.com"
Aug 13 09:04:03.216358 containerd[1509]: 2025-08-13 09:04:03.142 [INFO][4607] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.199/26] handle="k8s-pod-network.197efb43b8526415e059cd00ed053d10c16d7668d8b3e2b42c49108b864ade89" host="srv-cz57v.gb1.brightbox.com"
Aug 13 09:04:03.216358 containerd[1509]: 2025-08-13 09:04:03.142 [INFO][4607] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 09:04:03.216358 containerd[1509]: 2025-08-13 09:04:03.142 [INFO][4607] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.199/26] IPv6=[] ContainerID="197efb43b8526415e059cd00ed053d10c16d7668d8b3e2b42c49108b864ade89" HandleID="k8s-pod-network.197efb43b8526415e059cd00ed053d10c16d7668d8b3e2b42c49108b864ade89" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--z5l2b-eth0"
Aug 13 09:04:03.220455 containerd[1509]: 2025-08-13 09:04:03.150 [INFO][4514] cni-plugin/k8s.go 418: Populated endpoint ContainerID="197efb43b8526415e059cd00ed053d10c16d7668d8b3e2b42c49108b864ade89" Namespace="calico-apiserver" Pod="calico-apiserver-5c795f975d-z5l2b" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--z5l2b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--z5l2b-eth0", GenerateName:"calico-apiserver-5c795f975d-", Namespace:"calico-apiserver", SelfLink:"", UID:"16bc31e7-bfd7-43e8-b302-ea6efb7b3ff4", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 9, 3, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c795f975d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cz57v.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-5c795f975d-z5l2b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2b0f5fa0500", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 09:04:03.220455 containerd[1509]: 2025-08-13 09:04:03.150 [INFO][4514] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.199/32] ContainerID="197efb43b8526415e059cd00ed053d10c16d7668d8b3e2b42c49108b864ade89" Namespace="calico-apiserver" Pod="calico-apiserver-5c795f975d-z5l2b" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--z5l2b-eth0"
Aug 13 09:04:03.220455 containerd[1509]: 2025-08-13 09:04:03.151 [INFO][4514] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b0f5fa0500 ContainerID="197efb43b8526415e059cd00ed053d10c16d7668d8b3e2b42c49108b864ade89" Namespace="calico-apiserver" Pod="calico-apiserver-5c795f975d-z5l2b" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--z5l2b-eth0"
Aug 13 09:04:03.220455 containerd[1509]: 2025-08-13 09:04:03.172 [INFO][4514] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="197efb43b8526415e059cd00ed053d10c16d7668d8b3e2b42c49108b864ade89" Namespace="calico-apiserver" Pod="calico-apiserver-5c795f975d-z5l2b" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--z5l2b-eth0"
Aug 13 09:04:03.220455 containerd[1509]: 2025-08-13 09:04:03.176 [INFO][4514] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="197efb43b8526415e059cd00ed053d10c16d7668d8b3e2b42c49108b864ade89" Namespace="calico-apiserver" Pod="calico-apiserver-5c795f975d-z5l2b" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--z5l2b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--z5l2b-eth0", GenerateName:"calico-apiserver-5c795f975d-", Namespace:"calico-apiserver", SelfLink:"", UID:"16bc31e7-bfd7-43e8-b302-ea6efb7b3ff4", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 9, 3, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c795f975d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cz57v.gb1.brightbox.com", ContainerID:"197efb43b8526415e059cd00ed053d10c16d7668d8b3e2b42c49108b864ade89", Pod:"calico-apiserver-5c795f975d-z5l2b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2b0f5fa0500", MAC:"1a:90:fc:5e:bb:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 09:04:03.220455 containerd[1509]: 2025-08-13 09:04:03.203 [INFO][4514] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="197efb43b8526415e059cd00ed053d10c16d7668d8b3e2b42c49108b864ade89" Namespace="calico-apiserver" Pod="calico-apiserver-5c795f975d-z5l2b" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--z5l2b-eth0"
Aug 13 09:04:03.283957 systemd[1]: run-containerd-runc-k8s.io-4ff90add781c6b67a79cd268d4298f0fecf51e166bb453fd868347747bdd6492-runc.ETLO64.mount: Deactivated successfully.
Aug 13 09:04:03.304711 systemd[1]: Started cri-containerd-4ff90add781c6b67a79cd268d4298f0fecf51e166bb453fd868347747bdd6492.scope - libcontainer container 4ff90add781c6b67a79cd268d4298f0fecf51e166bb453fd868347747bdd6492.
Aug 13 09:04:03.441291 containerd[1509]: time="2025-08-13T09:04:03.418179673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 09:04:03.441291 containerd[1509]: time="2025-08-13T09:04:03.418286270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 09:04:03.441291 containerd[1509]: time="2025-08-13T09:04:03.418376195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 09:04:03.441291 containerd[1509]: time="2025-08-13T09:04:03.418611855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 09:04:03.522992 systemd[1]: Started cri-containerd-197efb43b8526415e059cd00ed053d10c16d7668d8b3e2b42c49108b864ade89.scope - libcontainer container 197efb43b8526415e059cd00ed053d10c16d7668d8b3e2b42c49108b864ade89.
Aug 13 09:04:03.551764 containerd[1509]: time="2025-08-13T09:04:03.551661454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-54ftd,Uid:7136f5b0-24ac-4259-bcbd-579709672a99,Namespace:kube-system,Attempt:1,} returns sandbox id \"4ff90add781c6b67a79cd268d4298f0fecf51e166bb453fd868347747bdd6492\""
Aug 13 09:04:03.718677 containerd[1509]: time="2025-08-13T09:04:03.718495584Z" level=info msg="CreateContainer within sandbox \"4ff90add781c6b67a79cd268d4298f0fecf51e166bb453fd868347747bdd6492\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 13 09:04:03.739647 containerd[1509]: time="2025-08-13T09:04:03.739584164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c795f975d-z5l2b,Uid:16bc31e7-bfd7-43e8-b302-ea6efb7b3ff4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"197efb43b8526415e059cd00ed053d10c16d7668d8b3e2b42c49108b864ade89\""
Aug 13 09:04:03.877661 containerd[1509]: time="2025-08-13T09:04:03.877560605Z" level=info msg="CreateContainer within sandbox \"4ff90add781c6b67a79cd268d4298f0fecf51e166bb453fd868347747bdd6492\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"99557d549443cd9a73c843d28e08b52eb598e49f69ebdf87d2752d946406e066\""
Aug 13 09:04:03.907542 containerd[1509]: time="2025-08-13T09:04:03.907480453Z" level=info msg="StartContainer for \"99557d549443cd9a73c843d28e08b52eb598e49f69ebdf87d2752d946406e066\""
Aug 13 09:04:04.061359 systemd[1]: Started cri-containerd-99557d549443cd9a73c843d28e08b52eb598e49f69ebdf87d2752d946406e066.scope - libcontainer container 99557d549443cd9a73c843d28e08b52eb598e49f69ebdf87d2752d946406e066.
Aug 13 09:04:04.081531 systemd-networkd[1428]: cali9805613377e: Gained IPv6LL
Aug 13 09:04:04.198156 containerd[1509]: time="2025-08-13T09:04:04.197676787Z" level=info msg="StartContainer for \"99557d549443cd9a73c843d28e08b52eb598e49f69ebdf87d2752d946406e066\" returns successfully"
Aug 13 09:04:04.212461 systemd-networkd[1428]: cali2b0f5fa0500: Gained IPv6LL
Aug 13 09:04:04.498168 kernel: bpftool[4779]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Aug 13 09:04:04.869877 kubelet[2681]: I0813 09:04:04.869555 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-54ftd" podStartSLOduration=51.869398276 podStartE2EDuration="51.869398276s" podCreationTimestamp="2025-08-13 09:03:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 09:04:04.840661813 +0000 UTC m=+57.830677238" watchObservedRunningTime="2025-08-13 09:04:04.869398276 +0000 UTC m=+57.859413676"
Aug 13 09:04:04.874413 sshd[4204]: PAM: Permission denied for root from 121.127.231.238
Aug 13 09:04:04.977970 systemd-networkd[1428]: calie0fda29b45e: Gained IPv6LL
Aug 13 09:04:05.270183 systemd-networkd[1428]: vxlan.calico: Link UP
Aug 13 09:04:05.271359 systemd-networkd[1428]: vxlan.calico: Gained carrier
Aug 13 09:04:05.294457 sshd[4824]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=121.127.231.238 user=root
Aug 13 09:04:06.246435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2627591904.mount: Deactivated successfully.
Aug 13 09:04:06.897433 systemd-networkd[1428]: vxlan.calico: Gained IPv6LL
Aug 13 09:04:07.398340 containerd[1509]: time="2025-08-13T09:04:07.398138190Z" level=info msg="StopPodSandbox for \"6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244\""
Aug 13 09:04:07.559358 containerd[1509]: time="2025-08-13T09:04:07.493158354Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308"
Aug 13 09:04:07.570835 containerd[1509]: time="2025-08-13T09:04:07.569868634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:04:07.575121 containerd[1509]: time="2025-08-13T09:04:07.575054214Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 6.500214656s"
Aug 13 09:04:07.579112 containerd[1509]: time="2025-08-13T09:04:07.576747250Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:04:07.580581 containerd[1509]: time="2025-08-13T09:04:07.580524953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:04:07.586264 containerd[1509]: time="2025-08-13T09:04:07.586215403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\""
Aug 13 09:04:07.601985 containerd[1509]: time="2025-08-13T09:04:07.601933383Z" level=info msg="CreateContainer within sandbox \"658eac59474d3f7c4bb0ce3d858d9cefc34199370ed0a50067a34a4698f7d7c9\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Aug 13 09:04:07.603454 containerd[1509]: time="2025-08-13T09:04:07.603218802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\""
Aug 13 09:04:07.654537 containerd[1509]: time="2025-08-13T09:04:07.654295087Z" level=info msg="CreateContainer within sandbox \"658eac59474d3f7c4bb0ce3d858d9cefc34199370ed0a50067a34a4698f7d7c9\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"c4c71d12e42015873ec3da1ffbfcb234e55daff4443a01ef3ba1359755b79986\""
Aug 13 09:04:07.655496 containerd[1509]: time="2025-08-13T09:04:07.655377183Z" level=info msg="StartContainer for \"c4c71d12e42015873ec3da1ffbfcb234e55daff4443a01ef3ba1359755b79986\""
Aug 13 09:04:07.762123 containerd[1509]: 2025-08-13 09:04:07.653 [WARNING][4915] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cz57v.gb1.brightbox.com-k8s-goldmane--768f4c5c69--v76x2-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"038dba4d-abe8-4783-8541-5e36b5853cd7", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 9, 3, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cz57v.gb1.brightbox.com", ContainerID:"658eac59474d3f7c4bb0ce3d858d9cefc34199370ed0a50067a34a4698f7d7c9", Pod:"goldmane-768f4c5c69-v76x2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.192/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2a7eb558e09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 09:04:07.762123 containerd[1509]: 2025-08-13 09:04:07.655 [INFO][4915] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244"
Aug 13 09:04:07.762123 containerd[1509]: 2025-08-13 09:04:07.655 [INFO][4915] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244" iface="eth0" netns=""
Aug 13 09:04:07.762123 containerd[1509]: 2025-08-13 09:04:07.655 [INFO][4915] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244"
Aug 13 09:04:07.762123 containerd[1509]: 2025-08-13 09:04:07.655 [INFO][4915] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244"
Aug 13 09:04:07.762123 containerd[1509]: 2025-08-13 09:04:07.718 [INFO][4927] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244" HandleID="k8s-pod-network.6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244" Workload="srv--cz57v.gb1.brightbox.com-k8s-goldmane--768f4c5c69--v76x2-eth0"
Aug 13 09:04:07.762123 containerd[1509]: 2025-08-13 09:04:07.719 [INFO][4927] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 09:04:07.762123 containerd[1509]: 2025-08-13 09:04:07.719 [INFO][4927] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 09:04:07.762123 containerd[1509]: 2025-08-13 09:04:07.749 [WARNING][4927] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244" HandleID="k8s-pod-network.6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244" Workload="srv--cz57v.gb1.brightbox.com-k8s-goldmane--768f4c5c69--v76x2-eth0"
Aug 13 09:04:07.762123 containerd[1509]: 2025-08-13 09:04:07.749 [INFO][4927] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244" HandleID="k8s-pod-network.6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244" Workload="srv--cz57v.gb1.brightbox.com-k8s-goldmane--768f4c5c69--v76x2-eth0"
Aug 13 09:04:07.762123 containerd[1509]: 2025-08-13 09:04:07.755 [INFO][4927] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 09:04:07.762123 containerd[1509]: 2025-08-13 09:04:07.758 [INFO][4915] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244"
Aug 13 09:04:07.768839 containerd[1509]: time="2025-08-13T09:04:07.762205107Z" level=info msg="TearDown network for sandbox \"6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244\" successfully"
Aug 13 09:04:07.768839 containerd[1509]: time="2025-08-13T09:04:07.762243719Z" level=info msg="StopPodSandbox for \"6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244\" returns successfully"
Aug 13 09:04:07.813089 containerd[1509]: time="2025-08-13T09:04:07.812981953Z" level=info msg="RemovePodSandbox for \"6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244\""
Aug 13 09:04:07.821522 containerd[1509]: time="2025-08-13T09:04:07.821054523Z" level=info msg="Forcibly stopping sandbox \"6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244\""
Aug 13 09:04:07.831407 systemd[1]: Started cri-containerd-c4c71d12e42015873ec3da1ffbfcb234e55daff4443a01ef3ba1359755b79986.scope - libcontainer container c4c71d12e42015873ec3da1ffbfcb234e55daff4443a01ef3ba1359755b79986.
Aug 13 09:04:07.902820 sshd[4204]: PAM: Permission denied for root from 121.127.231.238
Aug 13 09:04:07.958342 containerd[1509]: time="2025-08-13T09:04:07.955990760Z" level=info msg="StartContainer for \"c4c71d12e42015873ec3da1ffbfcb234e55daff4443a01ef3ba1359755b79986\" returns successfully"
Aug 13 09:04:08.052064 containerd[1509]: 2025-08-13 09:04:07.922 [WARNING][4961] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cz57v.gb1.brightbox.com-k8s-goldmane--768f4c5c69--v76x2-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"038dba4d-abe8-4783-8541-5e36b5853cd7", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 9, 3, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cz57v.gb1.brightbox.com", ContainerID:"658eac59474d3f7c4bb0ce3d858d9cefc34199370ed0a50067a34a4698f7d7c9", Pod:"goldmane-768f4c5c69-v76x2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.192/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2a7eb558e09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 09:04:08.052064 containerd[1509]: 2025-08-13 09:04:07.923 [INFO][4961] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244"
Aug 13 09:04:08.052064 containerd[1509]: 2025-08-13 09:04:07.923 [INFO][4961] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244" iface="eth0" netns=""
Aug 13 09:04:08.052064 containerd[1509]: 2025-08-13 09:04:07.923 [INFO][4961] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244"
Aug 13 09:04:08.052064 containerd[1509]: 2025-08-13 09:04:07.923 [INFO][4961] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244"
Aug 13 09:04:08.052064 containerd[1509]: 2025-08-13 09:04:08.029 [INFO][4976] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244" HandleID="k8s-pod-network.6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244" Workload="srv--cz57v.gb1.brightbox.com-k8s-goldmane--768f4c5c69--v76x2-eth0"
Aug 13 09:04:08.052064 containerd[1509]: 2025-08-13 09:04:08.030 [INFO][4976] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 09:04:08.052064 containerd[1509]: 2025-08-13 09:04:08.030 [INFO][4976] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 09:04:08.052064 containerd[1509]: 2025-08-13 09:04:08.042 [WARNING][4976] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244" HandleID="k8s-pod-network.6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244" Workload="srv--cz57v.gb1.brightbox.com-k8s-goldmane--768f4c5c69--v76x2-eth0"
Aug 13 09:04:08.052064 containerd[1509]: 2025-08-13 09:04:08.042 [INFO][4976] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244" HandleID="k8s-pod-network.6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244" Workload="srv--cz57v.gb1.brightbox.com-k8s-goldmane--768f4c5c69--v76x2-eth0"
Aug 13 09:04:08.052064 containerd[1509]: 2025-08-13 09:04:08.044 [INFO][4976] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 09:04:08.052064 containerd[1509]: 2025-08-13 09:04:08.048 [INFO][4961] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244"
Aug 13 09:04:08.052064 containerd[1509]: time="2025-08-13T09:04:08.051914824Z" level=info msg="TearDown network for sandbox \"6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244\" successfully"
Aug 13 09:04:08.198776 containerd[1509]: time="2025-08-13T09:04:08.197416255Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 09:04:08.220922 containerd[1509]: time="2025-08-13T09:04:08.219984649Z" level=info msg="RemovePodSandbox \"6c987bb4625d9c119e0babc3256cd447e46d86e5de7d705736d0706f02474244\" returns successfully"
Aug 13 09:04:08.221192 containerd[1509]: time="2025-08-13T09:04:08.221066301Z" level=info msg="StopPodSandbox for \"effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef\""
Aug 13 09:04:08.317548 sshd[4996]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=121.127.231.238 user=root
Aug 13 09:04:08.360129 containerd[1509]: 2025-08-13 09:04:08.279 [WARNING][5006] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--rzm28-eth0", GenerateName:"calico-apiserver-5c795f975d-", Namespace:"calico-apiserver", SelfLink:"", UID:"c09c3a84-7d45-4d14-9d0c-95a02288ec6b", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 9, 3, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c795f975d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cz57v.gb1.brightbox.com", ContainerID:"a3cd6447829e40c03a9e69ef6acb4046862c03aa19db91e409cf6129e2e21210", Pod:"calico-apiserver-5c795f975d-rzm28", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali38ca3265cd9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 09:04:08.360129 containerd[1509]: 2025-08-13 09:04:08.282 [INFO][5006] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef"
Aug 13 09:04:08.360129 containerd[1509]: 2025-08-13 09:04:08.282 [INFO][5006] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef" iface="eth0" netns=""
Aug 13 09:04:08.360129 containerd[1509]: 2025-08-13 09:04:08.282 [INFO][5006] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef"
Aug 13 09:04:08.360129 containerd[1509]: 2025-08-13 09:04:08.282 [INFO][5006] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef"
Aug 13 09:04:08.360129 containerd[1509]: 2025-08-13 09:04:08.335 [INFO][5013] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef" HandleID="k8s-pod-network.effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--rzm28-eth0"
Aug 13 09:04:08.360129 containerd[1509]: 2025-08-13 09:04:08.336 [INFO][5013] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 09:04:08.360129 containerd[1509]: 2025-08-13 09:04:08.336 [INFO][5013] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 09:04:08.360129 containerd[1509]: 2025-08-13 09:04:08.353 [WARNING][5013] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef" HandleID="k8s-pod-network.effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--rzm28-eth0"
Aug 13 09:04:08.360129 containerd[1509]: 2025-08-13 09:04:08.353 [INFO][5013] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef" HandleID="k8s-pod-network.effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--rzm28-eth0"
Aug 13 09:04:08.360129 containerd[1509]: 2025-08-13 09:04:08.355 [INFO][5013] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 09:04:08.360129 containerd[1509]: 2025-08-13 09:04:08.358 [INFO][5006] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef"
Aug 13 09:04:08.360129 containerd[1509]: time="2025-08-13T09:04:08.359936311Z" level=info msg="TearDown network for sandbox \"effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef\" successfully"
Aug 13 09:04:08.360129 containerd[1509]: time="2025-08-13T09:04:08.359973133Z" level=info msg="StopPodSandbox for \"effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef\" returns successfully"
Aug 13 09:04:08.362216 containerd[1509]: time="2025-08-13T09:04:08.360615532Z" level=info msg="RemovePodSandbox for \"effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef\""
Aug 13 09:04:08.362216 containerd[1509]: time="2025-08-13T09:04:08.360653144Z" level=info msg="Forcibly stopping sandbox \"effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef\""
Aug 13 09:04:08.480804 containerd[1509]: 2025-08-13 09:04:08.427 [WARNING][5027] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--rzm28-eth0", GenerateName:"calico-apiserver-5c795f975d-", Namespace:"calico-apiserver", SelfLink:"", UID:"c09c3a84-7d45-4d14-9d0c-95a02288ec6b", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 9, 3, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c795f975d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cz57v.gb1.brightbox.com", ContainerID:"a3cd6447829e40c03a9e69ef6acb4046862c03aa19db91e409cf6129e2e21210", Pod:"calico-apiserver-5c795f975d-rzm28", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali38ca3265cd9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 09:04:08.480804 containerd[1509]: 2025-08-13 09:04:08.428 [INFO][5027] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef"
Aug 13 09:04:08.480804 containerd[1509]: 2025-08-13 09:04:08.428 [INFO][5027] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef" iface="eth0" netns=""
Aug 13 09:04:08.480804 containerd[1509]: 2025-08-13 09:04:08.428 [INFO][5027] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef"
Aug 13 09:04:08.480804 containerd[1509]: 2025-08-13 09:04:08.428 [INFO][5027] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef"
Aug 13 09:04:08.480804 containerd[1509]: 2025-08-13 09:04:08.462 [INFO][5034] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef" HandleID="k8s-pod-network.effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--rzm28-eth0"
Aug 13 09:04:08.480804 containerd[1509]: 2025-08-13 09:04:08.463 [INFO][5034] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 09:04:08.480804 containerd[1509]: 2025-08-13 09:04:08.463 [INFO][5034] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 09:04:08.480804 containerd[1509]: 2025-08-13 09:04:08.473 [WARNING][5034] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef" HandleID="k8s-pod-network.effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--rzm28-eth0" Aug 13 09:04:08.480804 containerd[1509]: 2025-08-13 09:04:08.473 [INFO][5034] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef" HandleID="k8s-pod-network.effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--rzm28-eth0" Aug 13 09:04:08.480804 containerd[1509]: 2025-08-13 09:04:08.476 [INFO][5034] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 09:04:08.480804 containerd[1509]: 2025-08-13 09:04:08.478 [INFO][5027] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef" Aug 13 09:04:08.485784 containerd[1509]: time="2025-08-13T09:04:08.481271044Z" level=info msg="TearDown network for sandbox \"effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef\" successfully" Aug 13 09:04:08.487486 containerd[1509]: time="2025-08-13T09:04:08.487249666Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 09:04:08.487486 containerd[1509]: time="2025-08-13T09:04:08.487337287Z" level=info msg="RemovePodSandbox \"effb07d8b8e5c6b94470749fb485dad34087aefc3ab9c70df20f3abbeee51eef\" returns successfully" Aug 13 09:04:08.488644 containerd[1509]: time="2025-08-13T09:04:08.488252651Z" level=info msg="StopPodSandbox for \"76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12\"" Aug 13 09:04:08.606598 containerd[1509]: 2025-08-13 09:04:08.544 [WARNING][5048] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--54ftd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7136f5b0-24ac-4259-bcbd-579709672a99", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 9, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cz57v.gb1.brightbox.com", ContainerID:"4ff90add781c6b67a79cd268d4298f0fecf51e166bb453fd868347747bdd6492", Pod:"coredns-668d6bf9bc-54ftd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0fda29b45e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 09:04:08.606598 containerd[1509]: 2025-08-13 09:04:08.545 [INFO][5048] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" Aug 13 09:04:08.606598 containerd[1509]: 2025-08-13 09:04:08.545 [INFO][5048] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" iface="eth0" netns="" Aug 13 09:04:08.606598 containerd[1509]: 2025-08-13 09:04:08.545 [INFO][5048] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" Aug 13 09:04:08.606598 containerd[1509]: 2025-08-13 09:04:08.545 [INFO][5048] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" Aug 13 09:04:08.606598 containerd[1509]: 2025-08-13 09:04:08.587 [INFO][5055] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" HandleID="k8s-pod-network.76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" Workload="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--54ftd-eth0" Aug 13 09:04:08.606598 containerd[1509]: 2025-08-13 09:04:08.587 [INFO][5055] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 09:04:08.606598 containerd[1509]: 2025-08-13 09:04:08.587 [INFO][5055] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 09:04:08.606598 containerd[1509]: 2025-08-13 09:04:08.600 [WARNING][5055] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" HandleID="k8s-pod-network.76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" Workload="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--54ftd-eth0" Aug 13 09:04:08.606598 containerd[1509]: 2025-08-13 09:04:08.600 [INFO][5055] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" HandleID="k8s-pod-network.76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" Workload="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--54ftd-eth0" Aug 13 09:04:08.606598 containerd[1509]: 2025-08-13 09:04:08.602 [INFO][5055] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 09:04:08.606598 containerd[1509]: 2025-08-13 09:04:08.604 [INFO][5048] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" Aug 13 09:04:08.608970 containerd[1509]: time="2025-08-13T09:04:08.607288732Z" level=info msg="TearDown network for sandbox \"76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12\" successfully" Aug 13 09:04:08.608970 containerd[1509]: time="2025-08-13T09:04:08.607326953Z" level=info msg="StopPodSandbox for \"76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12\" returns successfully" Aug 13 09:04:08.608970 containerd[1509]: time="2025-08-13T09:04:08.608434470Z" level=info msg="RemovePodSandbox for \"76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12\"" Aug 13 09:04:08.608970 containerd[1509]: time="2025-08-13T09:04:08.608471479Z" level=info msg="Forcibly stopping sandbox \"76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12\"" Aug 13 09:04:08.715100 containerd[1509]: 2025-08-13 09:04:08.659 [WARNING][5070] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--54ftd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7136f5b0-24ac-4259-bcbd-579709672a99", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 9, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cz57v.gb1.brightbox.com", ContainerID:"4ff90add781c6b67a79cd268d4298f0fecf51e166bb453fd868347747bdd6492", Pod:"coredns-668d6bf9bc-54ftd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0fda29b45e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 09:04:08.715100 containerd[1509]: 
2025-08-13 09:04:08.660 [INFO][5070] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" Aug 13 09:04:08.715100 containerd[1509]: 2025-08-13 09:04:08.660 [INFO][5070] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" iface="eth0" netns="" Aug 13 09:04:08.715100 containerd[1509]: 2025-08-13 09:04:08.660 [INFO][5070] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" Aug 13 09:04:08.715100 containerd[1509]: 2025-08-13 09:04:08.660 [INFO][5070] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" Aug 13 09:04:08.715100 containerd[1509]: 2025-08-13 09:04:08.694 [INFO][5077] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" HandleID="k8s-pod-network.76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" Workload="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--54ftd-eth0" Aug 13 09:04:08.715100 containerd[1509]: 2025-08-13 09:04:08.694 [INFO][5077] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 09:04:08.715100 containerd[1509]: 2025-08-13 09:04:08.694 [INFO][5077] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 09:04:08.715100 containerd[1509]: 2025-08-13 09:04:08.707 [WARNING][5077] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" HandleID="k8s-pod-network.76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" Workload="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--54ftd-eth0" Aug 13 09:04:08.715100 containerd[1509]: 2025-08-13 09:04:08.707 [INFO][5077] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" HandleID="k8s-pod-network.76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" Workload="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--54ftd-eth0" Aug 13 09:04:08.715100 containerd[1509]: 2025-08-13 09:04:08.710 [INFO][5077] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 09:04:08.715100 containerd[1509]: 2025-08-13 09:04:08.712 [INFO][5070] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12" Aug 13 09:04:08.715100 containerd[1509]: time="2025-08-13T09:04:08.714967818Z" level=info msg="TearDown network for sandbox \"76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12\" successfully" Aug 13 09:04:08.721915 containerd[1509]: time="2025-08-13T09:04:08.721259291Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 09:04:08.721915 containerd[1509]: time="2025-08-13T09:04:08.721465319Z" level=info msg="RemovePodSandbox \"76ab6a2d4c787c69d42df47e0cf56ebc3241dcc6d4eab8c4db281ec038d31f12\" returns successfully" Aug 13 09:04:08.723478 containerd[1509]: time="2025-08-13T09:04:08.723236367Z" level=info msg="StopPodSandbox for \"5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256\"" Aug 13 09:04:08.824857 containerd[1509]: 2025-08-13 09:04:08.775 [WARNING][5091] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--z5l2b-eth0", GenerateName:"calico-apiserver-5c795f975d-", Namespace:"calico-apiserver", SelfLink:"", UID:"16bc31e7-bfd7-43e8-b302-ea6efb7b3ff4", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 9, 3, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c795f975d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cz57v.gb1.brightbox.com", ContainerID:"197efb43b8526415e059cd00ed053d10c16d7668d8b3e2b42c49108b864ade89", Pod:"calico-apiserver-5c795f975d-z5l2b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2b0f5fa0500", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 09:04:08.824857 containerd[1509]: 2025-08-13 09:04:08.775 [INFO][5091] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" Aug 13 09:04:08.824857 containerd[1509]: 2025-08-13 09:04:08.775 [INFO][5091] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" iface="eth0" netns="" Aug 13 09:04:08.824857 containerd[1509]: 2025-08-13 09:04:08.775 [INFO][5091] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" Aug 13 09:04:08.824857 containerd[1509]: 2025-08-13 09:04:08.775 [INFO][5091] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" Aug 13 09:04:08.824857 containerd[1509]: 2025-08-13 09:04:08.807 [INFO][5099] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" HandleID="k8s-pod-network.5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--z5l2b-eth0" Aug 13 09:04:08.824857 containerd[1509]: 2025-08-13 09:04:08.807 [INFO][5099] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 09:04:08.824857 containerd[1509]: 2025-08-13 09:04:08.807 [INFO][5099] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 09:04:08.824857 containerd[1509]: 2025-08-13 09:04:08.817 [WARNING][5099] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" HandleID="k8s-pod-network.5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--z5l2b-eth0" Aug 13 09:04:08.824857 containerd[1509]: 2025-08-13 09:04:08.818 [INFO][5099] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" HandleID="k8s-pod-network.5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--z5l2b-eth0" Aug 13 09:04:08.824857 containerd[1509]: 2025-08-13 09:04:08.820 [INFO][5099] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 09:04:08.824857 containerd[1509]: 2025-08-13 09:04:08.822 [INFO][5091] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" Aug 13 09:04:08.824857 containerd[1509]: time="2025-08-13T09:04:08.824822556Z" level=info msg="TearDown network for sandbox \"5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256\" successfully" Aug 13 09:04:08.827540 containerd[1509]: time="2025-08-13T09:04:08.824868893Z" level=info msg="StopPodSandbox for \"5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256\" returns successfully" Aug 13 09:04:08.827540 containerd[1509]: time="2025-08-13T09:04:08.827337425Z" level=info msg="RemovePodSandbox for \"5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256\"" Aug 13 09:04:08.827540 containerd[1509]: time="2025-08-13T09:04:08.827417384Z" level=info msg="Forcibly stopping sandbox \"5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256\"" Aug 13 09:04:08.875403 kubelet[2681]: I0813 09:04:08.873846 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-v76x2" 
podStartSLOduration=32.357577462 podStartE2EDuration="38.873779007s" podCreationTimestamp="2025-08-13 09:03:30 +0000 UTC" firstStartedPulling="2025-08-13 09:04:01.071739671 +0000 UTC m=+54.061755062" lastFinishedPulling="2025-08-13 09:04:07.587941211 +0000 UTC m=+60.577956607" observedRunningTime="2025-08-13 09:04:08.87163848 +0000 UTC m=+61.861653878" watchObservedRunningTime="2025-08-13 09:04:08.873779007 +0000 UTC m=+61.863794428" Aug 13 09:04:09.021898 containerd[1509]: 2025-08-13 09:04:08.942 [WARNING][5113] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--z5l2b-eth0", GenerateName:"calico-apiserver-5c795f975d-", Namespace:"calico-apiserver", SelfLink:"", UID:"16bc31e7-bfd7-43e8-b302-ea6efb7b3ff4", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 9, 3, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c795f975d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cz57v.gb1.brightbox.com", ContainerID:"197efb43b8526415e059cd00ed053d10c16d7668d8b3e2b42c49108b864ade89", Pod:"calico-apiserver-5c795f975d-z5l2b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.114.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2b0f5fa0500", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 09:04:09.021898 containerd[1509]: 2025-08-13 09:04:08.942 [INFO][5113] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" Aug 13 09:04:09.021898 containerd[1509]: 2025-08-13 09:04:08.942 [INFO][5113] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" iface="eth0" netns="" Aug 13 09:04:09.021898 containerd[1509]: 2025-08-13 09:04:08.942 [INFO][5113] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" Aug 13 09:04:09.021898 containerd[1509]: 2025-08-13 09:04:08.942 [INFO][5113] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" Aug 13 09:04:09.021898 containerd[1509]: 2025-08-13 09:04:08.999 [INFO][5138] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" HandleID="k8s-pod-network.5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--z5l2b-eth0" Aug 13 09:04:09.021898 containerd[1509]: 2025-08-13 09:04:09.000 [INFO][5138] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 09:04:09.021898 containerd[1509]: 2025-08-13 09:04:09.000 [INFO][5138] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 09:04:09.021898 containerd[1509]: 2025-08-13 09:04:09.013 [WARNING][5138] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" HandleID="k8s-pod-network.5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--z5l2b-eth0" Aug 13 09:04:09.021898 containerd[1509]: 2025-08-13 09:04:09.013 [INFO][5138] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" HandleID="k8s-pod-network.5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--apiserver--5c795f975d--z5l2b-eth0" Aug 13 09:04:09.021898 containerd[1509]: 2025-08-13 09:04:09.016 [INFO][5138] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 09:04:09.021898 containerd[1509]: 2025-08-13 09:04:09.019 [INFO][5113] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256" Aug 13 09:04:09.023187 containerd[1509]: time="2025-08-13T09:04:09.021967678Z" level=info msg="TearDown network for sandbox \"5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256\" successfully" Aug 13 09:04:09.028327 containerd[1509]: time="2025-08-13T09:04:09.028160756Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 09:04:09.028460 containerd[1509]: time="2025-08-13T09:04:09.028344313Z" level=info msg="RemovePodSandbox \"5750cf8105301f9c9a3f9017e1633bd01e4e12e2633b7bde855fef2fdabac256\" returns successfully" Aug 13 09:04:09.029357 containerd[1509]: time="2025-08-13T09:04:09.029311876Z" level=info msg="StopPodSandbox for \"c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d\"" Aug 13 09:04:09.167108 containerd[1509]: 2025-08-13 09:04:09.101 [WARNING][5154] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cz57v.gb1.brightbox.com-k8s-csi--node--driver--g96xv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"acc500b1-7473-42bd-b48d-00d555107b78", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 9, 3, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cz57v.gb1.brightbox.com", ContainerID:"e7f9fe99ba8a34beeb245cd141a04b916aee35da76e81c386c277e31fba90ee9", Pod:"csi-node-driver-g96xv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali33a101dbcc4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 09:04:09.167108 containerd[1509]: 2025-08-13 09:04:09.101 [INFO][5154] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" Aug 13 09:04:09.167108 containerd[1509]: 2025-08-13 09:04:09.101 [INFO][5154] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" iface="eth0" netns="" Aug 13 09:04:09.167108 containerd[1509]: 2025-08-13 09:04:09.102 [INFO][5154] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" Aug 13 09:04:09.167108 containerd[1509]: 2025-08-13 09:04:09.102 [INFO][5154] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" Aug 13 09:04:09.167108 containerd[1509]: 2025-08-13 09:04:09.147 [INFO][5165] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" HandleID="k8s-pod-network.c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" Workload="srv--cz57v.gb1.brightbox.com-k8s-csi--node--driver--g96xv-eth0" Aug 13 09:04:09.167108 containerd[1509]: 2025-08-13 09:04:09.147 [INFO][5165] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 09:04:09.167108 containerd[1509]: 2025-08-13 09:04:09.147 [INFO][5165] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 09:04:09.167108 containerd[1509]: 2025-08-13 09:04:09.159 [WARNING][5165] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" HandleID="k8s-pod-network.c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" Workload="srv--cz57v.gb1.brightbox.com-k8s-csi--node--driver--g96xv-eth0" Aug 13 09:04:09.167108 containerd[1509]: 2025-08-13 09:04:09.159 [INFO][5165] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" HandleID="k8s-pod-network.c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" Workload="srv--cz57v.gb1.brightbox.com-k8s-csi--node--driver--g96xv-eth0" Aug 13 09:04:09.167108 containerd[1509]: 2025-08-13 09:04:09.161 [INFO][5165] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 09:04:09.167108 containerd[1509]: 2025-08-13 09:04:09.164 [INFO][5154] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" Aug 13 09:04:09.167868 containerd[1509]: time="2025-08-13T09:04:09.167191680Z" level=info msg="TearDown network for sandbox \"c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d\" successfully" Aug 13 09:04:09.167868 containerd[1509]: time="2025-08-13T09:04:09.167247729Z" level=info msg="StopPodSandbox for \"c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d\" returns successfully" Aug 13 09:04:09.168996 containerd[1509]: time="2025-08-13T09:04:09.168929946Z" level=info msg="RemovePodSandbox for \"c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d\"" Aug 13 09:04:09.169135 containerd[1509]: time="2025-08-13T09:04:09.168995198Z" level=info msg="Forcibly stopping sandbox \"c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d\"" Aug 13 09:04:09.229661 containerd[1509]: time="2025-08-13T09:04:09.229337146Z" level=info msg="StopPodSandbox for \"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6\"" Aug 13 09:04:09.314280 containerd[1509]: 
2025-08-13 09:04:09.238 [WARNING][5180] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cz57v.gb1.brightbox.com-k8s-csi--node--driver--g96xv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"acc500b1-7473-42bd-b48d-00d555107b78", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 9, 3, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cz57v.gb1.brightbox.com", ContainerID:"e7f9fe99ba8a34beeb245cd141a04b916aee35da76e81c386c277e31fba90ee9", Pod:"csi-node-driver-g96xv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali33a101dbcc4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 09:04:09.314280 containerd[1509]: 2025-08-13 09:04:09.239 [INFO][5180] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" Aug 13 09:04:09.314280 containerd[1509]: 2025-08-13 09:04:09.239 [INFO][5180] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" iface="eth0" netns="" Aug 13 09:04:09.314280 containerd[1509]: 2025-08-13 09:04:09.239 [INFO][5180] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" Aug 13 09:04:09.314280 containerd[1509]: 2025-08-13 09:04:09.239 [INFO][5180] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" Aug 13 09:04:09.314280 containerd[1509]: 2025-08-13 09:04:09.288 [INFO][5193] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" HandleID="k8s-pod-network.c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" Workload="srv--cz57v.gb1.brightbox.com-k8s-csi--node--driver--g96xv-eth0" Aug 13 09:04:09.314280 containerd[1509]: 2025-08-13 09:04:09.289 [INFO][5193] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 09:04:09.314280 containerd[1509]: 2025-08-13 09:04:09.289 [INFO][5193] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 09:04:09.314280 containerd[1509]: 2025-08-13 09:04:09.302 [WARNING][5193] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" HandleID="k8s-pod-network.c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" Workload="srv--cz57v.gb1.brightbox.com-k8s-csi--node--driver--g96xv-eth0" Aug 13 09:04:09.314280 containerd[1509]: 2025-08-13 09:04:09.302 [INFO][5193] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" HandleID="k8s-pod-network.c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" Workload="srv--cz57v.gb1.brightbox.com-k8s-csi--node--driver--g96xv-eth0" Aug 13 09:04:09.314280 containerd[1509]: 2025-08-13 09:04:09.304 [INFO][5193] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 09:04:09.314280 containerd[1509]: 2025-08-13 09:04:09.307 [INFO][5180] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d" Aug 13 09:04:09.314280 containerd[1509]: time="2025-08-13T09:04:09.314258872Z" level=info msg="TearDown network for sandbox \"c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d\" successfully" Aug 13 09:04:09.322055 containerd[1509]: time="2025-08-13T09:04:09.321763414Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 09:04:09.322055 containerd[1509]: time="2025-08-13T09:04:09.321866055Z" level=info msg="RemovePodSandbox \"c62e45450c579c354551081dfdaf2a4c7bb958f562a242c765c10cb612e44f5d\" returns successfully" Aug 13 09:04:09.323043 containerd[1509]: time="2025-08-13T09:04:09.323007262Z" level=info msg="StopPodSandbox for \"f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267\"" Aug 13 09:04:09.414318 containerd[1509]: 2025-08-13 09:04:09.348 [INFO][5197] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6" Aug 13 09:04:09.414318 containerd[1509]: 2025-08-13 09:04:09.350 [INFO][5197] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6" iface="eth0" netns="/var/run/netns/cni-817de573-e168-28c1-a7b8-0ac1affe3914" Aug 13 09:04:09.414318 containerd[1509]: 2025-08-13 09:04:09.351 [INFO][5197] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6" iface="eth0" netns="/var/run/netns/cni-817de573-e168-28c1-a7b8-0ac1affe3914" Aug 13 09:04:09.414318 containerd[1509]: 2025-08-13 09:04:09.351 [INFO][5197] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6" iface="eth0" netns="/var/run/netns/cni-817de573-e168-28c1-a7b8-0ac1affe3914" Aug 13 09:04:09.414318 containerd[1509]: 2025-08-13 09:04:09.351 [INFO][5197] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6" Aug 13 09:04:09.414318 containerd[1509]: 2025-08-13 09:04:09.351 [INFO][5197] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6" Aug 13 09:04:09.414318 containerd[1509]: 2025-08-13 09:04:09.390 [INFO][5222] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6" HandleID="k8s-pod-network.b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--kube--controllers--588668cdb7--gwwpt-eth0" Aug 13 09:04:09.414318 containerd[1509]: 2025-08-13 09:04:09.390 [INFO][5222] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 09:04:09.414318 containerd[1509]: 2025-08-13 09:04:09.390 [INFO][5222] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 09:04:09.414318 containerd[1509]: 2025-08-13 09:04:09.403 [WARNING][5222] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6" HandleID="k8s-pod-network.b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--kube--controllers--588668cdb7--gwwpt-eth0" Aug 13 09:04:09.414318 containerd[1509]: 2025-08-13 09:04:09.403 [INFO][5222] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6" HandleID="k8s-pod-network.b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--kube--controllers--588668cdb7--gwwpt-eth0" Aug 13 09:04:09.414318 containerd[1509]: 2025-08-13 09:04:09.406 [INFO][5222] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 09:04:09.414318 containerd[1509]: 2025-08-13 09:04:09.411 [INFO][5197] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6" Aug 13 09:04:09.417894 containerd[1509]: time="2025-08-13T09:04:09.415119486Z" level=info msg="TearDown network for sandbox \"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6\" successfully" Aug 13 09:04:09.417894 containerd[1509]: time="2025-08-13T09:04:09.417217676Z" level=info msg="StopPodSandbox for \"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6\" returns successfully" Aug 13 09:04:09.419964 systemd[1]: run-netns-cni\x2d817de573\x2de168\x2d28c1\x2da7b8\x2d0ac1affe3914.mount: Deactivated successfully. 
Aug 13 09:04:09.422493 containerd[1509]: time="2025-08-13T09:04:09.421436680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-588668cdb7-gwwpt,Uid:4fef4dc0-3721-4011-811d-4e894024c3f2,Namespace:calico-system,Attempt:1,}" Aug 13 09:04:09.505255 containerd[1509]: 2025-08-13 09:04:09.410 [WARNING][5217] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wpdgx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"58f120ab-dea2-40e9-9eb7-22d71f6b8425", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 9, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cz57v.gb1.brightbox.com", ContainerID:"1fad944df7dd5af59af9a60a15321833039d994fe1c4dc80d48d1e1ad1f444c1", Pod:"coredns-668d6bf9bc-wpdgx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib5f22b124a8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 09:04:09.505255 containerd[1509]: 2025-08-13 09:04:09.411 [INFO][5217] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" Aug 13 09:04:09.505255 containerd[1509]: 2025-08-13 09:04:09.411 [INFO][5217] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" iface="eth0" netns="" Aug 13 09:04:09.505255 containerd[1509]: 2025-08-13 09:04:09.411 [INFO][5217] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" Aug 13 09:04:09.505255 containerd[1509]: 2025-08-13 09:04:09.411 [INFO][5217] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" Aug 13 09:04:09.505255 containerd[1509]: 2025-08-13 09:04:09.479 [INFO][5232] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" HandleID="k8s-pod-network.f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" Workload="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wpdgx-eth0" Aug 13 09:04:09.505255 containerd[1509]: 2025-08-13 09:04:09.480 [INFO][5232] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 09:04:09.505255 containerd[1509]: 2025-08-13 09:04:09.480 [INFO][5232] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 09:04:09.505255 containerd[1509]: 2025-08-13 09:04:09.494 [WARNING][5232] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" HandleID="k8s-pod-network.f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" Workload="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wpdgx-eth0" Aug 13 09:04:09.505255 containerd[1509]: 2025-08-13 09:04:09.495 [INFO][5232] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" HandleID="k8s-pod-network.f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" Workload="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wpdgx-eth0" Aug 13 09:04:09.505255 containerd[1509]: 2025-08-13 09:04:09.497 [INFO][5232] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 09:04:09.505255 containerd[1509]: 2025-08-13 09:04:09.501 [INFO][5217] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" Aug 13 09:04:09.506417 containerd[1509]: time="2025-08-13T09:04:09.505329468Z" level=info msg="TearDown network for sandbox \"f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267\" successfully" Aug 13 09:04:09.506417 containerd[1509]: time="2025-08-13T09:04:09.505429355Z" level=info msg="StopPodSandbox for \"f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267\" returns successfully" Aug 13 09:04:09.506510 containerd[1509]: time="2025-08-13T09:04:09.506406061Z" level=info msg="RemovePodSandbox for \"f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267\"" Aug 13 09:04:09.506510 containerd[1509]: time="2025-08-13T09:04:09.506445138Z" level=info msg="Forcibly stopping sandbox \"f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267\"" Aug 13 09:04:09.671039 systemd-networkd[1428]: cali0e2adb4b95d: Link UP Aug 13 09:04:09.674329 systemd-networkd[1428]: cali0e2adb4b95d: Gained carrier Aug 13 09:04:09.713203 containerd[1509]: 2025-08-13 09:04:09.586 [WARNING][5258] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wpdgx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"58f120ab-dea2-40e9-9eb7-22d71f6b8425", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 9, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cz57v.gb1.brightbox.com", ContainerID:"1fad944df7dd5af59af9a60a15321833039d994fe1c4dc80d48d1e1ad1f444c1", Pod:"coredns-668d6bf9bc-wpdgx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib5f22b124a8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 09:04:09.713203 containerd[1509]: 
2025-08-13 09:04:09.587 [INFO][5258] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" Aug 13 09:04:09.713203 containerd[1509]: 2025-08-13 09:04:09.587 [INFO][5258] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" iface="eth0" netns="" Aug 13 09:04:09.713203 containerd[1509]: 2025-08-13 09:04:09.587 [INFO][5258] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" Aug 13 09:04:09.713203 containerd[1509]: 2025-08-13 09:04:09.587 [INFO][5258] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" Aug 13 09:04:09.713203 containerd[1509]: 2025-08-13 09:04:09.645 [INFO][5274] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" HandleID="k8s-pod-network.f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" Workload="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wpdgx-eth0" Aug 13 09:04:09.713203 containerd[1509]: 2025-08-13 09:04:09.645 [INFO][5274] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 09:04:09.713203 containerd[1509]: 2025-08-13 09:04:09.659 [INFO][5274] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 09:04:09.713203 containerd[1509]: 2025-08-13 09:04:09.682 [WARNING][5274] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" HandleID="k8s-pod-network.f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" Workload="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wpdgx-eth0" Aug 13 09:04:09.713203 containerd[1509]: 2025-08-13 09:04:09.682 [INFO][5274] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" HandleID="k8s-pod-network.f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" Workload="srv--cz57v.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wpdgx-eth0" Aug 13 09:04:09.713203 containerd[1509]: 2025-08-13 09:04:09.694 [INFO][5274] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 09:04:09.713203 containerd[1509]: 2025-08-13 09:04:09.705 [INFO][5258] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267" Aug 13 09:04:09.713203 containerd[1509]: time="2025-08-13T09:04:09.712971998Z" level=info msg="TearDown network for sandbox \"f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267\" successfully" Aug 13 09:04:09.717107 containerd[1509]: 2025-08-13 09:04:09.520 [INFO][5236] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--cz57v.gb1.brightbox.com-k8s-calico--kube--controllers--588668cdb7--gwwpt-eth0 calico-kube-controllers-588668cdb7- calico-system 4fef4dc0-3721-4011-811d-4e894024c3f2 1038 0 2025-08-13 09:03:31 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:588668cdb7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-cz57v.gb1.brightbox.com calico-kube-controllers-588668cdb7-gwwpt eth0 calico-kube-controllers [] [] [kns.calico-system 
ksa.calico-system.calico-kube-controllers] cali0e2adb4b95d [] [] }} ContainerID="eb5b5600cd8051c55a62e13b196b49ed37cfa2174c7f3e4559b2d667660fddc3" Namespace="calico-system" Pod="calico-kube-controllers-588668cdb7-gwwpt" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-calico--kube--controllers--588668cdb7--gwwpt-" Aug 13 09:04:09.717107 containerd[1509]: 2025-08-13 09:04:09.520 [INFO][5236] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb5b5600cd8051c55a62e13b196b49ed37cfa2174c7f3e4559b2d667660fddc3" Namespace="calico-system" Pod="calico-kube-controllers-588668cdb7-gwwpt" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-calico--kube--controllers--588668cdb7--gwwpt-eth0" Aug 13 09:04:09.717107 containerd[1509]: 2025-08-13 09:04:09.578 [INFO][5264] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb5b5600cd8051c55a62e13b196b49ed37cfa2174c7f3e4559b2d667660fddc3" HandleID="k8s-pod-network.eb5b5600cd8051c55a62e13b196b49ed37cfa2174c7f3e4559b2d667660fddc3" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--kube--controllers--588668cdb7--gwwpt-eth0" Aug 13 09:04:09.717107 containerd[1509]: 2025-08-13 09:04:09.579 [INFO][5264] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eb5b5600cd8051c55a62e13b196b49ed37cfa2174c7f3e4559b2d667660fddc3" HandleID="k8s-pod-network.eb5b5600cd8051c55a62e13b196b49ed37cfa2174c7f3e4559b2d667660fddc3" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--kube--controllers--588668cdb7--gwwpt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-cz57v.gb1.brightbox.com", "pod":"calico-kube-controllers-588668cdb7-gwwpt", "timestamp":"2025-08-13 09:04:09.578778097 +0000 UTC"}, Hostname:"srv-cz57v.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Aug 13 09:04:09.717107 containerd[1509]: 2025-08-13 09:04:09.579 [INFO][5264] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 09:04:09.717107 containerd[1509]: 2025-08-13 09:04:09.579 [INFO][5264] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 09:04:09.717107 containerd[1509]: 2025-08-13 09:04:09.579 [INFO][5264] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-cz57v.gb1.brightbox.com' Aug 13 09:04:09.717107 containerd[1509]: 2025-08-13 09:04:09.595 [INFO][5264] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eb5b5600cd8051c55a62e13b196b49ed37cfa2174c7f3e4559b2d667660fddc3" host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:09.717107 containerd[1509]: 2025-08-13 09:04:09.612 [INFO][5264] ipam/ipam.go 394: Looking up existing affinities for host host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:09.717107 containerd[1509]: 2025-08-13 09:04:09.624 [INFO][5264] ipam/ipam.go 511: Trying affinity for 192.168.114.192/26 host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:09.717107 containerd[1509]: 2025-08-13 09:04:09.629 [INFO][5264] ipam/ipam.go 158: Attempting to load block cidr=192.168.114.192/26 host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:09.717107 containerd[1509]: 2025-08-13 09:04:09.637 [INFO][5264] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.114.192/26 host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:09.717107 containerd[1509]: 2025-08-13 09:04:09.637 [INFO][5264] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.114.192/26 handle="k8s-pod-network.eb5b5600cd8051c55a62e13b196b49ed37cfa2174c7f3e4559b2d667660fddc3" host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:09.717107 containerd[1509]: 2025-08-13 09:04:09.639 [INFO][5264] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.eb5b5600cd8051c55a62e13b196b49ed37cfa2174c7f3e4559b2d667660fddc3 Aug 13 09:04:09.717107 containerd[1509]: 2025-08-13 
09:04:09.648 [INFO][5264] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.114.192/26 handle="k8s-pod-network.eb5b5600cd8051c55a62e13b196b49ed37cfa2174c7f3e4559b2d667660fddc3" host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:09.717107 containerd[1509]: 2025-08-13 09:04:09.657 [INFO][5264] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.114.200/26] block=192.168.114.192/26 handle="k8s-pod-network.eb5b5600cd8051c55a62e13b196b49ed37cfa2174c7f3e4559b2d667660fddc3" host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:09.717107 containerd[1509]: 2025-08-13 09:04:09.657 [INFO][5264] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.114.200/26] handle="k8s-pod-network.eb5b5600cd8051c55a62e13b196b49ed37cfa2174c7f3e4559b2d667660fddc3" host="srv-cz57v.gb1.brightbox.com" Aug 13 09:04:09.717107 containerd[1509]: 2025-08-13 09:04:09.657 [INFO][5264] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 09:04:09.717107 containerd[1509]: 2025-08-13 09:04:09.657 [INFO][5264] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.200/26] IPv6=[] ContainerID="eb5b5600cd8051c55a62e13b196b49ed37cfa2174c7f3e4559b2d667660fddc3" HandleID="k8s-pod-network.eb5b5600cd8051c55a62e13b196b49ed37cfa2174c7f3e4559b2d667660fddc3" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--kube--controllers--588668cdb7--gwwpt-eth0" Aug 13 09:04:09.721586 containerd[1509]: 2025-08-13 09:04:09.663 [INFO][5236] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb5b5600cd8051c55a62e13b196b49ed37cfa2174c7f3e4559b2d667660fddc3" Namespace="calico-system" Pod="calico-kube-controllers-588668cdb7-gwwpt" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-calico--kube--controllers--588668cdb7--gwwpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cz57v.gb1.brightbox.com-k8s-calico--kube--controllers--588668cdb7--gwwpt-eth0", 
GenerateName:"calico-kube-controllers-588668cdb7-", Namespace:"calico-system", SelfLink:"", UID:"4fef4dc0-3721-4011-811d-4e894024c3f2", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 9, 3, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"588668cdb7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cz57v.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-588668cdb7-gwwpt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0e2adb4b95d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 09:04:09.721586 containerd[1509]: 2025-08-13 09:04:09.664 [INFO][5236] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.200/32] ContainerID="eb5b5600cd8051c55a62e13b196b49ed37cfa2174c7f3e4559b2d667660fddc3" Namespace="calico-system" Pod="calico-kube-controllers-588668cdb7-gwwpt" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-calico--kube--controllers--588668cdb7--gwwpt-eth0" Aug 13 09:04:09.721586 containerd[1509]: 2025-08-13 09:04:09.664 [INFO][5236] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e2adb4b95d ContainerID="eb5b5600cd8051c55a62e13b196b49ed37cfa2174c7f3e4559b2d667660fddc3" Namespace="calico-system" 
Pod="calico-kube-controllers-588668cdb7-gwwpt" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-calico--kube--controllers--588668cdb7--gwwpt-eth0" Aug 13 09:04:09.721586 containerd[1509]: 2025-08-13 09:04:09.676 [INFO][5236] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb5b5600cd8051c55a62e13b196b49ed37cfa2174c7f3e4559b2d667660fddc3" Namespace="calico-system" Pod="calico-kube-controllers-588668cdb7-gwwpt" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-calico--kube--controllers--588668cdb7--gwwpt-eth0" Aug 13 09:04:09.721586 containerd[1509]: 2025-08-13 09:04:09.677 [INFO][5236] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eb5b5600cd8051c55a62e13b196b49ed37cfa2174c7f3e4559b2d667660fddc3" Namespace="calico-system" Pod="calico-kube-controllers-588668cdb7-gwwpt" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-calico--kube--controllers--588668cdb7--gwwpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cz57v.gb1.brightbox.com-k8s-calico--kube--controllers--588668cdb7--gwwpt-eth0", GenerateName:"calico-kube-controllers-588668cdb7-", Namespace:"calico-system", SelfLink:"", UID:"4fef4dc0-3721-4011-811d-4e894024c3f2", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 9, 3, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"588668cdb7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"srv-cz57v.gb1.brightbox.com", ContainerID:"eb5b5600cd8051c55a62e13b196b49ed37cfa2174c7f3e4559b2d667660fddc3", Pod:"calico-kube-controllers-588668cdb7-gwwpt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0e2adb4b95d", MAC:"6e:cc:83:a4:ec:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 09:04:09.721586 containerd[1509]: 2025-08-13 09:04:09.701 [INFO][5236] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eb5b5600cd8051c55a62e13b196b49ed37cfa2174c7f3e4559b2d667660fddc3" Namespace="calico-system" Pod="calico-kube-controllers-588668cdb7-gwwpt" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-calico--kube--controllers--588668cdb7--gwwpt-eth0" Aug 13 09:04:09.725119 containerd[1509]: time="2025-08-13T09:04:09.725032481Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 09:04:09.725207 containerd[1509]: time="2025-08-13T09:04:09.725138195Z" level=info msg="RemovePodSandbox \"f91ca678a82743c08427cfb618f9619c2b1c04b7fcf04df7369201393594e267\" returns successfully" Aug 13 09:04:09.726972 containerd[1509]: time="2025-08-13T09:04:09.726935493Z" level=info msg="StopPodSandbox for \"84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff\"" Aug 13 09:04:09.767100 containerd[1509]: time="2025-08-13T09:04:09.766630219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 09:04:09.767100 containerd[1509]: time="2025-08-13T09:04:09.766772040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 09:04:09.767100 containerd[1509]: time="2025-08-13T09:04:09.766806225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 09:04:09.770049 containerd[1509]: time="2025-08-13T09:04:09.769416903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 09:04:09.809290 systemd[1]: Started cri-containerd-eb5b5600cd8051c55a62e13b196b49ed37cfa2174c7f3e4559b2d667660fddc3.scope - libcontainer container eb5b5600cd8051c55a62e13b196b49ed37cfa2174c7f3e4559b2d667660fddc3. Aug 13 09:04:09.957483 containerd[1509]: time="2025-08-13T09:04:09.955210538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-588668cdb7-gwwpt,Uid:4fef4dc0-3721-4011-811d-4e894024c3f2,Namespace:calico-system,Attempt:1,} returns sandbox id \"eb5b5600cd8051c55a62e13b196b49ed37cfa2174c7f3e4559b2d667660fddc3\"" Aug 13 09:04:09.961546 containerd[1509]: 2025-08-13 09:04:09.835 [WARNING][5307] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-whisker--6c4cd9987c--7l522-eth0" Aug 13 09:04:09.961546 containerd[1509]: 2025-08-13 09:04:09.835 [INFO][5307] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" Aug 13 09:04:09.961546 containerd[1509]: 2025-08-13 09:04:09.835 [INFO][5307] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" iface="eth0" netns="" Aug 13 09:04:09.961546 containerd[1509]: 2025-08-13 09:04:09.835 [INFO][5307] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" Aug 13 09:04:09.961546 containerd[1509]: 2025-08-13 09:04:09.835 [INFO][5307] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" Aug 13 09:04:09.961546 containerd[1509]: 2025-08-13 09:04:09.922 [INFO][5340] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" HandleID="k8s-pod-network.84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" Workload="srv--cz57v.gb1.brightbox.com-k8s-whisker--6c4cd9987c--7l522-eth0" Aug 13 09:04:09.961546 containerd[1509]: 2025-08-13 09:04:09.922 [INFO][5340] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 09:04:09.961546 containerd[1509]: 2025-08-13 09:04:09.922 [INFO][5340] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 09:04:09.961546 containerd[1509]: 2025-08-13 09:04:09.948 [WARNING][5340] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" HandleID="k8s-pod-network.84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" Workload="srv--cz57v.gb1.brightbox.com-k8s-whisker--6c4cd9987c--7l522-eth0" Aug 13 09:04:09.961546 containerd[1509]: 2025-08-13 09:04:09.948 [INFO][5340] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" HandleID="k8s-pod-network.84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" Workload="srv--cz57v.gb1.brightbox.com-k8s-whisker--6c4cd9987c--7l522-eth0" Aug 13 09:04:09.961546 containerd[1509]: 2025-08-13 09:04:09.950 [INFO][5340] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 09:04:09.961546 containerd[1509]: 2025-08-13 09:04:09.958 [INFO][5307] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" Aug 13 09:04:09.962468 containerd[1509]: time="2025-08-13T09:04:09.961589731Z" level=info msg="TearDown network for sandbox \"84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff\" successfully" Aug 13 09:04:09.962468 containerd[1509]: time="2025-08-13T09:04:09.961613040Z" level=info msg="StopPodSandbox for \"84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff\" returns successfully" Aug 13 09:04:09.962468 containerd[1509]: time="2025-08-13T09:04:09.962034251Z" level=info msg="RemovePodSandbox for \"84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff\"" Aug 13 09:04:09.962468 containerd[1509]: time="2025-08-13T09:04:09.962065451Z" level=info msg="Forcibly stopping sandbox \"84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff\"" Aug 13 09:04:10.002644 sshd[4204]: PAM: Permission denied for root from 121.127.231.238 Aug 13 09:04:10.114768 containerd[1509]: 2025-08-13 09:04:10.035 [WARNING][5378] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in 
the datastore, moving forward with the clean up ContainerID="84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" WorkloadEndpoint="srv--cz57v.gb1.brightbox.com-k8s-whisker--6c4cd9987c--7l522-eth0" Aug 13 09:04:10.114768 containerd[1509]: 2025-08-13 09:04:10.035 [INFO][5378] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" Aug 13 09:04:10.114768 containerd[1509]: 2025-08-13 09:04:10.035 [INFO][5378] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" iface="eth0" netns="" Aug 13 09:04:10.114768 containerd[1509]: 2025-08-13 09:04:10.035 [INFO][5378] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" Aug 13 09:04:10.114768 containerd[1509]: 2025-08-13 09:04:10.035 [INFO][5378] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" Aug 13 09:04:10.114768 containerd[1509]: 2025-08-13 09:04:10.095 [INFO][5389] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" HandleID="k8s-pod-network.84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" Workload="srv--cz57v.gb1.brightbox.com-k8s-whisker--6c4cd9987c--7l522-eth0" Aug 13 09:04:10.114768 containerd[1509]: 2025-08-13 09:04:10.096 [INFO][5389] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 09:04:10.114768 containerd[1509]: 2025-08-13 09:04:10.096 [INFO][5389] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 09:04:10.114768 containerd[1509]: 2025-08-13 09:04:10.106 [WARNING][5389] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" HandleID="k8s-pod-network.84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" Workload="srv--cz57v.gb1.brightbox.com-k8s-whisker--6c4cd9987c--7l522-eth0" Aug 13 09:04:10.114768 containerd[1509]: 2025-08-13 09:04:10.106 [INFO][5389] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" HandleID="k8s-pod-network.84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" Workload="srv--cz57v.gb1.brightbox.com-k8s-whisker--6c4cd9987c--7l522-eth0" Aug 13 09:04:10.114768 containerd[1509]: 2025-08-13 09:04:10.108 [INFO][5389] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 09:04:10.114768 containerd[1509]: 2025-08-13 09:04:10.110 [INFO][5378] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff" Aug 13 09:04:10.115864 containerd[1509]: time="2025-08-13T09:04:10.114822200Z" level=info msg="TearDown network for sandbox \"84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff\" successfully" Aug 13 09:04:10.119437 containerd[1509]: time="2025-08-13T09:04:10.119386160Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 09:04:10.119530 containerd[1509]: time="2025-08-13T09:04:10.119454869Z" level=info msg="RemovePodSandbox \"84ae8e1bce0a76f1f43d119f11d3d0892f67d493d598c303645b6d25db5491ff\" returns successfully" Aug 13 09:04:10.209471 sshd[4204]: Received disconnect from 121.127.231.238 port 17982:11: [preauth] Aug 13 09:04:10.209471 sshd[4204]: Disconnected from authenticating user root 121.127.231.238 port 17982 [preauth] Aug 13 09:04:10.221910 systemd[1]: sshd@11-10.230.18.154:22-121.127.231.238:17982.service: Deactivated successfully. Aug 13 09:04:10.765612 containerd[1509]: time="2025-08-13T09:04:10.765296087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 09:04:10.767185 containerd[1509]: time="2025-08-13T09:04:10.767085217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 09:04:10.771292 containerd[1509]: time="2025-08-13T09:04:10.771185754Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 09:04:10.775769 containerd[1509]: time="2025-08-13T09:04:10.775680275Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 09:04:10.777186 containerd[1509]: time="2025-08-13T09:04:10.776784626Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 3.173481246s" Aug 13 09:04:10.777186 containerd[1509]: time="2025-08-13T09:04:10.776844523Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 09:04:10.779398 containerd[1509]: time="2025-08-13T09:04:10.779321774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 09:04:10.783430 containerd[1509]: time="2025-08-13T09:04:10.782609854Z" level=info msg="CreateContainer within sandbox \"e7f9fe99ba8a34beeb245cd141a04b916aee35da76e81c386c277e31fba90ee9\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 09:04:10.813028 containerd[1509]: time="2025-08-13T09:04:10.812868276Z" level=info msg="CreateContainer within sandbox \"e7f9fe99ba8a34beeb245cd141a04b916aee35da76e81c386c277e31fba90ee9\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2cab211bb40b8bcf884f7e7c426cf1259757dbfcfa6d6d78ae259b13b0ed8e0f\"" Aug 13 09:04:10.815183 containerd[1509]: time="2025-08-13T09:04:10.814693912Z" level=info msg="StartContainer for \"2cab211bb40b8bcf884f7e7c426cf1259757dbfcfa6d6d78ae259b13b0ed8e0f\"" Aug 13 09:04:10.871448 systemd[1]: Started cri-containerd-2cab211bb40b8bcf884f7e7c426cf1259757dbfcfa6d6d78ae259b13b0ed8e0f.scope - libcontainer container 2cab211bb40b8bcf884f7e7c426cf1259757dbfcfa6d6d78ae259b13b0ed8e0f. 
Aug 13 09:04:10.935909 containerd[1509]: time="2025-08-13T09:04:10.935653272Z" level=info msg="StartContainer for \"2cab211bb40b8bcf884f7e7c426cf1259757dbfcfa6d6d78ae259b13b0ed8e0f\" returns successfully" Aug 13 09:04:11.505501 systemd-networkd[1428]: cali0e2adb4b95d: Gained IPv6LL Aug 13 09:04:15.678334 containerd[1509]: time="2025-08-13T09:04:15.678194144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 09:04:15.680451 containerd[1509]: time="2025-08-13T09:04:15.680378890Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Aug 13 09:04:15.681245 containerd[1509]: time="2025-08-13T09:04:15.681171926Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 09:04:15.684321 containerd[1509]: time="2025-08-13T09:04:15.684269353Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 09:04:15.686043 containerd[1509]: time="2025-08-13T09:04:15.685642586Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 4.905642896s" Aug 13 09:04:15.686043 containerd[1509]: time="2025-08-13T09:04:15.685701512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 09:04:15.687975 containerd[1509]: 
time="2025-08-13T09:04:15.687882704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 09:04:15.690600 containerd[1509]: time="2025-08-13T09:04:15.690443882Z" level=info msg="CreateContainer within sandbox \"a3cd6447829e40c03a9e69ef6acb4046862c03aa19db91e409cf6129e2e21210\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 09:04:15.713277 containerd[1509]: time="2025-08-13T09:04:15.713216948Z" level=info msg="CreateContainer within sandbox \"a3cd6447829e40c03a9e69ef6acb4046862c03aa19db91e409cf6129e2e21210\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"21cf2abde7db59364666095d71b065e5b6d28ff71408642a86a61c1ac0798d30\"" Aug 13 09:04:15.715964 containerd[1509]: time="2025-08-13T09:04:15.715789229Z" level=info msg="StartContainer for \"21cf2abde7db59364666095d71b065e5b6d28ff71408642a86a61c1ac0798d30\"" Aug 13 09:04:15.835351 systemd[1]: Started cri-containerd-21cf2abde7db59364666095d71b065e5b6d28ff71408642a86a61c1ac0798d30.scope - libcontainer container 21cf2abde7db59364666095d71b065e5b6d28ff71408642a86a61c1ac0798d30. 
Aug 13 09:04:15.902743 containerd[1509]: time="2025-08-13T09:04:15.902500618Z" level=info msg="StartContainer for \"21cf2abde7db59364666095d71b065e5b6d28ff71408642a86a61c1ac0798d30\" returns successfully" Aug 13 09:04:15.941086 kubelet[2681]: I0813 09:04:15.940802 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5c795f975d-rzm28" podStartSLOduration=36.964079996 podStartE2EDuration="49.940713323s" podCreationTimestamp="2025-08-13 09:03:26 +0000 UTC" firstStartedPulling="2025-08-13 09:04:02.710707202 +0000 UTC m=+55.700722594" lastFinishedPulling="2025-08-13 09:04:15.687340516 +0000 UTC m=+68.677355921" observedRunningTime="2025-08-13 09:04:15.938641041 +0000 UTC m=+68.928656444" watchObservedRunningTime="2025-08-13 09:04:15.940713323 +0000 UTC m=+68.930728727" Aug 13 09:04:17.595224 containerd[1509]: time="2025-08-13T09:04:17.594844226Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 09:04:17.597419 containerd[1509]: time="2025-08-13T09:04:17.596924716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Aug 13 09:04:17.599108 containerd[1509]: time="2025-08-13T09:04:17.597761124Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 09:04:17.603394 containerd[1509]: time="2025-08-13T09:04:17.603353334Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 09:04:17.606634 containerd[1509]: time="2025-08-13T09:04:17.606588710Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id 
\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.91865778s" Aug 13 09:04:17.606732 containerd[1509]: time="2025-08-13T09:04:17.606645025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 09:04:17.609274 containerd[1509]: time="2025-08-13T09:04:17.609229891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 09:04:17.610592 containerd[1509]: time="2025-08-13T09:04:17.610537855Z" level=info msg="CreateContainer within sandbox \"ae42512f600a2bd81dc5e6e74da7f5ab47fe62ea9f04c089f4efbc2a6f7c1091\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 09:04:17.637055 containerd[1509]: time="2025-08-13T09:04:17.636811863Z" level=info msg="CreateContainer within sandbox \"ae42512f600a2bd81dc5e6e74da7f5ab47fe62ea9f04c089f4efbc2a6f7c1091\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"aefc51b23b0be15d5e364953e66b8a94a0c9698fd8270f2513167c73b366eacf\"" Aug 13 09:04:17.640972 containerd[1509]: time="2025-08-13T09:04:17.640934142Z" level=info msg="StartContainer for \"aefc51b23b0be15d5e364953e66b8a94a0c9698fd8270f2513167c73b366eacf\"" Aug 13 09:04:17.723304 systemd[1]: Started cri-containerd-aefc51b23b0be15d5e364953e66b8a94a0c9698fd8270f2513167c73b366eacf.scope - libcontainer container aefc51b23b0be15d5e364953e66b8a94a0c9698fd8270f2513167c73b366eacf. 
Aug 13 09:04:17.801455 containerd[1509]: time="2025-08-13T09:04:17.801389942Z" level=info msg="StartContainer for \"aefc51b23b0be15d5e364953e66b8a94a0c9698fd8270f2513167c73b366eacf\" returns successfully" Aug 13 09:04:18.610273 containerd[1509]: time="2025-08-13T09:04:18.610206871Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 09:04:18.611430 containerd[1509]: time="2025-08-13T09:04:18.611143244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 09:04:18.614148 containerd[1509]: time="2025-08-13T09:04:18.614102011Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 1.004802625s" Aug 13 09:04:18.614869 containerd[1509]: time="2025-08-13T09:04:18.614415302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 09:04:18.615965 containerd[1509]: time="2025-08-13T09:04:18.615916852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 09:04:18.635360 containerd[1509]: time="2025-08-13T09:04:18.635179701Z" level=info msg="CreateContainer within sandbox \"197efb43b8526415e059cd00ed053d10c16d7668d8b3e2b42c49108b864ade89\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 09:04:18.676214 containerd[1509]: time="2025-08-13T09:04:18.675222248Z" level=info msg="CreateContainer within sandbox \"197efb43b8526415e059cd00ed053d10c16d7668d8b3e2b42c49108b864ade89\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} 
returns container id \"0c71344b57747650239882cd8ec2544e42656ec82363ab32bb090efc9cf261e8\"" Aug 13 09:04:18.678017 containerd[1509]: time="2025-08-13T09:04:18.677205595Z" level=info msg="StartContainer for \"0c71344b57747650239882cd8ec2544e42656ec82363ab32bb090efc9cf261e8\"" Aug 13 09:04:18.737031 systemd[1]: run-containerd-runc-k8s.io-0c71344b57747650239882cd8ec2544e42656ec82363ab32bb090efc9cf261e8-runc.bkjs8x.mount: Deactivated successfully. Aug 13 09:04:18.746322 systemd[1]: Started cri-containerd-0c71344b57747650239882cd8ec2544e42656ec82363ab32bb090efc9cf261e8.scope - libcontainer container 0c71344b57747650239882cd8ec2544e42656ec82363ab32bb090efc9cf261e8. Aug 13 09:04:18.811989 containerd[1509]: time="2025-08-13T09:04:18.811846974Z" level=info msg="StartContainer for \"0c71344b57747650239882cd8ec2544e42656ec82363ab32bb090efc9cf261e8\" returns successfully" Aug 13 09:04:18.954559 kubelet[2681]: I0813 09:04:18.954311 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5c795f975d-z5l2b" podStartSLOduration=38.103792354 podStartE2EDuration="52.954234245s" podCreationTimestamp="2025-08-13 09:03:26 +0000 UTC" firstStartedPulling="2025-08-13 09:04:03.765266012 +0000 UTC m=+56.755281404" lastFinishedPulling="2025-08-13 09:04:18.615707904 +0000 UTC m=+71.605723295" observedRunningTime="2025-08-13 09:04:18.951514162 +0000 UTC m=+71.941529571" watchObservedRunningTime="2025-08-13 09:04:18.954234245 +0000 UTC m=+71.944249649" Aug 13 09:04:19.937037 kubelet[2681]: I0813 09:04:19.936359 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 09:04:26.789188 containerd[1509]: time="2025-08-13T09:04:26.788648862Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 09:04:26.801892 containerd[1509]: time="2025-08-13T09:04:26.791379677Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 09:04:26.801892 containerd[1509]: time="2025-08-13T09:04:26.796119406Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 09:04:26.807441 containerd[1509]: time="2025-08-13T09:04:26.807394034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 09:04:26.809684 containerd[1509]: time="2025-08-13T09:04:26.809531900Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 8.193453366s" Aug 13 09:04:26.809955 containerd[1509]: time="2025-08-13T09:04:26.809801517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 13 09:04:26.850753 containerd[1509]: time="2025-08-13T09:04:26.850327293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 09:04:27.056570 containerd[1509]: time="2025-08-13T09:04:27.056255226Z" level=info msg="CreateContainer within sandbox \"eb5b5600cd8051c55a62e13b196b49ed37cfa2174c7f3e4559b2d667660fddc3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 09:04:27.121655 containerd[1509]: time="2025-08-13T09:04:27.121461903Z" level=info msg="CreateContainer within sandbox \"eb5b5600cd8051c55a62e13b196b49ed37cfa2174c7f3e4559b2d667660fddc3\" 
for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"482cdaf8d123152ce0d3d3f23c5b43750da9a106bd4027f4524c02870aa3aab4\"" Aug 13 09:04:27.151097 containerd[1509]: time="2025-08-13T09:04:27.149335683Z" level=info msg="StartContainer for \"482cdaf8d123152ce0d3d3f23c5b43750da9a106bd4027f4524c02870aa3aab4\"" Aug 13 09:04:27.372049 systemd[1]: Started cri-containerd-482cdaf8d123152ce0d3d3f23c5b43750da9a106bd4027f4524c02870aa3aab4.scope - libcontainer container 482cdaf8d123152ce0d3d3f23c5b43750da9a106bd4027f4524c02870aa3aab4. Aug 13 09:04:27.522692 containerd[1509]: time="2025-08-13T09:04:27.521056565Z" level=info msg="StartContainer for \"482cdaf8d123152ce0d3d3f23c5b43750da9a106bd4027f4524c02870aa3aab4\" returns successfully" Aug 13 09:04:28.676103 kubelet[2681]: I0813 09:04:28.666297 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-588668cdb7-gwwpt" podStartSLOduration=40.781494235 podStartE2EDuration="57.661946288s" podCreationTimestamp="2025-08-13 09:03:31 +0000 UTC" firstStartedPulling="2025-08-13 09:04:09.959608377 +0000 UTC m=+62.949623769" lastFinishedPulling="2025-08-13 09:04:26.840060426 +0000 UTC m=+79.830075822" observedRunningTime="2025-08-13 09:04:28.374494949 +0000 UTC m=+81.364510353" watchObservedRunningTime="2025-08-13 09:04:28.661946288 +0000 UTC m=+81.651961695" Aug 13 09:04:29.410105 containerd[1509]: time="2025-08-13T09:04:29.408745125Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 09:04:29.411291 containerd[1509]: time="2025-08-13T09:04:29.410040070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 09:04:29.411291 containerd[1509]: time="2025-08-13T09:04:29.410873692Z" level=info msg="ImageCreate event 
name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 09:04:29.414196 containerd[1509]: time="2025-08-13T09:04:29.414160210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 09:04:29.415726 containerd[1509]: time="2025-08-13T09:04:29.415331857Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.564931214s" Aug 13 09:04:29.415726 containerd[1509]: time="2025-08-13T09:04:29.415387336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 09:04:29.418147 containerd[1509]: time="2025-08-13T09:04:29.418020831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 09:04:29.426329 containerd[1509]: time="2025-08-13T09:04:29.426279594Z" level=info msg="CreateContainer within sandbox \"e7f9fe99ba8a34beeb245cd141a04b916aee35da76e81c386c277e31fba90ee9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 09:04:29.458435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2126848218.mount: Deactivated successfully. 
Aug 13 09:04:29.461416 containerd[1509]: time="2025-08-13T09:04:29.459368658Z" level=info msg="CreateContainer within sandbox \"e7f9fe99ba8a34beeb245cd141a04b916aee35da76e81c386c277e31fba90ee9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"66150a946d945fd8df58fbbc4c4a81de36e68d821a84b804981e877544b1a200\""
Aug 13 09:04:29.465248 containerd[1509]: time="2025-08-13T09:04:29.464293828Z" level=info msg="StartContainer for \"66150a946d945fd8df58fbbc4c4a81de36e68d821a84b804981e877544b1a200\""
Aug 13 09:04:29.563692 systemd[1]: Started cri-containerd-66150a946d945fd8df58fbbc4c4a81de36e68d821a84b804981e877544b1a200.scope - libcontainer container 66150a946d945fd8df58fbbc4c4a81de36e68d821a84b804981e877544b1a200.
Aug 13 09:04:29.625608 containerd[1509]: time="2025-08-13T09:04:29.625510577Z" level=info msg="StartContainer for \"66150a946d945fd8df58fbbc4c4a81de36e68d821a84b804981e877544b1a200\" returns successfully"
Aug 13 09:04:30.368880 kubelet[2681]: I0813 09:04:30.367908 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-g96xv" podStartSLOduration=32.371342381 podStartE2EDuration="59.367162857s" podCreationTimestamp="2025-08-13 09:03:31 +0000 UTC" firstStartedPulling="2025-08-13 09:04:02.421918728 +0000 UTC m=+55.411934120" lastFinishedPulling="2025-08-13 09:04:29.4177392 +0000 UTC m=+82.407754596" observedRunningTime="2025-08-13 09:04:30.358834863 +0000 UTC m=+83.348850274" watchObservedRunningTime="2025-08-13 09:04:30.367162857 +0000 UTC m=+83.357178254"
Aug 13 09:04:30.515880 kubelet[2681]: I0813 09:04:30.509141 2681 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Aug 13 09:04:30.517286 kubelet[2681]: I0813 09:04:30.516775 2681 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Aug 13 09:04:32.451952 update_engine[1493]: I20250813 09:04:32.451634 1493 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Aug 13 09:04:32.457166 update_engine[1493]: I20250813 09:04:32.453691 1493 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Aug 13 09:04:32.457166 update_engine[1493]: I20250813 09:04:32.456560 1493 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Aug 13 09:04:32.459748 update_engine[1493]: I20250813 09:04:32.459584 1493 omaha_request_params.cc:62] Current group set to lts
Aug 13 09:04:32.461122 update_engine[1493]: I20250813 09:04:32.460983 1493 update_attempter.cc:499] Already updated boot flags. Skipping.
Aug 13 09:04:32.461122 update_engine[1493]: I20250813 09:04:32.461038 1493 update_attempter.cc:643] Scheduling an action processor start.
Aug 13 09:04:32.462561 update_engine[1493]: I20250813 09:04:32.461230 1493 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Aug 13 09:04:32.462561 update_engine[1493]: I20250813 09:04:32.461586 1493 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Aug 13 09:04:32.462561 update_engine[1493]: I20250813 09:04:32.461697 1493 omaha_request_action.cc:271] Posting an Omaha request to disabled
Aug 13 09:04:32.462561 update_engine[1493]: I20250813 09:04:32.461726 1493 omaha_request_action.cc:272] Request:
Aug 13 09:04:32.462561 update_engine[1493]:
Aug 13 09:04:32.462561 update_engine[1493]:
Aug 13 09:04:32.462561 update_engine[1493]:
Aug 13 09:04:32.462561 update_engine[1493]:
Aug 13 09:04:32.462561 update_engine[1493]:
Aug 13 09:04:32.462561 update_engine[1493]:
Aug 13 09:04:32.462561 update_engine[1493]:
Aug 13 09:04:32.462561 update_engine[1493]:
Aug 13 09:04:32.462561 update_engine[1493]: I20250813 09:04:32.461820 1493 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Aug 13 09:04:32.493499 update_engine[1493]: I20250813 09:04:32.492733 1493 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Aug 13 09:04:32.493499 update_engine[1493]: I20250813 09:04:32.493345 1493 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Aug 13 09:04:32.514237 update_engine[1493]: E20250813 09:04:32.513952 1493 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Aug 13 09:04:32.514237 update_engine[1493]: I20250813 09:04:32.514170 1493 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Aug 13 09:04:32.523408 locksmithd[1512]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Aug 13 09:04:35.222974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3658367880.mount: Deactivated successfully.
Aug 13 09:04:35.335582 containerd[1509]: time="2025-08-13T09:04:35.335478877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:04:35.338547 containerd[1509]: time="2025-08-13T09:04:35.338200435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477"
Aug 13 09:04:35.343102 containerd[1509]: time="2025-08-13T09:04:35.340653484Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:04:35.387629 containerd[1509]: time="2025-08-13T09:04:35.387560846Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 09:04:35.389311 containerd[1509]: time="2025-08-13T09:04:35.389134979Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 5.971051674s"
Aug 13 09:04:35.389311 containerd[1509]: time="2025-08-13T09:04:35.389198864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\""
Aug 13 09:04:35.434596 containerd[1509]: time="2025-08-13T09:04:35.434540827Z" level=info msg="CreateContainer within sandbox \"ae42512f600a2bd81dc5e6e74da7f5ab47fe62ea9f04c089f4efbc2a6f7c1091\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Aug 13 09:04:35.474243 containerd[1509]: time="2025-08-13T09:04:35.473682062Z" level=info msg="CreateContainer within sandbox \"ae42512f600a2bd81dc5e6e74da7f5ab47fe62ea9f04c089f4efbc2a6f7c1091\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"87fde501157d7058e5cd72d33d4423c1dec37f8a31d01aa9f29c07b2d30d6f38\""
Aug 13 09:04:35.475334 containerd[1509]: time="2025-08-13T09:04:35.475234258Z" level=info msg="StartContainer for \"87fde501157d7058e5cd72d33d4423c1dec37f8a31d01aa9f29c07b2d30d6f38\""
Aug 13 09:04:35.713658 systemd[1]: Started cri-containerd-87fde501157d7058e5cd72d33d4423c1dec37f8a31d01aa9f29c07b2d30d6f38.scope - libcontainer container 87fde501157d7058e5cd72d33d4423c1dec37f8a31d01aa9f29c07b2d30d6f38.
Aug 13 09:04:35.927287 containerd[1509]: time="2025-08-13T09:04:35.925341026Z" level=info msg="StartContainer for \"87fde501157d7058e5cd72d33d4423c1dec37f8a31d01aa9f29c07b2d30d6f38\" returns successfully"
Aug 13 09:04:36.723690 kubelet[2681]: I0813 09:04:36.723516 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6f659df8cf-rs4mw" podStartSLOduration=4.479303109 podStartE2EDuration="36.714457784s" podCreationTimestamp="2025-08-13 09:04:00 +0000 UTC" firstStartedPulling="2025-08-13 09:04:03.16768321 +0000 UTC m=+56.157698607" lastFinishedPulling="2025-08-13 09:04:35.402837885 +0000 UTC m=+88.392853282" observedRunningTime="2025-08-13 09:04:36.695409006 +0000 UTC m=+89.685424412" watchObservedRunningTime="2025-08-13 09:04:36.714457784 +0000 UTC m=+89.704473181"
Aug 13 09:04:41.017731 systemd[1]: Started sshd@12-10.230.18.154:22-139.178.68.195:45626.service - OpenSSH per-connection server daemon (139.178.68.195:45626).
Aug 13 09:04:42.067784 sshd[5821]: Accepted publickey for core from 139.178.68.195 port 45626 ssh2: RSA SHA256:YySM3kCv0HYbBvOmLDWWldq/i0LyCPP8OWutKnE8JMs
Aug 13 09:04:42.073680 sshd[5821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 09:04:42.097266 systemd-logind[1492]: New session 12 of user core.
Aug 13 09:04:42.102224 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 13 09:04:42.379149 update_engine[1493]: I20250813 09:04:42.374190 1493 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Aug 13 09:04:42.379149 update_engine[1493]: I20250813 09:04:42.378225 1493 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Aug 13 09:04:42.379149 update_engine[1493]: I20250813 09:04:42.378734 1493 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Aug 13 09:04:42.380868 update_engine[1493]: E20250813 09:04:42.380724 1493 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Aug 13 09:04:42.380868 update_engine[1493]: I20250813 09:04:42.380815 1493 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Aug 13 09:04:43.573327 sshd[5821]: pam_unix(sshd:session): session closed for user core
Aug 13 09:04:43.585386 systemd-logind[1492]: Session 12 logged out. Waiting for processes to exit.
Aug 13 09:04:43.603720 systemd[1]: sshd@12-10.230.18.154:22-139.178.68.195:45626.service: Deactivated successfully.
Aug 13 09:04:43.609972 systemd[1]: session-12.scope: Deactivated successfully.
Aug 13 09:04:43.614867 systemd-logind[1492]: Removed session 12.
Aug 13 09:04:44.559183 kubelet[2681]: I0813 09:04:44.549034 2681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 09:04:48.757926 systemd[1]: Started sshd@13-10.230.18.154:22-139.178.68.195:45638.service - OpenSSH per-connection server daemon (139.178.68.195:45638).
Aug 13 09:04:49.785907 sshd[5847]: Accepted publickey for core from 139.178.68.195 port 45638 ssh2: RSA SHA256:YySM3kCv0HYbBvOmLDWWldq/i0LyCPP8OWutKnE8JMs
Aug 13 09:04:49.790707 sshd[5847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 09:04:49.805368 systemd-logind[1492]: New session 13 of user core.
Aug 13 09:04:49.814357 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 13 09:04:51.243300 sshd[5847]: pam_unix(sshd:session): session closed for user core
Aug 13 09:04:51.255708 systemd[1]: sshd@13-10.230.18.154:22-139.178.68.195:45638.service: Deactivated successfully.
Aug 13 09:04:51.259952 systemd[1]: session-13.scope: Deactivated successfully.
Aug 13 09:04:51.261957 systemd-logind[1492]: Session 13 logged out. Waiting for processes to exit.
Aug 13 09:04:51.265065 systemd-logind[1492]: Removed session 13.
Aug 13 09:04:52.365149 update_engine[1493]: I20250813 09:04:52.364223 1493 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Aug 13 09:04:52.366516 update_engine[1493]: I20250813 09:04:52.365047 1493 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Aug 13 09:04:52.366627 update_engine[1493]: I20250813 09:04:52.366591 1493 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Aug 13 09:04:52.367097 update_engine[1493]: E20250813 09:04:52.367031 1493 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Aug 13 09:04:52.367175 update_engine[1493]: I20250813 09:04:52.367142 1493 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Aug 13 09:04:56.433514 systemd[1]: Started sshd@14-10.230.18.154:22-139.178.68.195:58950.service - OpenSSH per-connection server daemon (139.178.68.195:58950).
Aug 13 09:04:57.460237 sshd[5863]: Accepted publickey for core from 139.178.68.195 port 58950 ssh2: RSA SHA256:YySM3kCv0HYbBvOmLDWWldq/i0LyCPP8OWutKnE8JMs
Aug 13 09:04:57.463453 sshd[5863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 09:04:57.480598 systemd-logind[1492]: New session 14 of user core.
Aug 13 09:04:57.487876 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 13 09:04:58.324927 sshd[5863]: pam_unix(sshd:session): session closed for user core
Aug 13 09:04:58.334060 systemd[1]: sshd@14-10.230.18.154:22-139.178.68.195:58950.service: Deactivated successfully.
Aug 13 09:04:58.340867 systemd[1]: session-14.scope: Deactivated successfully.
Aug 13 09:04:58.346565 systemd-logind[1492]: Session 14 logged out. Waiting for processes to exit.
Aug 13 09:04:58.349601 systemd-logind[1492]: Removed session 14.
Aug 13 09:04:58.403012 systemd[1]: run-containerd-runc-k8s.io-482cdaf8d123152ce0d3d3f23c5b43750da9a106bd4027f4524c02870aa3aab4-runc.AAcn3P.mount: Deactivated successfully.
Aug 13 09:04:58.493278 systemd[1]: Started sshd@15-10.230.18.154:22-139.178.68.195:58958.service - OpenSSH per-connection server daemon (139.178.68.195:58958).
Aug 13 09:04:58.531746 systemd[1]: Started sshd@16-10.230.18.154:22-8.137.38.94:38058.service - OpenSSH per-connection server daemon (8.137.38.94:38058).
Aug 13 09:04:59.510654 sshd[5898]: Accepted publickey for core from 139.178.68.195 port 58958 ssh2: RSA SHA256:YySM3kCv0HYbBvOmLDWWldq/i0LyCPP8OWutKnE8JMs
Aug 13 09:04:59.513873 sshd[5898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 09:04:59.517115 sshd[5900]: Invalid user from 8.137.38.94 port 38058
Aug 13 09:04:59.525411 systemd-logind[1492]: New session 15 of user core.
Aug 13 09:04:59.535488 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 13 09:05:00.545651 sshd[5898]: pam_unix(sshd:session): session closed for user core
Aug 13 09:05:00.552521 systemd-logind[1492]: Session 15 logged out. Waiting for processes to exit.
Aug 13 09:05:00.554888 systemd[1]: sshd@15-10.230.18.154:22-139.178.68.195:58958.service: Deactivated successfully.
Aug 13 09:05:00.561840 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 09:05:00.576323 systemd-logind[1492]: Removed session 15.
Aug 13 09:05:00.703601 systemd[1]: Started sshd@17-10.230.18.154:22-139.178.68.195:35184.service - OpenSSH per-connection server daemon (139.178.68.195:35184).
Aug 13 09:05:01.711903 sshd[5929]: Accepted publickey for core from 139.178.68.195 port 35184 ssh2: RSA SHA256:YySM3kCv0HYbBvOmLDWWldq/i0LyCPP8OWutKnE8JMs
Aug 13 09:05:01.720506 sshd[5929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 09:05:01.759785 systemd-logind[1492]: New session 16 of user core.
Aug 13 09:05:01.771586 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 13 09:05:02.364826 update_engine[1493]: I20250813 09:05:02.364662 1493 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Aug 13 09:05:02.365560 update_engine[1493]: I20250813 09:05:02.365402 1493 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Aug 13 09:05:02.365928 update_engine[1493]: I20250813 09:05:02.365886 1493 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Aug 13 09:05:02.367133 update_engine[1493]: E20250813 09:05:02.367091 1493 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Aug 13 09:05:02.367249 update_engine[1493]: I20250813 09:05:02.367179 1493 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Aug 13 09:05:02.372603 update_engine[1493]: I20250813 09:05:02.372023 1493 omaha_request_action.cc:617] Omaha request response:
Aug 13 09:05:02.372775 update_engine[1493]: E20250813 09:05:02.372701 1493 omaha_request_action.cc:636] Omaha request network transfer failed.
Aug 13 09:05:02.430098 update_engine[1493]: I20250813 09:05:02.429613 1493 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Aug 13 09:05:02.430098 update_engine[1493]: I20250813 09:05:02.429678 1493 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Aug 13 09:05:02.430098 update_engine[1493]: I20250813 09:05:02.429692 1493 update_attempter.cc:306] Processing Done.
Aug 13 09:05:02.430098 update_engine[1493]: E20250813 09:05:02.429737 1493 update_attempter.cc:619] Update failed.
Aug 13 09:05:02.430098 update_engine[1493]: I20250813 09:05:02.429754 1493 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Aug 13 09:05:02.430098 update_engine[1493]: I20250813 09:05:02.429773 1493 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Aug 13 09:05:02.430098 update_engine[1493]: I20250813 09:05:02.429786 1493 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Aug 13 09:05:02.454385 update_engine[1493]: I20250813 09:05:02.454311 1493 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Aug 13 09:05:02.454586 update_engine[1493]: I20250813 09:05:02.454427 1493 omaha_request_action.cc:271] Posting an Omaha request to disabled
Aug 13 09:05:02.454586 update_engine[1493]: I20250813 09:05:02.454447 1493 omaha_request_action.cc:272] Request:
Aug 13 09:05:02.454586 update_engine[1493]:
Aug 13 09:05:02.454586 update_engine[1493]:
Aug 13 09:05:02.454586 update_engine[1493]:
Aug 13 09:05:02.454586 update_engine[1493]:
Aug 13 09:05:02.454586 update_engine[1493]:
Aug 13 09:05:02.454586 update_engine[1493]:
Aug 13 09:05:02.454586 update_engine[1493]: I20250813 09:05:02.454460 1493 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Aug 13 09:05:02.455676 update_engine[1493]: I20250813 09:05:02.454785 1493 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Aug 13 09:05:02.456727 update_engine[1493]: I20250813 09:05:02.456121 1493 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Aug 13 09:05:02.457219 update_engine[1493]: E20250813 09:05:02.457167 1493 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Aug 13 09:05:02.457308 update_engine[1493]: I20250813 09:05:02.457257 1493 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Aug 13 09:05:02.457308 update_engine[1493]: I20250813 09:05:02.457278 1493 omaha_request_action.cc:617] Omaha request response:
Aug 13 09:05:02.457308 update_engine[1493]: I20250813 09:05:02.457291 1493 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Aug 13 09:05:02.459231 update_engine[1493]: I20250813 09:05:02.457301 1493 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Aug 13 09:05:02.459231 update_engine[1493]: I20250813 09:05:02.457319 1493 update_attempter.cc:306] Processing Done.
Aug 13 09:05:02.459231 update_engine[1493]: I20250813 09:05:02.457330 1493 update_attempter.cc:310] Error event sent.
Aug 13 09:05:02.459231 update_engine[1493]: I20250813 09:05:02.457355 1493 update_check_scheduler.cc:74] Next update check in 45m26s
Aug 13 09:05:02.497779 locksmithd[1512]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Aug 13 09:05:02.497779 locksmithd[1512]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Aug 13 09:05:02.812259 sshd[5929]: pam_unix(sshd:session): session closed for user core
Aug 13 09:05:02.822764 systemd[1]: sshd@17-10.230.18.154:22-139.178.68.195:35184.service: Deactivated successfully.
Aug 13 09:05:02.827810 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 09:05:02.830583 systemd-logind[1492]: Session 16 logged out. Waiting for processes to exit.
Aug 13 09:05:02.833591 systemd-logind[1492]: Removed session 16.
Aug 13 09:05:06.533226 sshd[5900]: Connection closed by invalid user 8.137.38.94 port 38058 [preauth]
Aug 13 09:05:06.543358 systemd[1]: sshd@16-10.230.18.154:22-8.137.38.94:38058.service: Deactivated successfully.
Aug 13 09:05:07.978271 systemd[1]: Started sshd@18-10.230.18.154:22-139.178.68.195:35198.service - OpenSSH per-connection server daemon (139.178.68.195:35198).
Aug 13 09:05:09.004673 sshd[5958]: Accepted publickey for core from 139.178.68.195 port 35198 ssh2: RSA SHA256:YySM3kCv0HYbBvOmLDWWldq/i0LyCPP8OWutKnE8JMs
Aug 13 09:05:09.010894 sshd[5958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 09:05:09.025197 systemd-logind[1492]: New session 17 of user core.
Aug 13 09:05:09.033343 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 13 09:05:10.536846 containerd[1509]: time="2025-08-13T09:05:10.429186446Z" level=info msg="StopPodSandbox for \"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6\""
Aug 13 09:05:10.756570 sshd[5958]: pam_unix(sshd:session): session closed for user core
Aug 13 09:05:10.783612 systemd[1]: sshd@18-10.230.18.154:22-139.178.68.195:35198.service: Deactivated successfully.
Aug 13 09:05:10.794032 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 09:05:10.800047 systemd-logind[1492]: Session 17 logged out. Waiting for processes to exit.
Aug 13 09:05:10.804113 systemd-logind[1492]: Removed session 17.
Aug 13 09:05:11.602977 containerd[1509]: 2025-08-13 09:05:11.267 [WARNING][5998] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cz57v.gb1.brightbox.com-k8s-calico--kube--controllers--588668cdb7--gwwpt-eth0", GenerateName:"calico-kube-controllers-588668cdb7-", Namespace:"calico-system", SelfLink:"", UID:"4fef4dc0-3721-4011-811d-4e894024c3f2", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 9, 3, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"588668cdb7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cz57v.gb1.brightbox.com", ContainerID:"eb5b5600cd8051c55a62e13b196b49ed37cfa2174c7f3e4559b2d667660fddc3", Pod:"calico-kube-controllers-588668cdb7-gwwpt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0e2adb4b95d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 09:05:11.602977 containerd[1509]: 2025-08-13 09:05:11.275 [INFO][5998] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6"
Aug 13 09:05:11.602977 containerd[1509]: 2025-08-13 09:05:11.275 [INFO][5998] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6" iface="eth0" netns=""
Aug 13 09:05:11.602977 containerd[1509]: 2025-08-13 09:05:11.275 [INFO][5998] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6"
Aug 13 09:05:11.602977 containerd[1509]: 2025-08-13 09:05:11.275 [INFO][5998] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6"
Aug 13 09:05:11.602977 containerd[1509]: 2025-08-13 09:05:11.547 [INFO][6007] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6" HandleID="k8s-pod-network.b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--kube--controllers--588668cdb7--gwwpt-eth0"
Aug 13 09:05:11.602977 containerd[1509]: 2025-08-13 09:05:11.551 [INFO][6007] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 09:05:11.602977 containerd[1509]: 2025-08-13 09:05:11.551 [INFO][6007] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 09:05:11.602977 containerd[1509]: 2025-08-13 09:05:11.587 [WARNING][6007] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6" HandleID="k8s-pod-network.b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--kube--controllers--588668cdb7--gwwpt-eth0"
Aug 13 09:05:11.602977 containerd[1509]: 2025-08-13 09:05:11.589 [INFO][6007] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6" HandleID="k8s-pod-network.b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--kube--controllers--588668cdb7--gwwpt-eth0"
Aug 13 09:05:11.602977 containerd[1509]: 2025-08-13 09:05:11.595 [INFO][6007] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 09:05:11.602977 containerd[1509]: 2025-08-13 09:05:11.600 [INFO][5998] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6"
Aug 13 09:05:11.641530 containerd[1509]: time="2025-08-13T09:05:11.606115932Z" level=info msg="TearDown network for sandbox \"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6\" successfully"
Aug 13 09:05:11.641530 containerd[1509]: time="2025-08-13T09:05:11.606192802Z" level=info msg="StopPodSandbox for \"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6\" returns successfully"
Aug 13 09:05:11.641530 containerd[1509]: time="2025-08-13T09:05:11.637873830Z" level=info msg="RemovePodSandbox for \"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6\""
Aug 13 09:05:11.667276 containerd[1509]: time="2025-08-13T09:05:11.667202227Z" level=info msg="Forcibly stopping sandbox \"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6\""
Aug 13 09:05:11.930141 containerd[1509]: 2025-08-13 09:05:11.838 [WARNING][6021] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--cz57v.gb1.brightbox.com-k8s-calico--kube--controllers--588668cdb7--gwwpt-eth0", GenerateName:"calico-kube-controllers-588668cdb7-", Namespace:"calico-system", SelfLink:"", UID:"4fef4dc0-3721-4011-811d-4e894024c3f2", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 9, 3, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"588668cdb7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-cz57v.gb1.brightbox.com", ContainerID:"eb5b5600cd8051c55a62e13b196b49ed37cfa2174c7f3e4559b2d667660fddc3", Pod:"calico-kube-controllers-588668cdb7-gwwpt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0e2adb4b95d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 09:05:11.930141 containerd[1509]: 2025-08-13 09:05:11.839 [INFO][6021] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6"
Aug 13 09:05:11.930141 containerd[1509]: 2025-08-13 09:05:11.839 [INFO][6021] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6" iface="eth0" netns=""
Aug 13 09:05:11.930141 containerd[1509]: 2025-08-13 09:05:11.839 [INFO][6021] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6"
Aug 13 09:05:11.930141 containerd[1509]: 2025-08-13 09:05:11.839 [INFO][6021] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6"
Aug 13 09:05:11.930141 containerd[1509]: 2025-08-13 09:05:11.902 [INFO][6028] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6" HandleID="k8s-pod-network.b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--kube--controllers--588668cdb7--gwwpt-eth0"
Aug 13 09:05:11.930141 containerd[1509]: 2025-08-13 09:05:11.902 [INFO][6028] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 09:05:11.930141 containerd[1509]: 2025-08-13 09:05:11.903 [INFO][6028] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 09:05:11.930141 containerd[1509]: 2025-08-13 09:05:11.916 [WARNING][6028] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6" HandleID="k8s-pod-network.b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--kube--controllers--588668cdb7--gwwpt-eth0"
Aug 13 09:05:11.930141 containerd[1509]: 2025-08-13 09:05:11.916 [INFO][6028] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6" HandleID="k8s-pod-network.b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6" Workload="srv--cz57v.gb1.brightbox.com-k8s-calico--kube--controllers--588668cdb7--gwwpt-eth0"
Aug 13 09:05:11.930141 containerd[1509]: 2025-08-13 09:05:11.922 [INFO][6028] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 09:05:11.930141 containerd[1509]: 2025-08-13 09:05:11.925 [INFO][6021] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6"
Aug 13 09:05:11.930141 containerd[1509]: time="2025-08-13T09:05:11.929892311Z" level=info msg="TearDown network for sandbox \"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6\" successfully"
Aug 13 09:05:11.990995 containerd[1509]: time="2025-08-13T09:05:11.990906845Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 13 09:05:11.991704 containerd[1509]: time="2025-08-13T09:05:11.991055969Z" level=info msg="RemovePodSandbox \"b25657b90a3d3b53920b670c3601664cc322cfdfb81ae7fe532fe73dda91acb6\" returns successfully"
Aug 13 09:05:15.976571 systemd[1]: Started sshd@19-10.230.18.154:22-139.178.68.195:37342.service - OpenSSH per-connection server daemon (139.178.68.195:37342).
Aug 13 09:05:17.068235 sshd[6038]: Accepted publickey for core from 139.178.68.195 port 37342 ssh2: RSA SHA256:YySM3kCv0HYbBvOmLDWWldq/i0LyCPP8OWutKnE8JMs
Aug 13 09:05:17.078135 sshd[6038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 09:05:17.100108 systemd-logind[1492]: New session 18 of user core.
Aug 13 09:05:17.110366 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 13 09:05:18.553676 sshd[6038]: pam_unix(sshd:session): session closed for user core
Aug 13 09:05:18.566736 systemd[1]: sshd@19-10.230.18.154:22-139.178.68.195:37342.service: Deactivated successfully.
Aug 13 09:05:18.585637 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 09:05:18.593614 systemd-logind[1492]: Session 18 logged out. Waiting for processes to exit.
Aug 13 09:05:18.600647 systemd-logind[1492]: Removed session 18.
Aug 13 09:05:23.711985 systemd[1]: Started sshd@20-10.230.18.154:22-139.178.68.195:49070.service - OpenSSH per-connection server daemon (139.178.68.195:49070).
Aug 13 09:05:25.180324 sshd[6090]: Accepted publickey for core from 139.178.68.195 port 49070 ssh2: RSA SHA256:YySM3kCv0HYbBvOmLDWWldq/i0LyCPP8OWutKnE8JMs
Aug 13 09:05:25.185046 sshd[6090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 09:05:25.199048 systemd-logind[1492]: New session 19 of user core.
Aug 13 09:05:25.207344 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 13 09:05:26.481611 sshd[6090]: pam_unix(sshd:session): session closed for user core
Aug 13 09:05:26.495212 systemd[1]: sshd@20-10.230.18.154:22-139.178.68.195:49070.service: Deactivated successfully.
Aug 13 09:05:26.500573 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 09:05:26.502189 systemd-logind[1492]: Session 19 logged out. Waiting for processes to exit.
Aug 13 09:05:26.505453 systemd-logind[1492]: Removed session 19.
Aug 13 09:05:26.646525 systemd[1]: Started sshd@21-10.230.18.154:22-139.178.68.195:49072.service - OpenSSH per-connection server daemon (139.178.68.195:49072).
Aug 13 09:05:27.603721 sshd[6109]: Accepted publickey for core from 139.178.68.195 port 49072 ssh2: RSA SHA256:YySM3kCv0HYbBvOmLDWWldq/i0LyCPP8OWutKnE8JMs
Aug 13 09:05:27.607175 sshd[6109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 09:05:27.626559 systemd-logind[1492]: New session 20 of user core.
Aug 13 09:05:27.634253 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 13 09:05:28.737668 sshd[6109]: pam_unix(sshd:session): session closed for user core
Aug 13 09:05:28.744124 systemd-logind[1492]: Session 20 logged out. Waiting for processes to exit.
Aug 13 09:05:28.745971 systemd[1]: sshd@21-10.230.18.154:22-139.178.68.195:49072.service: Deactivated successfully.
Aug 13 09:05:28.751178 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 09:05:28.756002 systemd-logind[1492]: Removed session 20.
Aug 13 09:05:28.902455 systemd[1]: Started sshd@22-10.230.18.154:22-139.178.68.195:49086.service - OpenSSH per-connection server daemon (139.178.68.195:49086).
Aug 13 09:05:29.857113 sshd[6138]: Accepted publickey for core from 139.178.68.195 port 49086 ssh2: RSA SHA256:YySM3kCv0HYbBvOmLDWWldq/i0LyCPP8OWutKnE8JMs
Aug 13 09:05:29.860241 sshd[6138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 09:05:29.872311 systemd-logind[1492]: New session 21 of user core.
Aug 13 09:05:29.880382 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 13 09:05:31.962362 sshd[6138]: pam_unix(sshd:session): session closed for user core
Aug 13 09:05:32.019708 systemd[1]: sshd@22-10.230.18.154:22-139.178.68.195:49086.service: Deactivated successfully.
Aug 13 09:05:32.025635 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 09:05:32.028252 systemd-logind[1492]: Session 21 logged out. Waiting for processes to exit.
Aug 13 09:05:32.041923 systemd-logind[1492]: Removed session 21.
Aug 13 09:05:32.130509 systemd[1]: Started sshd@23-10.230.18.154:22-139.178.68.195:45008.service - OpenSSH per-connection server daemon (139.178.68.195:45008).
Aug 13 09:05:33.115290 sshd[6181]: Accepted publickey for core from 139.178.68.195 port 45008 ssh2: RSA SHA256:YySM3kCv0HYbBvOmLDWWldq/i0LyCPP8OWutKnE8JMs
Aug 13 09:05:33.120212 sshd[6181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 09:05:33.132084 systemd-logind[1492]: New session 22 of user core.
Aug 13 09:05:33.138292 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 13 09:05:34.864809 sshd[6181]: pam_unix(sshd:session): session closed for user core
Aug 13 09:05:34.873612 systemd-logind[1492]: Session 22 logged out. Waiting for processes to exit.
Aug 13 09:05:34.874981 systemd[1]: sshd@23-10.230.18.154:22-139.178.68.195:45008.service: Deactivated successfully.
Aug 13 09:05:34.884698 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 09:05:34.891927 systemd-logind[1492]: Removed session 22.
Aug 13 09:05:35.047761 systemd[1]: Started sshd@24-10.230.18.154:22-139.178.68.195:45022.service - OpenSSH per-connection server daemon (139.178.68.195:45022).
Aug 13 09:05:35.981849 sshd[6193]: Accepted publickey for core from 139.178.68.195 port 45022 ssh2: RSA SHA256:YySM3kCv0HYbBvOmLDWWldq/i0LyCPP8OWutKnE8JMs
Aug 13 09:05:35.984577 sshd[6193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 09:05:35.993839 systemd-logind[1492]: New session 23 of user core.
Aug 13 09:05:36.002269 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 13 09:05:36.930909 sshd[6193]: pam_unix(sshd:session): session closed for user core
Aug 13 09:05:36.940495 systemd[1]: sshd@24-10.230.18.154:22-139.178.68.195:45022.service: Deactivated successfully.
Aug 13 09:05:36.947497 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 09:05:36.952729 systemd-logind[1492]: Session 23 logged out. Waiting for processes to exit.
Aug 13 09:05:36.954168 systemd-logind[1492]: Removed session 23.
Aug 13 09:05:42.101607 systemd[1]: Started sshd@25-10.230.18.154:22-139.178.68.195:57366.service - OpenSSH per-connection server daemon (139.178.68.195:57366).
Aug 13 09:05:43.179363 sshd[6238]: Accepted publickey for core from 139.178.68.195 port 57366 ssh2: RSA SHA256:YySM3kCv0HYbBvOmLDWWldq/i0LyCPP8OWutKnE8JMs
Aug 13 09:05:43.186495 sshd[6238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 09:05:43.203174 systemd-logind[1492]: New session 24 of user core.
Aug 13 09:05:43.210342 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 13 09:05:44.447830 sshd[6238]: pam_unix(sshd:session): session closed for user core
Aug 13 09:05:44.453619 systemd[1]: sshd@25-10.230.18.154:22-139.178.68.195:57366.service: Deactivated successfully.
Aug 13 09:05:44.458700 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 09:05:44.461767 systemd-logind[1492]: Session 24 logged out. Waiting for processes to exit.
Aug 13 09:05:44.464426 systemd-logind[1492]: Removed session 24.
Aug 13 09:05:49.619541 systemd[1]: Started sshd@26-10.230.18.154:22-139.178.68.195:57380.service - OpenSSH per-connection server daemon (139.178.68.195:57380).
Aug 13 09:05:50.570545 sshd[6265]: Accepted publickey for core from 139.178.68.195 port 57380 ssh2: RSA SHA256:YySM3kCv0HYbBvOmLDWWldq/i0LyCPP8OWutKnE8JMs
Aug 13 09:05:50.574134 sshd[6265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 09:05:50.590830 systemd-logind[1492]: New session 25 of user core.
Aug 13 09:05:50.598303 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 13 09:05:51.542977 sshd[6265]: pam_unix(sshd:session): session closed for user core
Aug 13 09:05:51.550181 systemd[1]: sshd@26-10.230.18.154:22-139.178.68.195:57380.service: Deactivated successfully.
Aug 13 09:05:51.554046 systemd[1]: session-25.scope: Deactivated successfully.
Aug 13 09:05:51.556414 systemd-logind[1492]: Session 25 logged out. Waiting for processes to exit.
Aug 13 09:05:51.557915 systemd-logind[1492]: Removed session 25.
Aug 13 09:05:56.714815 systemd[1]: Started sshd@27-10.230.18.154:22-139.178.68.195:59176.service - OpenSSH per-connection server daemon (139.178.68.195:59176).
Aug 13 09:05:57.685352 sshd[6279]: Accepted publickey for core from 139.178.68.195 port 59176 ssh2: RSA SHA256:YySM3kCv0HYbBvOmLDWWldq/i0LyCPP8OWutKnE8JMs
Aug 13 09:05:57.689627 sshd[6279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 09:05:57.709987 systemd-logind[1492]: New session 26 of user core.
Aug 13 09:05:57.717434 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug 13 09:05:58.449721 systemd[1]: run-containerd-runc-k8s.io-482cdaf8d123152ce0d3d3f23c5b43750da9a106bd4027f4524c02870aa3aab4-runc.8fjhcl.mount: Deactivated successfully.
Aug 13 09:05:58.827995 sshd[6279]: pam_unix(sshd:session): session closed for user core
Aug 13 09:05:58.839970 systemd[1]: sshd@27-10.230.18.154:22-139.178.68.195:59176.service: Deactivated successfully.
Aug 13 09:05:58.845958 systemd[1]: session-26.scope: Deactivated successfully.
Aug 13 09:05:58.851333 systemd-logind[1492]: Session 26 logged out. Waiting for processes to exit.
Aug 13 09:05:58.853729 systemd-logind[1492]: Removed session 26.
Aug 13 09:06:00.632841 systemd[1]: run-containerd-runc-k8s.io-5ee41ee7555ec48211abdce23f9a6f8f3236520d8a871e592a4a55eada14838d-runc.xAE732.mount: Deactivated successfully.