Nov 1 02:39:36.036382 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 22:41:55 -00 2025
Nov 1 02:39:36.036421 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 02:39:36.036447 kernel: BIOS-provided physical RAM map:
Nov 1 02:39:36.036465 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 1 02:39:36.036475 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 1 02:39:36.036486 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 1 02:39:36.036497 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Nov 1 02:39:36.036508 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Nov 1 02:39:36.036518 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 1 02:39:36.036529 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 1 02:39:36.036539 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 1 02:39:36.036549 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 1 02:39:36.036565 kernel: NX (Execute Disable) protection: active
Nov 1 02:39:36.036576 kernel: APIC: Static calls initialized
Nov 1 02:39:36.036588 kernel: SMBIOS 2.8 present.
Nov 1 02:39:36.036600 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Nov 1 02:39:36.036612 kernel: Hypervisor detected: KVM
Nov 1 02:39:36.036628 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 1 02:39:36.036639 kernel: kvm-clock: using sched offset of 4450171181 cycles
Nov 1 02:39:36.036652 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 1 02:39:36.036664 kernel: tsc: Detected 2500.032 MHz processor
Nov 1 02:39:36.036675 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 02:39:36.036687 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 02:39:36.036698 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Nov 1 02:39:36.036710 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 1 02:39:36.036722 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 02:39:36.036738 kernel: Using GB pages for direct mapping
Nov 1 02:39:36.036749 kernel: ACPI: Early table checksum verification disabled
Nov 1 02:39:36.036761 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 1 02:39:36.036772 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 02:39:36.036784 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 02:39:36.036796 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 02:39:36.036807 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Nov 1 02:39:36.036818 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 02:39:36.036842 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 02:39:36.036858 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 02:39:36.036869 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 02:39:36.037936 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Nov 1 02:39:36.037957 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Nov 1 02:39:36.037969 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Nov 1 02:39:36.037990 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Nov 1 02:39:36.038002 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Nov 1 02:39:36.038019 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Nov 1 02:39:36.038031 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Nov 1 02:39:36.038043 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 1 02:39:36.038055 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 1 02:39:36.038067 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Nov 1 02:39:36.038079 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Nov 1 02:39:36.038091 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Nov 1 02:39:36.038108 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Nov 1 02:39:36.038120 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Nov 1 02:39:36.038132 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Nov 1 02:39:36.038144 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Nov 1 02:39:36.038155 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Nov 1 02:39:36.038167 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Nov 1 02:39:36.038179 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Nov 1 02:39:36.038191 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Nov 1 02:39:36.038203 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Nov 1 02:39:36.038215 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Nov 1 02:39:36.038231 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Nov 1 02:39:36.038243 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 1 02:39:36.038255 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 1 02:39:36.038267 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Nov 1 02:39:36.038279 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Nov 1 02:39:36.038291 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Nov 1 02:39:36.038304 kernel: Zone ranges:
Nov 1 02:39:36.038316 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 02:39:36.038328 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Nov 1 02:39:36.038345 kernel: Normal empty
Nov 1 02:39:36.038357 kernel: Movable zone start for each node
Nov 1 02:39:36.038369 kernel: Early memory node ranges
Nov 1 02:39:36.038381 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 1 02:39:36.038393 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Nov 1 02:39:36.038405 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Nov 1 02:39:36.038417 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 02:39:36.038441 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 1 02:39:36.038454 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Nov 1 02:39:36.038466 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 1 02:39:36.038484 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 1 02:39:36.038496 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 1 02:39:36.038508 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 1 02:39:36.038520 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 1 02:39:36.038532 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 02:39:36.038544 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 1 02:39:36.038556 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 1 02:39:36.038568 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 02:39:36.038579 kernel: TSC deadline timer available
Nov 1 02:39:36.038596 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Nov 1 02:39:36.038609 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 1 02:39:36.038620 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 1 02:39:36.038632 kernel: Booting paravirtualized kernel on KVM
Nov 1 02:39:36.038644 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 02:39:36.038656 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Nov 1 02:39:36.038668 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u262144
Nov 1 02:39:36.038680 kernel: pcpu-alloc: s196712 r8192 d32664 u262144 alloc=1*2097152
Nov 1 02:39:36.038692 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Nov 1 02:39:36.038709 kernel: kvm-guest: PV spinlocks enabled
Nov 1 02:39:36.038721 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 1 02:39:36.038734 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 02:39:36.038747 kernel: random: crng init done
Nov 1 02:39:36.038759 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 02:39:36.038771 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 1 02:39:36.038783 kernel: Fallback order for Node 0: 0
Nov 1 02:39:36.038794 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Nov 1 02:39:36.038812 kernel: Policy zone: DMA32
Nov 1 02:39:36.038824 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 02:39:36.038836 kernel: software IO TLB: area num 16.
Nov 1 02:39:36.038848 kernel: Memory: 1901524K/2096616K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 194832K reserved, 0K cma-reserved)
Nov 1 02:39:36.038861 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Nov 1 02:39:36.039918 kernel: Kernel/User page tables isolation: enabled
Nov 1 02:39:36.039933 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 1 02:39:36.039946 kernel: ftrace: allocated 149 pages with 4 groups
Nov 1 02:39:36.039958 kernel: Dynamic Preempt: voluntary
Nov 1 02:39:36.039977 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 1 02:39:36.039991 kernel: rcu: RCU event tracing is enabled.
Nov 1 02:39:36.040003 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Nov 1 02:39:36.040015 kernel: Trampoline variant of Tasks RCU enabled.
Nov 1 02:39:36.040028 kernel: Rude variant of Tasks RCU enabled.
Nov 1 02:39:36.040052 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 02:39:36.040070 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 02:39:36.040083 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Nov 1 02:39:36.040095 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Nov 1 02:39:36.040108 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 1 02:39:36.040121 kernel: Console: colour VGA+ 80x25
Nov 1 02:39:36.040133 kernel: printk: console [tty0] enabled
Nov 1 02:39:36.040151 kernel: printk: console [ttyS0] enabled
Nov 1 02:39:36.040163 kernel: ACPI: Core revision 20230628
Nov 1 02:39:36.040176 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 02:39:36.040189 kernel: x2apic enabled
Nov 1 02:39:36.040202 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 1 02:39:36.040219 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240957bf147, max_idle_ns: 440795216753 ns
Nov 1 02:39:36.040232 kernel: Calibrating delay loop (skipped) preset value.. 5000.06 BogoMIPS (lpj=2500032)
Nov 1 02:39:36.040245 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 1 02:39:36.040258 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 1 02:39:36.040270 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 1 02:39:36.040283 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 02:39:36.040295 kernel: Spectre V2 : Mitigation: Retpolines
Nov 1 02:39:36.040308 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 1 02:39:36.040320 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 1 02:39:36.040333 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 02:39:36.040351 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 1 02:39:36.040363 kernel: MDS: Mitigation: Clear CPU buffers
Nov 1 02:39:36.040376 kernel: MMIO Stale Data: Unknown: No mitigations
Nov 1 02:39:36.040388 kernel: SRBDS: Unknown: Dependent on hypervisor status
Nov 1 02:39:36.040400 kernel: active return thunk: its_return_thunk
Nov 1 02:39:36.040413 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 1 02:39:36.040436 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 02:39:36.040451 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 02:39:36.040464 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 02:39:36.040476 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 02:39:36.040489 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 1 02:39:36.040507 kernel: Freeing SMP alternatives memory: 32K
Nov 1 02:39:36.040520 kernel: pid_max: default: 32768 minimum: 301
Nov 1 02:39:36.040532 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 1 02:39:36.040545 kernel: landlock: Up and running.
Nov 1 02:39:36.040557 kernel: SELinux: Initializing.
Nov 1 02:39:36.040570 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 1 02:39:36.040582 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 1 02:39:36.040595 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Nov 1 02:39:36.040608 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 1 02:39:36.040620 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 1 02:39:36.040638 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 1 02:39:36.040651 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Nov 1 02:39:36.040664 kernel: signal: max sigframe size: 1776
Nov 1 02:39:36.040676 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 02:39:36.040689 kernel: rcu: Max phase no-delay instances is 400.
Nov 1 02:39:36.040702 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 1 02:39:36.040715 kernel: smp: Bringing up secondary CPUs ...
Nov 1 02:39:36.040727 kernel: smpboot: x86: Booting SMP configuration:
Nov 1 02:39:36.040740 kernel: .... node #0, CPUs: #1
Nov 1 02:39:36.040757 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Nov 1 02:39:36.040770 kernel: smp: Brought up 1 node, 2 CPUs
Nov 1 02:39:36.040783 kernel: smpboot: Max logical packages: 16
Nov 1 02:39:36.040796 kernel: smpboot: Total of 2 processors activated (10000.12 BogoMIPS)
Nov 1 02:39:36.040808 kernel: devtmpfs: initialized
Nov 1 02:39:36.040821 kernel: x86/mm: Memory block size: 128MB
Nov 1 02:39:36.040834 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 02:39:36.040846 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Nov 1 02:39:36.040859 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 02:39:36.040884 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 02:39:36.040904 kernel: audit: initializing netlink subsys (disabled)
Nov 1 02:39:36.040917 kernel: audit: type=2000 audit(1761964774.776:1): state=initialized audit_enabled=0 res=1
Nov 1 02:39:36.040930 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 02:39:36.040942 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 02:39:36.040955 kernel: cpuidle: using governor menu
Nov 1 02:39:36.040968 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 02:39:36.040980 kernel: dca service started, version 1.12.1
Nov 1 02:39:36.040993 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 1 02:39:36.041011 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 1 02:39:36.041024 kernel: PCI: Using configuration type 1 for base access
Nov 1 02:39:36.041036 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 02:39:36.041049 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 02:39:36.041062 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 1 02:39:36.041075 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 02:39:36.041087 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 1 02:39:36.041100 kernel: ACPI: Added _OSI(Module Device)
Nov 1 02:39:36.041113 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 02:39:36.041130 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 02:39:36.041143 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 1 02:39:36.041156 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 1 02:39:36.041168 kernel: ACPI: Interpreter enabled
Nov 1 02:39:36.041181 kernel: ACPI: PM: (supports S0 S5)
Nov 1 02:39:36.041193 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 02:39:36.041206 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 02:39:36.041219 kernel: PCI: Using E820 reservations for host bridge windows
Nov 1 02:39:36.041231 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 1 02:39:36.041244 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 1 02:39:36.041573 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 02:39:36.041760 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 1 02:39:36.045074 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 1 02:39:36.045098 kernel: PCI host bridge to bus 0000:00
Nov 1 02:39:36.045290 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 02:39:36.045468 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 02:39:36.045647 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 02:39:36.045805 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Nov 1 02:39:36.047517 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 1 02:39:36.047680 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Nov 1 02:39:36.047840 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 1 02:39:36.048068 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 1 02:39:36.048306 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Nov 1 02:39:36.048546 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Nov 1 02:39:36.048722 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Nov 1 02:39:36.050916 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Nov 1 02:39:36.051138 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 1 02:39:36.051322 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Nov 1 02:39:36.051519 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Nov 1 02:39:36.051727 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Nov 1 02:39:36.051984 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Nov 1 02:39:36.052194 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Nov 1 02:39:36.052393 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Nov 1 02:39:36.052653 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Nov 1 02:39:36.052826 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Nov 1 02:39:36.055123 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Nov 1 02:39:36.055406 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Nov 1 02:39:36.055630 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Nov 1 02:39:36.055808 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Nov 1 02:39:36.056069 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Nov 1 02:39:36.056262 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Nov 1 02:39:36.056520 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Nov 1 02:39:36.056699 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Nov 1 02:39:36.058961 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 1 02:39:36.059174 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Nov 1 02:39:36.059364 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Nov 1 02:39:36.059584 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Nov 1 02:39:36.059810 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Nov 1 02:39:36.060050 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Nov 1 02:39:36.060246 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Nov 1 02:39:36.060445 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Nov 1 02:39:36.060628 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Nov 1 02:39:36.060807 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 1 02:39:36.061000 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 1 02:39:36.061184 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 1 02:39:36.061378 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Nov 1 02:39:36.061579 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Nov 1 02:39:36.061785 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 1 02:39:36.065164 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Nov 1 02:39:36.065361 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Nov 1 02:39:36.065557 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Nov 1 02:39:36.065745 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 1 02:39:36.065996 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Nov 1 02:39:36.066178 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 1 02:39:36.066386 kernel: pci_bus 0000:02: extended config space not accessible
Nov 1 02:39:36.066627 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Nov 1 02:39:36.066812 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Nov 1 02:39:36.069080 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 1 02:39:36.069259 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Nov 1 02:39:36.069461 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Nov 1 02:39:36.069639 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Nov 1 02:39:36.069830 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 1 02:39:36.070052 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Nov 1 02:39:36.070223 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 1 02:39:36.070409 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Nov 1 02:39:36.070608 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Nov 1 02:39:36.070780 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 1 02:39:36.074998 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Nov 1 02:39:36.075178 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 1 02:39:36.075352 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 1 02:39:36.075535 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Nov 1 02:39:36.075706 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 1 02:39:36.075901 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 1 02:39:36.076072 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Nov 1 02:39:36.076240 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 1 02:39:36.076411 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 1 02:39:36.076594 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Nov 1 02:39:36.076764 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 1 02:39:36.079000 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 1 02:39:36.079171 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Nov 1 02:39:36.079362 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 1 02:39:36.079556 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 1 02:39:36.079731 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Nov 1 02:39:36.079928 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 1 02:39:36.079948 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 1 02:39:36.079962 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 1 02:39:36.079975 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 1 02:39:36.079988 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 1 02:39:36.080001 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 1 02:39:36.080022 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 1 02:39:36.080035 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 1 02:39:36.080048 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 1 02:39:36.080061 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 1 02:39:36.080074 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 1 02:39:36.080087 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 1 02:39:36.080099 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 1 02:39:36.080112 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 1 02:39:36.080125 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 1 02:39:36.080143 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 1 02:39:36.080156 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 1 02:39:36.080169 kernel: iommu: Default domain type: Translated
Nov 1 02:39:36.080182 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 02:39:36.080194 kernel: PCI: Using ACPI for IRQ routing
Nov 1 02:39:36.080207 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 1 02:39:36.080220 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 1 02:39:36.080233 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Nov 1 02:39:36.080406 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 1 02:39:36.080602 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 1 02:39:36.080776 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 1 02:39:36.080796 kernel: vgaarb: loaded
Nov 1 02:39:36.080822 kernel: clocksource: Switched to clocksource kvm-clock
Nov 1 02:39:36.080835 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 02:39:36.080847 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 02:39:36.080860 kernel: pnp: PnP ACPI init
Nov 1 02:39:36.083099 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 1 02:39:36.083142 kernel: pnp: PnP ACPI: found 5 devices
Nov 1 02:39:36.083154 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 02:39:36.083166 kernel: NET: Registered PF_INET protocol family
Nov 1 02:39:36.083178 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 1 02:39:36.083203 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 1 02:39:36.083215 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 02:39:36.083227 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 1 02:39:36.083239 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 1 02:39:36.083269 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 1 02:39:36.083281 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 1 02:39:36.083293 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 1 02:39:36.083305 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 02:39:36.083329 kernel: NET: Registered PF_XDP protocol family
Nov 1 02:39:36.083525 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Nov 1 02:39:36.083698 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Nov 1 02:39:36.083868 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Nov 1 02:39:36.084064 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Nov 1 02:39:36.084258 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Nov 1 02:39:36.084472 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Nov 1 02:39:36.084643 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Nov 1 02:39:36.084822 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Nov 1 02:39:36.086995 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Nov 1 02:39:36.087188 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Nov 1 02:39:36.087353 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Nov 1 02:39:36.087548 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Nov 1 02:39:36.087720 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Nov 1 02:39:36.087908 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Nov 1 02:39:36.088090 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Nov 1 02:39:36.088265 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Nov 1 02:39:36.088459 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 1 02:39:36.088665 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Nov 1 02:39:36.088840 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 1 02:39:36.091072 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Nov 1 02:39:36.091276 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Nov 1 02:39:36.091471 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 1 02:39:36.091647 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 1 02:39:36.091823 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Nov 1 02:39:36.092026 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Nov 1 02:39:36.092206 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 1 02:39:36.092436 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 1 02:39:36.092619 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Nov 1 02:39:36.092795 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Nov 1 02:39:36.094063 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 1 02:39:36.094226 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 1 02:39:36.094420 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Nov 1 02:39:36.094619 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Nov 1 02:39:36.094792 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 1 02:39:36.096079 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 1 02:39:36.096325 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Nov 1 02:39:36.096518 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Nov 1 02:39:36.096706 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 1 02:39:36.098932 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 1 02:39:36.099138 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Nov 1 02:39:36.099353 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Nov 1 02:39:36.099549 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 1 02:39:36.099732 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 1 02:39:36.099959 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Nov 1 02:39:36.100133 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Nov 1 02:39:36.100348 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 1 02:39:36.100546 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 1 02:39:36.100715 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Nov 1 02:39:36.100909 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Nov 1 02:39:36.101081 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 1 02:39:36.101244 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 1 02:39:36.101442 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 1 02:39:36.101599 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 1 02:39:36.101760 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Nov 1 02:39:36.101969 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 1 02:39:36.102127 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Nov 1 02:39:36.102312 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Nov 1 02:39:36.102495 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Nov 1 02:39:36.102659 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 1 02:39:36.102839 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Nov 1 02:39:36.103059 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Nov 1 02:39:36.103232 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Nov 1 02:39:36.103433 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 1 02:39:36.103610 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Nov 1 02:39:36.103794 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Nov 1 02:39:36.104033 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 1 02:39:36.104217 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Nov 1 02:39:36.104403 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Nov 1 02:39:36.104588 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 1 02:39:36.104776 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Nov 1 02:39:36.105018 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Nov 1 02:39:36.105180 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 1 02:39:36.105349 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Nov 1 02:39:36.105524 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Nov 1 02:39:36.105694 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 1 02:39:36.105913 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Nov 1 02:39:36.106110 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Nov 1 02:39:36.106269 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 1 02:39:36.106481 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Nov 1 02:39:36.106643 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Nov 1 02:39:36.106819 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 1 02:39:36.106845 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 1 02:39:36.106861 kernel: PCI: CLS 0 bytes, default 64
Nov 1 02:39:36.106886 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 1 02:39:36.106898 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)
Nov 1 02:39:36.106911 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 1 02:39:36.106942 kernel: clocksource: tsc: mask: 0xffffffffffffffff
max_cycles: 0x240957bf147, max_idle_ns: 440795216753 ns Nov 1 02:39:36.106967 kernel: Initialise system trusted keyrings Nov 1 02:39:36.106981 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 1 02:39:36.107000 kernel: Key type asymmetric registered Nov 1 02:39:36.107012 kernel: Asymmetric key parser 'x509' registered Nov 1 02:39:36.107038 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 1 02:39:36.107051 kernel: io scheduler mq-deadline registered Nov 1 02:39:36.107065 kernel: io scheduler kyber registered Nov 1 02:39:36.107078 kernel: io scheduler bfq registered Nov 1 02:39:36.107248 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Nov 1 02:39:36.107441 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Nov 1 02:39:36.107629 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 02:39:36.107817 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Nov 1 02:39:36.108028 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Nov 1 02:39:36.108208 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 02:39:36.108380 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Nov 1 02:39:36.108567 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Nov 1 02:39:36.108745 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 02:39:36.108974 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Nov 1 02:39:36.109147 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Nov 1 02:39:36.109317 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 02:39:36.109503 kernel: pcieport 0000:00:02.4: PME: Signaling 
with IRQ 28 Nov 1 02:39:36.109675 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Nov 1 02:39:36.109877 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 02:39:36.110104 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Nov 1 02:39:36.110285 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Nov 1 02:39:36.110469 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 02:39:36.110641 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Nov 1 02:39:36.110815 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Nov 1 02:39:36.111035 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 02:39:36.111226 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Nov 1 02:39:36.111396 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Nov 1 02:39:36.111586 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 02:39:36.111608 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 1 02:39:36.111622 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 1 02:39:36.111637 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Nov 1 02:39:36.111650 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 1 02:39:36.111671 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 1 02:39:36.111685 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 1 02:39:36.111711 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 1 02:39:36.111723 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 1 02:39:36.111736 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 
Nov 1 02:39:36.111936 kernel: rtc_cmos 00:03: RTC can wake from S4 Nov 1 02:39:36.112105 kernel: rtc_cmos 00:03: registered as rtc0 Nov 1 02:39:36.112276 kernel: rtc_cmos 00:03: setting system clock to 2025-11-01T02:39:35 UTC (1761964775) Nov 1 02:39:36.112457 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Nov 1 02:39:36.112478 kernel: intel_pstate: CPU model not supported Nov 1 02:39:36.112492 kernel: NET: Registered PF_INET6 protocol family Nov 1 02:39:36.112506 kernel: Segment Routing with IPv6 Nov 1 02:39:36.112520 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 02:39:36.112533 kernel: NET: Registered PF_PACKET protocol family Nov 1 02:39:36.112547 kernel: Key type dns_resolver registered Nov 1 02:39:36.112560 kernel: IPI shorthand broadcast: enabled Nov 1 02:39:36.112574 kernel: sched_clock: Marking stable (1370004054, 237672663)->(1731500441, -123823724) Nov 1 02:39:36.112594 kernel: registered taskstats version 1 Nov 1 02:39:36.112608 kernel: Loading compiled-in X.509 certificates Nov 1 02:39:36.112621 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4' Nov 1 02:39:36.112635 kernel: Key type .fscrypt registered Nov 1 02:39:36.112648 kernel: Key type fscrypt-provisioning registered Nov 1 02:39:36.112661 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 1 02:39:36.112674 kernel: ima: Allocated hash algorithm: sha1 Nov 1 02:39:36.112688 kernel: ima: No architecture policies found Nov 1 02:39:36.112701 kernel: clk: Disabling unused clocks Nov 1 02:39:36.112720 kernel: Freeing unused kernel image (initmem) memory: 42884K Nov 1 02:39:36.112734 kernel: Write protecting the kernel read-only data: 36864k Nov 1 02:39:36.112747 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 1 02:39:36.112761 kernel: Run /init as init process Nov 1 02:39:36.112774 kernel: with arguments: Nov 1 02:39:36.112787 kernel: /init Nov 1 02:39:36.112801 kernel: with environment: Nov 1 02:39:36.112814 kernel: HOME=/ Nov 1 02:39:36.112827 kernel: TERM=linux Nov 1 02:39:36.112849 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 1 02:39:36.112888 systemd[1]: Detected virtualization kvm. Nov 1 02:39:36.112907 systemd[1]: Detected architecture x86-64. Nov 1 02:39:36.112920 systemd[1]: Running in initrd. Nov 1 02:39:36.112934 systemd[1]: No hostname configured, using default hostname. Nov 1 02:39:36.112948 systemd[1]: Hostname set to . Nov 1 02:39:36.112963 systemd[1]: Initializing machine ID from VM UUID. Nov 1 02:39:36.112984 systemd[1]: Queued start job for default target initrd.target. Nov 1 02:39:36.112999 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 02:39:36.113013 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 02:39:36.113028 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Nov 1 02:39:36.113042 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 02:39:36.113062 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 1 02:39:36.113077 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 1 02:39:36.113098 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 1 02:39:36.113113 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 1 02:39:36.113133 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 02:39:36.113147 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 02:39:36.113162 systemd[1]: Reached target paths.target - Path Units. Nov 1 02:39:36.113181 systemd[1]: Reached target slices.target - Slice Units. Nov 1 02:39:36.113205 systemd[1]: Reached target swap.target - Swaps. Nov 1 02:39:36.113219 systemd[1]: Reached target timers.target - Timer Units. Nov 1 02:39:36.113239 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 02:39:36.113259 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 02:39:36.113273 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 1 02:39:36.113287 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 1 02:39:36.113302 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 02:39:36.113316 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 02:39:36.113330 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 02:39:36.113345 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 02:39:36.113359 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Nov 1 02:39:36.113379 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 02:39:36.113394 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 1 02:39:36.113408 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 02:39:36.113431 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 02:39:36.113448 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 02:39:36.113462 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 02:39:36.113476 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 1 02:39:36.113534 systemd-journald[202]: Collecting audit messages is disabled. Nov 1 02:39:36.113573 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 02:39:36.113588 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 02:39:36.113609 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 1 02:39:36.113623 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 02:39:36.113637 kernel: Bridge firewalling registered Nov 1 02:39:36.113652 systemd-journald[202]: Journal started Nov 1 02:39:36.113684 systemd-journald[202]: Runtime Journal (/run/log/journal/0d040f76cb694f0d877256ae81bcdc1c) is 4.7M, max 38.0M, 33.2M free. Nov 1 02:39:36.057038 systemd-modules-load[203]: Inserted module 'overlay' Nov 1 02:39:36.154323 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 02:39:36.099101 systemd-modules-load[203]: Inserted module 'br_netfilter' Nov 1 02:39:36.155410 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 02:39:36.156768 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 1 02:39:36.158377 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 02:39:36.168117 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 02:39:36.171088 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 02:39:36.176078 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 02:39:36.179713 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 02:39:36.203828 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 02:39:36.208686 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 02:39:36.218125 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 1 02:39:36.219309 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 02:39:36.224372 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 02:39:36.239218 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 02:39:36.241167 dracut-cmdline[234]: dracut-dracut-053 Nov 1 02:39:36.243229 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 02:39:36.281573 systemd-resolved[240]: Positive Trust Anchors: Nov 1 02:39:36.281596 systemd-resolved[240]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 02:39:36.281642 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 02:39:36.286188 systemd-resolved[240]: Defaulting to hostname 'linux'. Nov 1 02:39:36.289254 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 02:39:36.290265 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 02:39:36.365946 kernel: SCSI subsystem initialized Nov 1 02:39:36.378962 kernel: Loading iSCSI transport class v2.0-870. Nov 1 02:39:36.391904 kernel: iscsi: registered transport (tcp) Nov 1 02:39:36.419251 kernel: iscsi: registered transport (qla4xxx) Nov 1 02:39:36.419348 kernel: QLogic iSCSI HBA Driver Nov 1 02:39:36.475389 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 1 02:39:36.484127 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 1 02:39:36.518279 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Nov 1 02:39:36.518371 kernel: device-mapper: uevent: version 1.0.3 Nov 1 02:39:36.520680 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 1 02:39:36.568936 kernel: raid6: sse2x4 gen() 13582 MB/s Nov 1 02:39:36.588463 kernel: raid6: sse2x2 gen() 9149 MB/s Nov 1 02:39:36.606499 kernel: raid6: sse2x1 gen() 9752 MB/s Nov 1 02:39:36.606550 kernel: raid6: using algorithm sse2x4 gen() 13582 MB/s Nov 1 02:39:36.625684 kernel: raid6: .... xor() 7757 MB/s, rmw enabled Nov 1 02:39:36.625782 kernel: raid6: using ssse3x2 recovery algorithm Nov 1 02:39:36.654912 kernel: xor: automatically using best checksumming function avx Nov 1 02:39:36.860911 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 1 02:39:36.877763 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 1 02:39:36.896318 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 02:39:36.915014 systemd-udevd[420]: Using default interface naming scheme 'v255'. Nov 1 02:39:36.922685 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 02:39:36.931363 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 1 02:39:36.961481 dracut-pre-trigger[428]: rd.md=0: removing MD RAID activation Nov 1 02:39:37.009455 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 02:39:37.015096 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 02:39:37.139104 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 02:39:37.148168 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 1 02:39:37.167241 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 1 02:39:37.174641 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Nov 1 02:39:37.176445 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 02:39:37.178930 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 02:39:37.187588 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 1 02:39:37.210903 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 1 02:39:37.269910 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Nov 1 02:39:37.284898 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Nov 1 02:39:37.290012 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 02:39:37.315720 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 02:39:37.333055 kernel: ACPI: bus type USB registered Nov 1 02:39:37.333092 kernel: usbcore: registered new interface driver usbfs Nov 1 02:39:37.333132 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 02:39:37.333153 kernel: usbcore: registered new interface driver hub Nov 1 02:39:37.333171 kernel: GPT:17805311 != 125829119 Nov 1 02:39:37.333188 kernel: usbcore: registered new device driver usb Nov 1 02:39:37.333205 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 02:39:37.333223 kernel: GPT:17805311 != 125829119 Nov 1 02:39:37.333240 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 1 02:39:37.333258 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 02:39:37.315954 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 02:39:37.334185 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 02:39:37.335571 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 02:39:37.335879 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 02:39:37.340214 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 1 02:39:37.349355 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 02:39:37.359903 kernel: AVX version of gcm_enc/dec engaged. Nov 1 02:39:37.365895 kernel: AES CTR mode by8 optimization enabled Nov 1 02:39:37.375216 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Nov 1 02:39:37.375692 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Nov 1 02:39:37.375985 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Nov 1 02:39:37.380914 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Nov 1 02:39:37.381203 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Nov 1 02:39:37.381451 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Nov 1 02:39:37.392903 kernel: hub 1-0:1.0: USB hub found Nov 1 02:39:37.393241 kernel: hub 1-0:1.0: 4 ports detected Nov 1 02:39:37.395909 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Nov 1 02:39:37.400648 kernel: hub 2-0:1.0: USB hub found Nov 1 02:39:37.400946 kernel: hub 2-0:1.0: 4 ports detected Nov 1 02:39:37.407423 kernel: libata version 3.00 loaded. Nov 1 02:39:37.451926 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (475) Nov 1 02:39:37.469901 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (470) Nov 1 02:39:37.473768 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Nov 1 02:39:37.509803 kernel: ahci 0000:00:1f.2: version 3.0 Nov 1 02:39:37.510238 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 1 02:39:37.510263 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Nov 1 02:39:37.510486 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 1 02:39:37.510690 kernel: scsi host0: ahci Nov 1 02:39:37.511080 kernel: scsi host1: ahci Nov 1 02:39:37.511428 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 02:39:37.516048 kernel: scsi host2: ahci Nov 1 02:39:37.516286 kernel: scsi host3: ahci Nov 1 02:39:37.516564 kernel: scsi host4: ahci Nov 1 02:39:37.518652 kernel: scsi host5: ahci Nov 1 02:39:37.519365 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Nov 1 02:39:37.521442 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Nov 1 02:39:37.523440 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Nov 1 02:39:37.525361 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Nov 1 02:39:37.527321 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Nov 1 02:39:37.528593 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 1 02:39:37.532810 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Nov 1 02:39:37.539458 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 1 02:39:37.545622 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 1 02:39:37.546542 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 1 02:39:37.565133 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Nov 1 02:39:37.569091 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 02:39:37.579326 disk-uuid[563]: Primary Header is updated. Nov 1 02:39:37.579326 disk-uuid[563]: Secondary Entries is updated. Nov 1 02:39:37.579326 disk-uuid[563]: Secondary Header is updated. Nov 1 02:39:37.588060 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 02:39:37.595911 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 02:39:37.602808 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 02:39:37.604602 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 02:39:37.634954 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Nov 1 02:39:37.805945 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 1 02:39:37.844897 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 1 02:39:37.848240 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 1 02:39:37.848288 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 1 02:39:37.850004 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 1 02:39:37.851717 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 1 02:39:37.853961 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 1 02:39:37.864346 kernel: usbcore: registered new interface driver usbhid Nov 1 02:39:37.864413 kernel: usbhid: USB HID core driver Nov 1 02:39:37.872475 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Nov 1 02:39:37.872517 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Nov 1 02:39:38.597175 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 02:39:38.598585 disk-uuid[564]: The operation has completed successfully. Nov 1 02:39:38.661015 systemd[1]: disk-uuid.service: Deactivated successfully. 
Nov 1 02:39:38.661207 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 1 02:39:38.687094 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 1 02:39:38.690971 sh[586]: Success Nov 1 02:39:38.708945 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Nov 1 02:39:38.779365 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 1 02:39:38.782001 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 1 02:39:38.786719 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 1 02:39:38.813082 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b Nov 1 02:39:38.813138 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 1 02:39:38.815293 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 1 02:39:38.818703 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 1 02:39:38.818743 kernel: BTRFS info (device dm-0): using free space tree Nov 1 02:39:38.829859 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 1 02:39:38.832224 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 1 02:39:38.840137 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 1 02:39:38.844095 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Nov 1 02:39:38.862554 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 02:39:38.862613 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 02:39:38.862634 kernel: BTRFS info (device vda6): using free space tree Nov 1 02:39:38.870948 kernel: BTRFS info (device vda6): auto enabling async discard Nov 1 02:39:38.885425 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 1 02:39:38.889371 kernel: BTRFS info (device vda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 02:39:38.898014 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 1 02:39:38.906056 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 1 02:39:39.005078 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 02:39:39.018157 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 02:39:39.046578 ignition[695]: Ignition 2.19.0 Nov 1 02:39:39.046599 ignition[695]: Stage: fetch-offline Nov 1 02:39:39.046687 ignition[695]: no configs at "/usr/lib/ignition/base.d" Nov 1 02:39:39.050302 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Nov 1 02:39:39.046711 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 1 02:39:39.046887 ignition[695]: parsed url from cmdline: "" Nov 1 02:39:39.046895 ignition[695]: no config URL provided Nov 1 02:39:39.046905 ignition[695]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 02:39:39.046922 ignition[695]: no config at "/usr/lib/ignition/user.ign" Nov 1 02:39:39.046930 ignition[695]: failed to fetch config: resource requires networking Nov 1 02:39:39.047424 ignition[695]: Ignition finished successfully Nov 1 02:39:39.068416 systemd-networkd[767]: lo: Link UP Nov 1 02:39:39.068433 systemd-networkd[767]: lo: Gained carrier Nov 1 02:39:39.070985 systemd-networkd[767]: Enumeration completed Nov 1 02:39:39.071121 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 02:39:39.071558 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 02:39:39.071565 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 02:39:39.073338 systemd[1]: Reached target network.target - Network. Nov 1 02:39:39.073397 systemd-networkd[767]: eth0: Link UP Nov 1 02:39:39.073403 systemd-networkd[767]: eth0: Gained carrier Nov 1 02:39:39.073421 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 02:39:39.081111 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 1 02:39:39.101527 ignition[774]: Ignition 2.19.0
Nov 1 02:39:39.101548 ignition[774]: Stage: fetch
Nov 1 02:39:39.101805 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Nov 1 02:39:39.103635 systemd-networkd[767]: eth0: DHCPv4 address 10.230.26.18/30, gateway 10.230.26.17 acquired from 10.230.26.17
Nov 1 02:39:39.101824 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 1 02:39:39.101997 ignition[774]: parsed url from cmdline: ""
Nov 1 02:39:39.102004 ignition[774]: no config URL provided
Nov 1 02:39:39.102014 ignition[774]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 02:39:39.102029 ignition[774]: no config at "/usr/lib/ignition/user.ign"
Nov 1 02:39:39.102262 ignition[774]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Nov 1 02:39:39.102347 ignition[774]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Nov 1 02:39:39.102396 ignition[774]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Nov 1 02:39:39.102589 ignition[774]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 1 02:39:39.302797 ignition[774]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2
Nov 1 02:39:39.316639 ignition[774]: GET result: OK
Nov 1 02:39:39.317286 ignition[774]: parsing config with SHA512: 7d64603a1821a7a853998c0f7f28e9b75686527fecb6fce9c89e4360a1c9b584ee3a4e528ba649c5998ba2d5ecebe9f1a5fa6c69978427b6b29a998f00c05909
Nov 1 02:39:39.322561 unknown[774]: fetched base config from "system"
Nov 1 02:39:39.322578 unknown[774]: fetched base config from "system"
Nov 1 02:39:39.323381 ignition[774]: fetch: fetch complete
Nov 1 02:39:39.322588 unknown[774]: fetched user config from "openstack"
Nov 1 02:39:39.323391 ignition[774]: fetch: fetch passed
Nov 1 02:39:39.326046 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 1 02:39:39.323463 ignition[774]: Ignition finished successfully
Nov 1 02:39:39.344023 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 1 02:39:39.361805 ignition[782]: Ignition 2.19.0
Nov 1 02:39:39.361820 ignition[782]: Stage: kargs
Nov 1 02:39:39.362110 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Nov 1 02:39:39.364724 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 1 02:39:39.362136 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 1 02:39:39.363328 ignition[782]: kargs: kargs passed
Nov 1 02:39:39.363410 ignition[782]: Ignition finished successfully
Nov 1 02:39:39.376104 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 1 02:39:39.399586 ignition[788]: Ignition 2.19.0
Nov 1 02:39:39.399615 ignition[788]: Stage: disks
Nov 1 02:39:39.399947 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Nov 1 02:39:39.403471 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 1 02:39:39.399967 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 1 02:39:39.401266 ignition[788]: disks: disks passed
Nov 1 02:39:39.405188 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 1 02:39:39.401356 ignition[788]: Ignition finished successfully
Nov 1 02:39:39.406852 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 1 02:39:39.408421 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 02:39:39.409932 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 1 02:39:39.411388 systemd[1]: Reached target basic.target - Basic System.
Nov 1 02:39:39.426111 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 1 02:39:39.444629 systemd-fsck[796]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Nov 1 02:39:39.447875 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 1 02:39:39.455035 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 1 02:39:39.574906 kernel: EXT4-fs (vda9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none.
Nov 1 02:39:39.576439 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 1 02:39:39.578566 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 1 02:39:39.586999 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 02:39:39.589985 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 1 02:39:39.591597 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 1 02:39:39.595015 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Nov 1 02:39:39.596204 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 1 02:39:39.612514 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (804)
Nov 1 02:39:39.612550 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 02:39:39.612580 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 02:39:39.612605 kernel: BTRFS info (device vda6): using free space tree
Nov 1 02:39:39.612624 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 1 02:39:39.596242 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 02:39:39.612935 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 1 02:39:39.624471 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 02:39:39.635182 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 1 02:39:39.719267 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory
Nov 1 02:39:39.727243 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory
Nov 1 02:39:39.736891 initrd-setup-root[847]: cut: /sysroot/etc/shadow: No such file or directory
Nov 1 02:39:39.747499 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 1 02:39:39.854143 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 1 02:39:39.860026 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 1 02:39:39.862113 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 1 02:39:39.875035 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 1 02:39:39.877289 kernel: BTRFS info (device vda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 02:39:39.909280 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 1 02:39:39.915903 ignition[922]: INFO : Ignition 2.19.0
Nov 1 02:39:39.915903 ignition[922]: INFO : Stage: mount
Nov 1 02:39:39.915903 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 02:39:39.915903 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 1 02:39:39.920127 ignition[922]: INFO : mount: mount passed
Nov 1 02:39:39.921946 ignition[922]: INFO : Ignition finished successfully
Nov 1 02:39:39.922020 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 1 02:39:40.845441 systemd-networkd[767]: eth0: Gained IPv6LL
Nov 1 02:39:42.352456 systemd-networkd[767]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8684:24:19ff:fee6:1a12/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8684:24:19ff:fee6:1a12/64 assigned by NDisc.
Nov 1 02:39:42.352470 systemd-networkd[767]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Nov 1 02:39:46.789326 coreos-metadata[806]: Nov 01 02:39:46.789 WARN failed to locate config-drive, using the metadata service API instead
Nov 1 02:39:46.812950 coreos-metadata[806]: Nov 01 02:39:46.812 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Nov 1 02:39:46.830114 coreos-metadata[806]: Nov 01 02:39:46.830 INFO Fetch successful
Nov 1 02:39:46.831260 coreos-metadata[806]: Nov 01 02:39:46.831 INFO wrote hostname srv-liqqm.gb1.brightbox.com to /sysroot/etc/hostname
Nov 1 02:39:46.834465 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Nov 1 02:39:46.834657 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Nov 1 02:39:46.844110 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 1 02:39:46.862311 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 02:39:46.893957 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (939)
Nov 1 02:39:46.898919 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 02:39:46.898994 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 02:39:46.900038 kernel: BTRFS info (device vda6): using free space tree
Nov 1 02:39:46.905947 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 1 02:39:46.909285 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 02:39:46.945900 ignition[956]: INFO : Ignition 2.19.0
Nov 1 02:39:46.945900 ignition[956]: INFO : Stage: files
Nov 1 02:39:46.945900 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 02:39:46.945900 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 1 02:39:46.949504 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Nov 1 02:39:46.950410 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 1 02:39:46.950410 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 1 02:39:46.955847 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 1 02:39:46.957161 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 1 02:39:46.958399 unknown[956]: wrote ssh authorized keys file for user: core
Nov 1 02:39:46.959391 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 1 02:39:46.960461 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 1 02:39:46.960461 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 1 02:39:47.183567 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 1 02:39:47.413687 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 1 02:39:47.413687 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 1 02:39:47.413687 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 1 02:39:47.413687 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 02:39:47.413687 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 02:39:47.413687 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 02:39:47.413687 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 02:39:47.413687 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 02:39:47.413687 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 02:39:47.431107 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 02:39:47.431107 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 02:39:47.431107 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 1 02:39:47.431107 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 1 02:39:47.431107 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 1 02:39:47.431107 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Nov 1 02:39:47.757238 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 1 02:39:48.870899 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 1 02:39:48.870899 ignition[956]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 1 02:39:48.883767 ignition[956]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 02:39:48.883767 ignition[956]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 02:39:48.883767 ignition[956]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 1 02:39:48.883767 ignition[956]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 1 02:39:48.883767 ignition[956]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 1 02:39:48.883767 ignition[956]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 02:39:48.883767 ignition[956]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 02:39:48.883767 ignition[956]: INFO : files: files passed
Nov 1 02:39:48.883767 ignition[956]: INFO : Ignition finished successfully
Nov 1 02:39:48.885274 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 1 02:39:48.896147 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 1 02:39:48.900220 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 1 02:39:48.910791 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 1 02:39:48.911915 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 1 02:39:48.921056 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 02:39:48.921056 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 02:39:48.924603 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 02:39:48.925690 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 02:39:48.927200 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 1 02:39:48.934132 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 1 02:39:48.971774 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 1 02:39:48.972014 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 1 02:39:48.974175 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 1 02:39:48.975277 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 1 02:39:48.976933 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 1 02:39:48.987167 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 1 02:39:49.005033 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 02:39:49.012123 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 1 02:39:49.038500 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 1 02:39:49.039516 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 02:39:49.041131 systemd[1]: Stopped target timers.target - Timer Units.
Nov 1 02:39:49.042612 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 1 02:39:49.042820 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 02:39:49.044668 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 1 02:39:49.045606 systemd[1]: Stopped target basic.target - Basic System.
Nov 1 02:39:49.047116 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 1 02:39:49.048608 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 02:39:49.049950 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 1 02:39:49.051553 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 1 02:39:49.053072 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 1 02:39:49.054673 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 1 02:39:49.056167 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 1 02:39:49.057689 systemd[1]: Stopped target swap.target - Swaps.
Nov 1 02:39:49.059151 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 1 02:39:49.059389 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 1 02:39:49.061134 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 1 02:39:49.062140 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 02:39:49.063541 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 1 02:39:49.063731 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 02:39:49.065073 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 1 02:39:49.065278 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 1 02:39:49.067274 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 1 02:39:49.067458 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 02:39:49.069085 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 1 02:39:49.069242 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 1 02:39:49.077217 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 1 02:39:49.078933 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 1 02:39:49.082730 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 1 02:39:49.083972 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 02:39:49.087341 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 1 02:39:49.088341 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 02:39:49.101810 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 1 02:39:49.102824 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 1 02:39:49.110229 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 1 02:39:49.121055 ignition[1009]: INFO : Ignition 2.19.0
Nov 1 02:39:49.121055 ignition[1009]: INFO : Stage: umount
Nov 1 02:39:49.123583 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 02:39:49.123583 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 1 02:39:49.123583 ignition[1009]: INFO : umount: umount passed
Nov 1 02:39:49.123583 ignition[1009]: INFO : Ignition finished successfully
Nov 1 02:39:49.124930 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 1 02:39:49.126042 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 1 02:39:49.129560 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 1 02:39:49.129712 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 1 02:39:49.131206 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 1 02:39:49.131276 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 1 02:39:49.132615 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 1 02:39:49.132697 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 1 02:39:49.134133 systemd[1]: Stopped target network.target - Network.
Nov 1 02:39:49.135458 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 1 02:39:49.135528 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 1 02:39:49.137054 systemd[1]: Stopped target paths.target - Path Units.
Nov 1 02:39:49.138419 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 1 02:39:49.141934 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 02:39:49.143219 systemd[1]: Stopped target slices.target - Slice Units.
Nov 1 02:39:49.144712 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 1 02:39:49.146549 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 1 02:39:49.146626 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 02:39:49.147819 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 1 02:39:49.147908 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 02:39:49.149188 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 1 02:39:49.149271 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 1 02:39:49.150535 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 1 02:39:49.150616 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 1 02:39:49.152242 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 1 02:39:49.154770 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 1 02:39:49.158142 systemd-networkd[767]: eth0: DHCPv6 lease lost
Nov 1 02:39:49.162542 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 1 02:39:49.162718 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 1 02:39:49.166080 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 1 02:39:49.166968 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 1 02:39:49.171149 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 1 02:39:49.171254 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 02:39:49.177993 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 1 02:39:49.179536 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 1 02:39:49.180496 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 1 02:39:49.182031 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 1 02:39:49.182101 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 1 02:39:49.183016 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 1 02:39:49.183087 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 1 02:39:49.185326 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 1 02:39:49.185395 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 02:39:49.188914 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 02:39:49.202279 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 1 02:39:49.202535 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 02:39:49.204524 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 1 02:39:49.204661 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 1 02:39:49.207979 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 1 02:39:49.208101 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 1 02:39:49.208941 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 1 02:39:49.209013 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 02:39:49.210440 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 1 02:39:49.210517 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 02:39:49.212651 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 1 02:39:49.212718 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 1 02:39:49.214210 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 02:39:49.214284 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 02:39:49.223090 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 1 02:39:49.223909 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 1 02:39:49.224005 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 02:39:49.228097 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 1 02:39:49.228172 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 02:39:49.229629 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 1 02:39:49.229698 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 02:39:49.232355 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 02:39:49.232425 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 02:39:49.235659 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 1 02:39:49.235797 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 1 02:39:49.252219 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 1 02:39:49.253372 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 1 02:39:49.254379 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 1 02:39:49.255602 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 1 02:39:49.255689 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 1 02:39:49.274255 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 1 02:39:49.285317 systemd[1]: Switching root.
Nov 1 02:39:49.320008 systemd-journald[202]: Journal stopped
Nov 1 02:39:50.771389 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Nov 1 02:39:50.771520 kernel: SELinux: policy capability network_peer_controls=1
Nov 1 02:39:50.771547 kernel: SELinux: policy capability open_perms=1
Nov 1 02:39:50.771567 kernel: SELinux: policy capability extended_socket_class=1
Nov 1 02:39:50.771585 kernel: SELinux: policy capability always_check_network=0
Nov 1 02:39:50.771603 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 1 02:39:50.771623 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 1 02:39:50.771655 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 1 02:39:50.771683 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 1 02:39:50.771703 kernel: audit: type=1403 audit(1761964789.586:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 1 02:39:50.771730 systemd[1]: Successfully loaded SELinux policy in 51.503ms.
Nov 1 02:39:50.771769 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.717ms.
Nov 1 02:39:50.771794 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 1 02:39:50.771815 systemd[1]: Detected virtualization kvm.
Nov 1 02:39:50.771835 systemd[1]: Detected architecture x86-64.
Nov 1 02:39:50.771856 systemd[1]: Detected first boot.
Nov 1 02:39:50.771908 systemd[1]: Hostname set to .
Nov 1 02:39:50.771932 systemd[1]: Initializing machine ID from VM UUID.
Nov 1 02:39:50.771980 zram_generator::config[1052]: No configuration found.
Nov 1 02:39:50.772010 systemd[1]: Populated /etc with preset unit settings.
Nov 1 02:39:50.772032 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 1 02:39:50.772053 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 1 02:39:50.772074 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 1 02:39:50.772096 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 1 02:39:50.772134 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 1 02:39:50.772163 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 1 02:39:50.772184 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 1 02:39:50.772204 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 1 02:39:50.772225 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 1 02:39:50.772247 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 1 02:39:50.772268 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 1 02:39:50.772290 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 02:39:50.772312 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 02:39:50.772345 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 1 02:39:50.772368 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 1 02:39:50.772389 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 1 02:39:50.772410 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 1 02:39:50.772431 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 1 02:39:50.772451 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 02:39:50.772472 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 1 02:39:50.772509 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 1 02:39:50.772532 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 1 02:39:50.772553 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 1 02:39:50.772580 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 02:39:50.772601 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 1 02:39:50.772622 systemd[1]: Reached target slices.target - Slice Units.
Nov 1 02:39:50.772647 systemd[1]: Reached target swap.target - Swaps.
Nov 1 02:39:50.772668 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 1 02:39:50.772701 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 1 02:39:50.772730 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 02:39:50.772780 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 1 02:39:50.772803 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 02:39:50.772830 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 1 02:39:50.772853 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 1 02:39:50.774935 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 1 02:39:50.774974 systemd[1]: Mounting media.mount - External Media Directory...
Nov 1 02:39:50.775007 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 02:39:50.775029 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 1 02:39:50.775050 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 1 02:39:50.775071 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 1 02:39:50.775093 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 1 02:39:50.775115 systemd[1]: Reached target machines.target - Containers.
Nov 1 02:39:50.775150 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 1 02:39:50.775174 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 02:39:50.775201 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 1 02:39:50.775230 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 1 02:39:50.775252 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 02:39:50.775273 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 1 02:39:50.775295 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 02:39:50.775316 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 1 02:39:50.775343 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 02:39:50.775378 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 1 02:39:50.775400 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 1 02:39:50.775421 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 1 02:39:50.775442 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 1 02:39:50.775463 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 1 02:39:50.775483 kernel: fuse: init (API version 7.39)
Nov 1 02:39:50.775503 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 1 02:39:50.775524 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 1 02:39:50.775551 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 1 02:39:50.775586 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 1 02:39:50.775608 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 1 02:39:50.775635 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 1 02:39:50.775657 systemd[1]: Stopped verity-setup.service.
Nov 1 02:39:50.775678 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 02:39:50.775698 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 1 02:39:50.775731 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 1 02:39:50.775753 systemd[1]: Mounted media.mount - External Media Directory.
Nov 1 02:39:50.775787 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 1 02:39:50.775809 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 1 02:39:50.775830 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 1 02:39:50.775850 kernel: loop: module loaded
Nov 1 02:39:50.776903 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 02:39:50.776959 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 1 02:39:50.777020 systemd-journald[1145]: Collecting audit messages is disabled.
Nov 1 02:39:50.777074 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 1 02:39:50.777099 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 02:39:50.777121 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 02:39:50.777156 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 02:39:50.777179 systemd-journald[1145]: Journal started
Nov 1 02:39:50.777230 systemd-journald[1145]: Runtime Journal (/run/log/journal/0d040f76cb694f0d877256ae81bcdc1c) is 4.7M, max 38.0M, 33.2M free.
Nov 1 02:39:50.392378 systemd[1]: Queued start job for default target multi-user.target.
Nov 1 02:39:50.411699 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 1 02:39:50.779919 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 02:39:50.412453 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 1 02:39:50.791108 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 1 02:39:50.786007 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 1 02:39:50.786254 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 1 02:39:50.787364 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 02:39:50.787562 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 02:39:50.789564 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 1 02:39:50.790639 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 1 02:39:50.792940 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 1 02:39:50.800611 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 1 02:39:50.820940 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 1 02:39:50.830922 kernel: ACPI: bus type drm_connector registered
Nov 1 02:39:50.831097 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 1 02:39:50.842687 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 1 02:39:50.843553 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 1 02:39:50.843607 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 02:39:50.847332 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 1 02:39:50.852109 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 1 02:39:50.856328 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 1 02:39:50.857288 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 02:39:50.860171 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 1 02:39:50.866082 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 1 02:39:50.866956 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 02:39:50.874100 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 1 02:39:50.875345 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 1 02:39:50.879039 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 1 02:39:50.884109 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 1 02:39:50.899160 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 1 02:39:50.905400 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 02:39:50.905675 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 1 02:39:50.907555 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 1 02:39:50.910123 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 1 02:39:50.913317 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 1 02:39:50.914612 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 1 02:39:50.931581 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 1 02:39:50.935219 systemd-journald[1145]: Time spent on flushing to /var/log/journal/0d040f76cb694f0d877256ae81bcdc1c is 137.719ms for 1143 entries.
Nov 1 02:39:50.935219 systemd-journald[1145]: System Journal (/var/log/journal/0d040f76cb694f0d877256ae81bcdc1c) is 8.0M, max 584.8M, 576.8M free.
Nov 1 02:39:51.106260 systemd-journald[1145]: Received client request to flush runtime journal.
Nov 1 02:39:51.111272 kernel: loop0: detected capacity change from 0 to 8
Nov 1 02:39:51.111326 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 1 02:39:51.111355 kernel: loop1: detected capacity change from 0 to 219144
Nov 1 02:39:50.941528 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 1 02:39:51.026373 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 1 02:39:51.054904 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 1 02:39:51.056585 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 1 02:39:51.094661 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Nov 1 02:39:51.094685 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Nov 1 02:39:51.118994 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 1 02:39:51.125010 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 02:39:51.135365 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 1 02:39:51.140078 kernel: loop2: detected capacity change from 0 to 140768
Nov 1 02:39:51.222473 kernel: loop3: detected capacity change from 0 to 142488
Nov 1 02:39:51.283167 kernel: loop4: detected capacity change from 0 to 8
Nov 1 02:39:51.285835 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 1 02:39:51.290180 kernel: loop5: detected capacity change from 0 to 219144
Nov 1 02:39:51.298207 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 1 02:39:51.331990 kernel: loop6: detected capacity change from 0 to 140768
Nov 1 02:39:51.348826 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Nov 1 02:39:51.348862 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Nov 1 02:39:51.359310 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 02:39:51.367541 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 02:39:51.378933 kernel: loop7: detected capacity change from 0 to 142488
Nov 1 02:39:51.384163 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 1 02:39:51.405376 (sd-merge)[1207]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Nov 1 02:39:51.406894 (sd-merge)[1207]: Merged extensions into '/usr'.
Nov 1 02:39:51.421406 systemd[1]: Reloading requested from client PID 1184 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 1 02:39:51.421446 systemd[1]: Reloading...
Nov 1 02:39:51.597927 zram_generator::config[1239]: No configuration found.
Nov 1 02:39:51.725608 ldconfig[1179]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 1 02:39:51.867308 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 02:39:51.935021 systemd[1]: Reloading finished in 512 ms.
Nov 1 02:39:51.971495 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 1 02:39:51.973321 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 1 02:39:51.991674 systemd[1]: Starting ensure-sysext.service...
Nov 1 02:39:51.995641 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 1 02:39:51.997399 udevadm[1213]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Nov 1 02:39:52.012554 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 1 02:39:52.019122 systemd[1]: Reloading requested from client PID 1296 ('systemctl') (unit ensure-sysext.service)...
Nov 1 02:39:52.019146 systemd[1]: Reloading...
Nov 1 02:39:52.043315 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 1 02:39:52.044506 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 1 02:39:52.046175 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 1 02:39:52.046722 systemd-tmpfiles[1297]: ACLs are not supported, ignoring.
Nov 1 02:39:52.046985 systemd-tmpfiles[1297]: ACLs are not supported, ignoring.
Nov 1 02:39:52.052638 systemd-tmpfiles[1297]: Detected autofs mount point /boot during canonicalization of boot.
Nov 1 02:39:52.052811 systemd-tmpfiles[1297]: Skipping /boot
Nov 1 02:39:52.070650 systemd-tmpfiles[1297]: Detected autofs mount point /boot during canonicalization of boot.
Nov 1 02:39:52.070836 systemd-tmpfiles[1297]: Skipping /boot
Nov 1 02:39:52.120777 zram_generator::config[1323]: No configuration found.
Nov 1 02:39:52.291264 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 02:39:52.359057 systemd[1]: Reloading finished in 339 ms.
Nov 1 02:39:52.408710 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 02:39:52.418168 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 1 02:39:52.423094 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 1 02:39:52.433545 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 1 02:39:52.439134 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 1 02:39:52.449109 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 02:39:52.460147 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 1 02:39:52.471355 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 02:39:52.472116 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 02:39:52.482180 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 02:39:52.488204 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 02:39:52.492273 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 02:39:52.494072 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 02:39:52.495062 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 02:39:52.497975 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 1 02:39:52.499577 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 02:39:52.500679 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 02:39:52.513040 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 02:39:52.513325 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 02:39:52.523146 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 02:39:52.524727 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 02:39:52.530499 systemd-udevd[1391]: Using default interface naming scheme 'v255'.
Nov 1 02:39:52.535212 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 1 02:39:52.547030 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 1 02:39:52.548316 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 02:39:52.550328 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 02:39:52.551666 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 02:39:52.567964 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 02:39:52.568225 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 02:39:52.573327 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 02:39:52.573763 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 02:39:52.583198 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 1 02:39:52.594141 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 02:39:52.596117 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 02:39:52.596205 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 02:39:52.596265 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 02:39:52.596710 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 02:39:52.599938 systemd[1]: Finished ensure-sysext.service.
Nov 1 02:39:52.601041 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 1 02:39:52.603474 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 02:39:52.603689 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 02:39:52.609959 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 1 02:39:52.648130 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 1 02:39:52.660927 augenrules[1434]: No rules
Nov 1 02:39:52.663124 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 1 02:39:52.665015 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 1 02:39:52.665659 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 1 02:39:52.667392 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 02:39:52.668963 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 1 02:39:52.680471 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 1 02:39:52.693470 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 02:39:52.695463 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 02:39:52.703657 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 1 02:39:52.732075 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 1 02:39:52.779864 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 1 02:39:52.814947 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1427)
Nov 1 02:39:52.891913 kernel: mousedev: PS/2 mouse device common for all mice
Nov 1 02:39:52.966909 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 1 02:39:52.990948 kernel: ACPI: button: Power Button [PWRF]
Nov 1 02:39:53.004467 systemd-networkd[1431]: lo: Link UP
Nov 1 02:39:53.004481 systemd-networkd[1431]: lo: Gained carrier
Nov 1 02:39:53.017174 systemd-networkd[1431]: Enumeration completed
Nov 1 02:39:53.017815 systemd-networkd[1431]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 02:39:53.017829 systemd-networkd[1431]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 02:39:53.018039 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 1 02:39:53.024649 systemd-networkd[1431]: eth0: Link UP
Nov 1 02:39:53.024663 systemd-networkd[1431]: eth0: Gained carrier
Nov 1 02:39:53.024681 systemd-networkd[1431]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 02:39:53.029127 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 1 02:39:53.038420 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 1 02:39:53.039902 systemd[1]: Reached target time-set.target - System Time Set.
Nov 1 02:39:53.045118 systemd-resolved[1388]: Positive Trust Anchors:
Nov 1 02:39:53.045142 systemd-resolved[1388]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 02:39:53.045187 systemd-resolved[1388]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 1 02:39:53.048142 systemd-networkd[1431]: eth0: DHCPv4 address 10.230.26.18/30, gateway 10.230.26.17 acquired from 10.230.26.17
Nov 1 02:39:53.056036 systemd-timesyncd[1433]: Network configuration changed, trying to establish connection.
Nov 1 02:39:53.057060 systemd-resolved[1388]: Using system hostname 'srv-liqqm.gb1.brightbox.com'.
Nov 1 02:39:53.060759 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 1 02:39:53.061696 systemd[1]: Reached target network.target - Network.
Nov 1 02:39:53.062368 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 1 02:39:53.073427 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 1 02:39:53.084132 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 1 02:39:53.100607 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 1 02:39:53.109894 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 1 02:39:53.110263 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Nov 1 02:39:53.113897 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 1 02:39:53.114216 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 1 02:39:53.158101 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 02:39:53.341931 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 1 02:39:53.405357 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 02:39:53.419285 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 1 02:39:54.312802 systemd-timesyncd[1433]: Contacted time server 109.74.197.50:123 (0.flatcar.pool.ntp.org).
Nov 1 02:39:54.312894 systemd-timesyncd[1433]: Initial clock synchronization to Sat 2025-11-01 02:39:54.312546 UTC.
Nov 1 02:39:54.313034 systemd-resolved[1388]: Clock change detected. Flushing caches.
Nov 1 02:39:54.328514 lvm[1472]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 1 02:39:54.364191 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 1 02:39:54.365623 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 1 02:39:54.366544 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 1 02:39:54.367659 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 1 02:39:54.368603 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 1 02:39:54.369778 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 1 02:39:54.370746 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 1 02:39:54.371614 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 1 02:39:54.372535 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 1 02:39:54.372679 systemd[1]: Reached target paths.target - Path Units.
Nov 1 02:39:54.373469 systemd[1]: Reached target timers.target - Timer Units.
Nov 1 02:39:54.375644 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 1 02:39:54.378285 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 1 02:39:54.383711 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 1 02:39:54.386242 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 1 02:39:54.387657 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 1 02:39:54.388514 systemd[1]: Reached target sockets.target - Socket Units.
Nov 1 02:39:54.389166 systemd[1]: Reached target basic.target - Basic System.
Nov 1 02:39:54.389975 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 1 02:39:54.390022 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 1 02:39:54.393635 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 1 02:39:54.400028 lvm[1476]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 1 02:39:54.405767 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 1 02:39:54.410516 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 1 02:39:54.417635 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 1 02:39:54.428770 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 1 02:39:54.431176 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 1 02:39:54.440914 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 1 02:39:54.448000 jq[1480]: false
Nov 1 02:39:54.454585 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 1 02:39:54.461680 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 1 02:39:54.467665 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 1 02:39:54.477672 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 1 02:39:54.486754 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 1 02:39:54.487752 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 1 02:39:54.501476 extend-filesystems[1481]: Found loop4
Nov 1 02:39:54.501476 extend-filesystems[1481]: Found loop5
Nov 1 02:39:54.501476 extend-filesystems[1481]: Found loop6
Nov 1 02:39:54.501476 extend-filesystems[1481]: Found loop7
Nov 1 02:39:54.501476 extend-filesystems[1481]: Found vda
Nov 1 02:39:54.501476 extend-filesystems[1481]: Found vda1
Nov 1 02:39:54.501476 extend-filesystems[1481]: Found vda2
Nov 1 02:39:54.497679 systemd[1]: Starting update-engine.service - Update Engine...
Nov 1 02:39:54.557506 extend-filesystems[1481]: Found vda3
Nov 1 02:39:54.557506 extend-filesystems[1481]: Found usr
Nov 1 02:39:54.557506 extend-filesystems[1481]: Found vda4
Nov 1 02:39:54.557506 extend-filesystems[1481]: Found vda6
Nov 1 02:39:54.557506 extend-filesystems[1481]: Found vda7
Nov 1 02:39:54.557506 extend-filesystems[1481]: Found vda9
Nov 1 02:39:54.557506 extend-filesystems[1481]: Checking size of /dev/vda9
Nov 1 02:39:54.510208 dbus-daemon[1479]: [system] SELinux support is enabled
Nov 1 02:39:54.510336 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 1 02:39:54.621091 update_engine[1494]: I20251101 02:39:54.543097 1494 main.cc:92] Flatcar Update Engine starting
Nov 1 02:39:54.621091 update_engine[1494]: I20251101 02:39:54.545040 1494 update_check_scheduler.cc:74] Next update check in 10m23s
Nov 1 02:39:54.625904 extend-filesystems[1481]: Resized partition /dev/vda9
Nov 1 02:39:54.651718 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Nov 1 02:39:54.524806 dbus-daemon[1479]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1431 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Nov 1 02:39:54.513092 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 1 02:39:54.652512 extend-filesystems[1514]: resize2fs 1.47.1 (20-May-2024)
Nov 1 02:39:54.538219 dbus-daemon[1479]: [system] Successfully activated service 'org.freedesktop.systemd1'
Nov 1 02:39:54.521505 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 1 02:39:54.527064 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 1 02:39:54.528544 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 1 02:39:54.556999 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 1 02:39:54.665949 jq[1497]: true
Nov 1 02:39:54.558207 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 1 02:39:54.666296 tar[1505]: linux-amd64/LICENSE
Nov 1 02:39:54.666296 tar[1505]: linux-amd64/helm
Nov 1 02:39:54.568010 systemd[1]: Started update-engine.service - Update Engine.
Nov 1 02:39:54.578567 (ntainerd)[1506]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 1 02:39:54.580601 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 1 02:39:54.699007 jq[1508]: true
Nov 1 02:39:54.582883 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 1 02:39:54.582934 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 1 02:39:54.594659 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Nov 1 02:39:54.595413 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 1 02:39:54.595493 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 1 02:39:54.601477 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 1 02:39:54.617211 systemd[1]: motdgen.service: Deactivated successfully.
Nov 1 02:39:54.618527 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 1 02:39:54.789667 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1425)
Nov 1 02:39:54.891092 systemd-logind[1490]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 1 02:39:54.891142 systemd-logind[1490]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 1 02:39:54.891574 systemd-logind[1490]: New seat seat0.
Nov 1 02:39:54.893999 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 1 02:39:54.925736 bash[1541]: Updated "/home/core/.ssh/authorized_keys"
Nov 1 02:39:54.926734 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 1 02:39:54.948989 systemd[1]: Starting sshkeys.service...
Nov 1 02:39:54.987130 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Nov 1 02:39:55.002197 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 1 02:39:55.011465 extend-filesystems[1514]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 1 02:39:55.011465 extend-filesystems[1514]: old_desc_blocks = 1, new_desc_blocks = 8
Nov 1 02:39:55.011465 extend-filesystems[1514]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Nov 1 02:39:55.008854 locksmithd[1515]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 1 02:39:55.025751 extend-filesystems[1481]: Resized filesystem in /dev/vda9
Nov 1 02:39:55.009850 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 1 02:39:55.012841 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 1 02:39:55.013114 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 1 02:39:55.040604 dbus-daemon[1479]: [system] Successfully activated service 'org.freedesktop.hostname1'
Nov 1 02:39:55.041007 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Nov 1 02:39:55.048553 dbus-daemon[1479]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1512 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Nov 1 02:39:55.062938 systemd[1]: Starting polkit.service - Authorization Manager...
Nov 1 02:39:55.090713 polkitd[1555]: Started polkitd version 121
Nov 1 02:39:55.102957 polkitd[1555]: Loading rules from directory /etc/polkit-1/rules.d
Nov 1 02:39:55.103069 polkitd[1555]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 1 02:39:55.107489 polkitd[1555]: Finished loading, compiling and executing 2 rules
Nov 1 02:39:55.111763 dbus-daemon[1479]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Nov 1 02:39:55.113008 systemd[1]: Started polkit.service - Authorization Manager.
Nov 1 02:39:55.115492 polkitd[1555]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 1 02:39:55.132466 containerd[1506]: time="2025-11-01T02:39:55.128599185Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 1 02:39:55.143685 systemd-hostnamed[1512]: Hostname set to (static)
Nov 1 02:39:55.199562 containerd[1506]: time="2025-11-01T02:39:55.199495224Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 1 02:39:55.202946 containerd[1506]: time="2025-11-01T02:39:55.202898606Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 1 02:39:55.203066 containerd[1506]: time="2025-11-01T02:39:55.203041625Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 1 02:39:55.203160 containerd[1506]: time="2025-11-01T02:39:55.203136256Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 1 02:39:55.203594 containerd[1506]: time="2025-11-01T02:39:55.203566319Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 1 02:39:55.203696 containerd[1506]: time="2025-11-01T02:39:55.203671285Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 1 02:39:55.203882 containerd[1506]: time="2025-11-01T02:39:55.203853661Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 02:39:55.203972 containerd[1506]: time="2025-11-01T02:39:55.203950010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 1 02:39:55.204294 containerd[1506]: time="2025-11-01T02:39:55.204263519Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 02:39:55.204419 containerd[1506]: time="2025-11-01T02:39:55.204379814Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 1 02:39:55.204540 containerd[1506]: time="2025-11-01T02:39:55.204513927Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 02:39:55.204625 containerd[1506]: time="2025-11-01T02:39:55.204603139Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 1 02:39:55.204822 containerd[1506]: time="2025-11-01T02:39:55.204796491Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 1 02:39:55.205315 containerd[1506]: time="2025-11-01T02:39:55.205288548Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 1 02:39:55.205683 containerd[1506]: time="2025-11-01T02:39:55.205560308Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 02:39:55.205683 containerd[1506]: time="2025-11-01T02:39:55.205590739Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 1 02:39:55.206306 containerd[1506]: time="2025-11-01T02:39:55.205984819Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 1 02:39:55.206306 containerd[1506]: time="2025-11-01T02:39:55.206085617Z" level=info msg="metadata content store policy set" policy=shared
Nov 1 02:39:55.211479 containerd[1506]: time="2025-11-01T02:39:55.209510567Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 1 02:39:55.211479 containerd[1506]: time="2025-11-01T02:39:55.209598500Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 1 02:39:55.211479 containerd[1506]: time="2025-11-01T02:39:55.209629815Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 1 02:39:55.211479 containerd[1506]: time="2025-11-01T02:39:55.209655789Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 1 02:39:55.211479 containerd[1506]: time="2025-11-01T02:39:55.209687530Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 1 02:39:55.211479 containerd[1506]: time="2025-11-01T02:39:55.209891386Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 1 02:39:55.211479 containerd[1506]: time="2025-11-01T02:39:55.210246787Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 1 02:39:55.211479 containerd[1506]: time="2025-11-01T02:39:55.210432382Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 1 02:39:55.211479 containerd[1506]: time="2025-11-01T02:39:55.210480768Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 1 02:39:55.211479 containerd[1506]: time="2025-11-01T02:39:55.210503782Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 1 02:39:55.211479 containerd[1506]: time="2025-11-01T02:39:55.210532415Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 1 02:39:55.211479 containerd[1506]: time="2025-11-01T02:39:55.210561507Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 1 02:39:55.211479 containerd[1506]: time="2025-11-01T02:39:55.210589769Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 1 02:39:55.211479 containerd[1506]: time="2025-11-01T02:39:55.210613291Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 1 02:39:55.211981 containerd[1506]: time="2025-11-01T02:39:55.210634814Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 1 02:39:55.211981 containerd[1506]: time="2025-11-01T02:39:55.210663533Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 1 02:39:55.211981 containerd[1506]: time="2025-11-01T02:39:55.210692576Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 1 02:39:55.211981 containerd[1506]: time="2025-11-01T02:39:55.210713575Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 1 02:39:55.211981 containerd[1506]: time="2025-11-01T02:39:55.210752843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 1 02:39:55.211981 containerd[1506]: time="2025-11-01T02:39:55.210777517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 1 02:39:55.211981 containerd[1506]: time="2025-11-01T02:39:55.210798444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 1 02:39:55.211981 containerd[1506]: time="2025-11-01T02:39:55.210819286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 1 02:39:55.211981 containerd[1506]: time="2025-11-01T02:39:55.210838463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 1 02:39:55.211981 containerd[1506]: time="2025-11-01T02:39:55.210858409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 1 02:39:55.211981 containerd[1506]: time="2025-11-01T02:39:55.210877003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 1 02:39:55.211981 containerd[1506]: time="2025-11-01T02:39:55.210895834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 1 02:39:55.211981 containerd[1506]: time="2025-11-01T02:39:55.210916367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 1 02:39:55.211981 containerd[1506]: time="2025-11-01T02:39:55.210939989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 1 02:39:55.212410 containerd[1506]: time="2025-11-01T02:39:55.210960722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 1 02:39:55.212410 containerd[1506]: time="2025-11-01T02:39:55.210979518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 1 02:39:55.212410 containerd[1506]: time="2025-11-01T02:39:55.210999725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 1 02:39:55.212410 containerd[1506]: time="2025-11-01T02:39:55.211029911Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 1 02:39:55.212410 containerd[1506]: time="2025-11-01T02:39:55.211075806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 1 02:39:55.212410 containerd[1506]: time="2025-11-01T02:39:55.211099254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 1 02:39:55.212410 containerd[1506]: time="2025-11-01T02:39:55.211117309Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 1 02:39:55.212410 containerd[1506]: time="2025-11-01T02:39:55.211202054Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 1 02:39:55.212410 containerd[1506]: time="2025-11-01T02:39:55.211274693Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 1 02:39:55.212410 containerd[1506]: time="2025-11-01T02:39:55.211298191Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 1 02:39:55.212410 containerd[1506]: time="2025-11-01T02:39:55.211318572Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Nov 1 02:39:55.212410 containerd[1506]: time="2025-11-01T02:39:55.211334000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 1 02:39:55.212410 containerd[1506]: time="2025-11-01T02:39:55.211353324Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Nov 1 02:39:55.212410 containerd[1506]: time="2025-11-01T02:39:55.211375012Z" level=info msg="NRI interface is disabled by configuration."
Nov 1 02:39:55.212860 containerd[1506]: time="2025-11-01T02:39:55.211393774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 1 02:39:55.213578 containerd[1506]: time="2025-11-01T02:39:55.213486946Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 1 02:39:55.213936 containerd[1506]: time="2025-11-01T02:39:55.213909686Z" level=info msg="Connect containerd service"
Nov 1 02:39:55.214094 containerd[1506]: time="2025-11-01T02:39:55.214066072Z" level=info msg="using legacy CRI server"
Nov 1 02:39:55.214199 containerd[1506]: time="2025-11-01T02:39:55.214175125Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 1 02:39:55.214664 containerd[1506]: time="2025-11-01T02:39:55.214631861Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Nov 1 02:39:55.216152 containerd[1506]: time="2025-11-01T02:39:55.216118351Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 1 02:39:55.216501 containerd[1506]: time="2025-11-01T02:39:55.216394015Z" level=info msg="Start subscribing containerd event"
Nov 1 02:39:55.216573 containerd[1506]: time="2025-11-01T02:39:55.216527748Z" level=info msg="Start recovering state"
Nov 1 02:39:55.216679 containerd[1506]: time="2025-11-01T02:39:55.216650984Z" level=info msg="Start event monitor"
Nov 1 02:39:55.216730 containerd[1506]: time="2025-11-01T02:39:55.216691159Z" level=info msg="Start snapshots syncer"
Nov 1 02:39:55.216730 containerd[1506]: time="2025-11-01T02:39:55.216713564Z" level=info msg="Start cni network conf syncer for default"
Nov 1 02:39:55.216795 containerd[1506]: time="2025-11-01T02:39:55.216732161Z" level=info msg="Start streaming server"
Nov 1 02:39:55.217613 containerd[1506]: time="2025-11-01T02:39:55.217585238Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 1 02:39:55.217791 containerd[1506]: time="2025-11-01T02:39:55.217765002Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 1 02:39:55.218009 containerd[1506]: time="2025-11-01T02:39:55.217983904Z" level=info msg="containerd successfully booted in 0.094300s"
Nov 1 02:39:55.218101 systemd[1]: Started containerd.service - containerd container runtime.
Nov 1 02:39:55.235396 sshd_keygen[1518]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 1 02:39:55.268045 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 1 02:39:55.277099 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 1 02:39:55.288036 systemd[1]: Started sshd@0-10.230.26.18:22-147.75.109.163:36936.service - OpenSSH per-connection server daemon (147.75.109.163:36936).
Nov 1 02:39:55.314282 systemd[1]: issuegen.service: Deactivated successfully.
Nov 1 02:39:55.316645 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 1 02:39:55.330929 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 1 02:39:55.359711 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 1 02:39:55.369135 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 1 02:39:55.380908 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 1 02:39:55.383034 systemd[1]: Reached target getty.target - Login Prompts.
Nov 1 02:39:55.495843 systemd-networkd[1431]: eth0: Gained IPv6LL
Nov 1 02:39:55.503834 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 1 02:39:55.508906 systemd[1]: Reached target network-online.target - Network is Online.
Nov 1 02:39:55.518824 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 02:39:55.531898 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 1 02:39:55.558351 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 1 02:39:55.706550 tar[1505]: linux-amd64/README.md
Nov 1 02:39:55.722846 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 1 02:39:56.205837 sshd[1577]: Accepted publickey for core from 147.75.109.163 port 36936 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 02:39:56.209890 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 02:39:56.231020 systemd-logind[1490]: New session 1 of user core.
Nov 1 02:39:56.233317 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 1 02:39:56.240767 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 1 02:39:56.269684 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 1 02:39:56.283946 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 1 02:39:56.306088 (systemd)[1604]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 1 02:39:56.450217 systemd[1604]: Queued start job for default target default.target.
Nov 1 02:39:56.458541 systemd[1604]: Created slice app.slice - User Application Slice.
Nov 1 02:39:56.459176 systemd[1604]: Reached target paths.target - Paths.
Nov 1 02:39:56.459299 systemd[1604]: Reached target timers.target - Timers.
Nov 1 02:39:56.462609 systemd[1604]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 1 02:39:56.480572 systemd[1604]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 1 02:39:56.480773 systemd[1604]: Reached target sockets.target - Sockets.
Nov 1 02:39:56.480799 systemd[1604]: Reached target basic.target - Basic System.
Nov 1 02:39:56.480867 systemd[1604]: Reached target default.target - Main User Target.
Nov 1 02:39:56.480933 systemd[1604]: Startup finished in 163ms.
Nov 1 02:39:56.481144 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 1 02:39:56.488936 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 1 02:39:56.530533 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 02:39:56.546003 (kubelet)[1618]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 02:39:57.087099 kubelet[1618]: E1101 02:39:57.087026 1618 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 02:39:57.090327 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 02:39:57.090761 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 02:39:57.091614 systemd[1]: kubelet.service: Consumed 1.016s CPU time.
Nov 1 02:39:57.139957 systemd[1]: Started sshd@1-10.230.26.18:22-147.75.109.163:36948.service - OpenSSH per-connection server daemon (147.75.109.163:36948).
Nov 1 02:39:58.031127 sshd[1627]: Accepted publickey for core from 147.75.109.163 port 36948 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 02:39:58.033227 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 02:39:58.039848 systemd-logind[1490]: New session 2 of user core.
Nov 1 02:39:58.050901 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 1 02:39:58.053022 systemd-networkd[1431]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8684:24:19ff:fee6:1a12/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8684:24:19ff:fee6:1a12/64 assigned by NDisc.
Nov 1 02:39:58.053034 systemd-networkd[1431]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Nov 1 02:39:58.654868 sshd[1627]: pam_unix(sshd:session): session closed for user core
Nov 1 02:39:58.658610 systemd-logind[1490]: Session 2 logged out. Waiting for processes to exit.
Nov 1 02:39:58.660303 systemd[1]: sshd@1-10.230.26.18:22-147.75.109.163:36948.service: Deactivated successfully.
Nov 1 02:39:58.663050 systemd[1]: session-2.scope: Deactivated successfully.
Nov 1 02:39:58.665698 systemd-logind[1490]: Removed session 2.
Nov 1 02:39:58.818954 systemd[1]: Started sshd@2-10.230.26.18:22-147.75.109.163:36954.service - OpenSSH per-connection server daemon (147.75.109.163:36954).
Nov 1 02:39:59.727873 sshd[1637]: Accepted publickey for core from 147.75.109.163 port 36954 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 02:39:59.730070 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 02:39:59.736797 systemd-logind[1490]: New session 3 of user core.
Nov 1 02:39:59.747305 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 1 02:40:00.360389 sshd[1637]: pam_unix(sshd:session): session closed for user core
Nov 1 02:40:00.366468 systemd[1]: sshd@2-10.230.26.18:22-147.75.109.163:36954.service: Deactivated successfully.
Nov 1 02:40:00.369352 systemd[1]: session-3.scope: Deactivated successfully.
Nov 1 02:40:00.370432 systemd-logind[1490]: Session 3 logged out. Waiting for processes to exit.
Nov 1 02:40:00.372082 systemd-logind[1490]: Removed session 3.
Nov 1 02:40:00.440141 login[1585]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Nov 1 02:40:00.444889 login[1584]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Nov 1 02:40:00.447602 systemd-logind[1490]: New session 4 of user core.
Nov 1 02:40:00.460801 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 1 02:40:00.465220 systemd-logind[1490]: New session 5 of user core.
Nov 1 02:40:00.470163 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 1 02:40:01.542125 coreos-metadata[1478]: Nov 01 02:40:01.541 WARN failed to locate config-drive, using the metadata service API instead
Nov 1 02:40:01.568880 coreos-metadata[1478]: Nov 01 02:40:01.568 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Nov 1 02:40:01.577368 coreos-metadata[1478]: Nov 01 02:40:01.577 INFO Fetch failed with 404: resource not found
Nov 1 02:40:01.577368 coreos-metadata[1478]: Nov 01 02:40:01.577 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Nov 1 02:40:01.578175 coreos-metadata[1478]: Nov 01 02:40:01.578 INFO Fetch successful
Nov 1 02:40:01.578381 coreos-metadata[1478]: Nov 01 02:40:01.578 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Nov 1 02:40:01.593692 coreos-metadata[1478]: Nov 01 02:40:01.593 INFO Fetch successful
Nov 1 02:40:01.594239 coreos-metadata[1478]: Nov 01 02:40:01.594 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Nov 1 02:40:01.608513 coreos-metadata[1478]: Nov 01 02:40:01.608 INFO Fetch successful
Nov 1 02:40:01.609044 coreos-metadata[1478]: Nov 01 02:40:01.609 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Nov 1 02:40:01.634172 coreos-metadata[1478]: Nov 01 02:40:01.634 INFO Fetch successful
Nov 1 02:40:01.634528 coreos-metadata[1478]: Nov 01 02:40:01.634 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Nov 1 02:40:01.668366 coreos-metadata[1478]: Nov 01 02:40:01.668 INFO Fetch successful
Nov 1 02:40:01.707752 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 1 02:40:01.708760 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 1 02:40:02.152009 coreos-metadata[1551]: Nov 01 02:40:02.151 WARN failed to locate config-drive, using the metadata service API instead
Nov 1 02:40:02.174906 coreos-metadata[1551]: Nov 01 02:40:02.174 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Nov 1 02:40:02.199045 coreos-metadata[1551]: Nov 01 02:40:02.198 INFO Fetch successful
Nov 1 02:40:02.199208 coreos-metadata[1551]: Nov 01 02:40:02.199 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Nov 1 02:40:02.228936 coreos-metadata[1551]: Nov 01 02:40:02.228 INFO Fetch successful
Nov 1 02:40:02.238731 unknown[1551]: wrote ssh authorized keys file for user: core
Nov 1 02:40:02.256934 update-ssh-keys[1678]: Updated "/home/core/.ssh/authorized_keys"
Nov 1 02:40:02.257639 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Nov 1 02:40:02.260947 systemd[1]: Finished sshkeys.service.
Nov 1 02:40:02.262350 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 1 02:40:02.262798 systemd[1]: Startup finished in 1.556s (kernel) + 13.824s (initrd) + 11.835s (userspace) = 27.216s.
Nov 1 02:40:07.092155 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 1 02:40:07.102790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 02:40:07.369370 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 02:40:07.383912 (kubelet)[1690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 02:40:07.451214 kubelet[1690]: E1101 02:40:07.451125 1690 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 02:40:07.454940 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 02:40:07.455205 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 02:40:10.522376 systemd[1]: Started sshd@3-10.230.26.18:22-147.75.109.163:60418.service - OpenSSH per-connection server daemon (147.75.109.163:60418).
Nov 1 02:40:11.433661 sshd[1698]: Accepted publickey for core from 147.75.109.163 port 60418 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 02:40:11.436651 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 02:40:11.444884 systemd-logind[1490]: New session 6 of user core.
Nov 1 02:40:11.454671 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 1 02:40:12.061785 sshd[1698]: pam_unix(sshd:session): session closed for user core
Nov 1 02:40:12.066147 systemd[1]: sshd@3-10.230.26.18:22-147.75.109.163:60418.service: Deactivated successfully.
Nov 1 02:40:12.068971 systemd[1]: session-6.scope: Deactivated successfully.
Nov 1 02:40:12.071230 systemd-logind[1490]: Session 6 logged out. Waiting for processes to exit.
Nov 1 02:40:12.073043 systemd-logind[1490]: Removed session 6.
Nov 1 02:40:12.237007 systemd[1]: Started sshd@4-10.230.26.18:22-147.75.109.163:60424.service - OpenSSH per-connection server daemon (147.75.109.163:60424).
Nov 1 02:40:13.130510 sshd[1705]: Accepted publickey for core from 147.75.109.163 port 60424 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 02:40:13.132595 sshd[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 02:40:13.139131 systemd-logind[1490]: New session 7 of user core.
Nov 1 02:40:13.150723 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 1 02:40:13.751436 sshd[1705]: pam_unix(sshd:session): session closed for user core
Nov 1 02:40:13.754885 systemd[1]: sshd@4-10.230.26.18:22-147.75.109.163:60424.service: Deactivated successfully.
Nov 1 02:40:13.756845 systemd[1]: session-7.scope: Deactivated successfully.
Nov 1 02:40:13.758854 systemd-logind[1490]: Session 7 logged out. Waiting for processes to exit.
Nov 1 02:40:13.760416 systemd-logind[1490]: Removed session 7.
Nov 1 02:40:13.919173 systemd[1]: Started sshd@5-10.230.26.18:22-147.75.109.163:60430.service - OpenSSH per-connection server daemon (147.75.109.163:60430).
Nov 1 02:40:14.813924 sshd[1712]: Accepted publickey for core from 147.75.109.163 port 60430 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 02:40:14.815982 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 02:40:14.824187 systemd-logind[1490]: New session 8 of user core.
Nov 1 02:40:14.832902 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 1 02:40:15.440863 sshd[1712]: pam_unix(sshd:session): session closed for user core
Nov 1 02:40:15.445484 systemd-logind[1490]: Session 8 logged out. Waiting for processes to exit.
Nov 1 02:40:15.446386 systemd[1]: sshd@5-10.230.26.18:22-147.75.109.163:60430.service: Deactivated successfully.
Nov 1 02:40:15.448610 systemd[1]: session-8.scope: Deactivated successfully.
Nov 1 02:40:15.449686 systemd-logind[1490]: Removed session 8.
Nov 1 02:40:15.597359 systemd[1]: Started sshd@6-10.230.26.18:22-147.75.109.163:60446.service - OpenSSH per-connection server daemon (147.75.109.163:60446).
Nov 1 02:40:16.508328 sshd[1719]: Accepted publickey for core from 147.75.109.163 port 60446 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 02:40:16.510396 sshd[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 02:40:16.517701 systemd-logind[1490]: New session 9 of user core.
Nov 1 02:40:16.528827 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 1 02:40:17.003642 sudo[1722]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 1 02:40:17.004144 sudo[1722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 02:40:17.017550 sudo[1722]: pam_unix(sudo:session): session closed for user root
Nov 1 02:40:17.163275 sshd[1719]: pam_unix(sshd:session): session closed for user core
Nov 1 02:40:17.168822 systemd-logind[1490]: Session 9 logged out. Waiting for processes to exit.
Nov 1 02:40:17.169812 systemd[1]: sshd@6-10.230.26.18:22-147.75.109.163:60446.service: Deactivated successfully.
Nov 1 02:40:17.172009 systemd[1]: session-9.scope: Deactivated successfully.
Nov 1 02:40:17.173359 systemd-logind[1490]: Removed session 9.
Nov 1 02:40:17.318519 systemd[1]: Started sshd@7-10.230.26.18:22-147.75.109.163:60460.service - OpenSSH per-connection server daemon (147.75.109.163:60460).
Nov 1 02:40:17.591007 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 1 02:40:17.610141 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 02:40:17.755755 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 02:40:17.762409 (kubelet)[1737]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 02:40:17.867916 kubelet[1737]: E1101 02:40:17.867732 1737 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 02:40:17.870339 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 02:40:17.870594 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 02:40:18.233183 sshd[1727]: Accepted publickey for core from 147.75.109.163 port 60460 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 02:40:18.235404 sshd[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 02:40:18.243342 systemd-logind[1490]: New session 10 of user core.
Nov 1 02:40:18.251242 systemd[1]: Started session-10.scope - Session 10 of User core.
Nov 1 02:40:18.718203 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 1 02:40:18.718699 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 02:40:18.724130 sudo[1746]: pam_unix(sudo:session): session closed for user root
Nov 1 02:40:18.732290 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Nov 1 02:40:18.732907 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 02:40:18.749809 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Nov 1 02:40:18.768580 auditctl[1749]: No rules
Nov 1 02:40:18.769211 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 1 02:40:18.769549 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Nov 1 02:40:18.776927 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 1 02:40:18.824306 augenrules[1767]: No rules
Nov 1 02:40:18.825818 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 1 02:40:18.827601 sudo[1745]: pam_unix(sudo:session): session closed for user root
Nov 1 02:40:18.975809 sshd[1727]: pam_unix(sshd:session): session closed for user core
Nov 1 02:40:18.979686 systemd-logind[1490]: Session 10 logged out. Waiting for processes to exit.
Nov 1 02:40:18.980867 systemd[1]: sshd@7-10.230.26.18:22-147.75.109.163:60460.service: Deactivated successfully.
Nov 1 02:40:18.982883 systemd[1]: session-10.scope: Deactivated successfully.
Nov 1 02:40:18.985160 systemd-logind[1490]: Removed session 10.
Nov 1 02:40:19.136893 systemd[1]: Started sshd@8-10.230.26.18:22-147.75.109.163:60468.service - OpenSSH per-connection server daemon (147.75.109.163:60468).
Nov 1 02:40:20.031777 sshd[1775]: Accepted publickey for core from 147.75.109.163 port 60468 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 02:40:20.034733 sshd[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 02:40:20.044657 systemd-logind[1490]: New session 11 of user core.
Nov 1 02:40:20.058785 systemd[1]: Started session-11.scope - Session 11 of User core.
Nov 1 02:40:20.511719 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 1 02:40:20.512179 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 02:40:21.025038 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 1 02:40:21.025356 (dockerd)[1795]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 1 02:40:21.490626 dockerd[1795]: time="2025-11-01T02:40:21.490519908Z" level=info msg="Starting up"
Nov 1 02:40:21.633114 dockerd[1795]: time="2025-11-01T02:40:21.632616717Z" level=info msg="Loading containers: start."
Nov 1 02:40:21.798593 kernel: Initializing XFRM netlink socket
Nov 1 02:40:21.920681 systemd-networkd[1431]: docker0: Link UP
Nov 1 02:40:21.939361 dockerd[1795]: time="2025-11-01T02:40:21.939317431Z" level=info msg="Loading containers: done."
Nov 1 02:40:21.961504 dockerd[1795]: time="2025-11-01T02:40:21.961235592Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 1 02:40:21.961504 dockerd[1795]: time="2025-11-01T02:40:21.961373417Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Nov 1 02:40:21.962286 dockerd[1795]: time="2025-11-01T02:40:21.961891399Z" level=info msg="Daemon has completed initialization"
Nov 1 02:40:21.999177 dockerd[1795]: time="2025-11-01T02:40:21.998840805Z" level=info msg="API listen on /run/docker.sock"
Nov 1 02:40:21.999599 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 1 02:40:23.012565 containerd[1506]: time="2025-11-01T02:40:23.011901518Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\""
Nov 1 02:40:23.999251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount605682083.mount: Deactivated successfully.
Nov 1 02:40:25.742183 containerd[1506]: time="2025-11-01T02:40:25.742036400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 02:40:25.744048 containerd[1506]: time="2025-11-01T02:40:25.743822049Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065400"
Nov 1 02:40:25.745471 containerd[1506]: time="2025-11-01T02:40:25.744932884Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 02:40:25.749477 containerd[1506]: time="2025-11-01T02:40:25.749260699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 02:40:25.751230 containerd[1506]: time="2025-11-01T02:40:25.750946172Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 2.738916303s"
Nov 1 02:40:25.751230 containerd[1506]: time="2025-11-01T02:40:25.751008581Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\""
Nov 1 02:40:25.752903 containerd[1506]: time="2025-11-01T02:40:25.752853862Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\""
Nov 1 02:40:28.088939 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 1 02:40:28.096818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 02:40:28.098850 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 1 02:40:28.293025 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 02:40:28.306222 (kubelet)[2007]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 02:40:28.515390 kubelet[2007]: E1101 02:40:28.514767 2007 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 02:40:28.517196 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 02:40:28.517876 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 02:40:29.582648 containerd[1506]: time="2025-11-01T02:40:29.581615915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 02:40:29.584199 containerd[1506]: time="2025-11-01T02:40:29.583855895Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159765"
Nov 1 02:40:29.585645 containerd[1506]: time="2025-11-01T02:40:29.584965658Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 02:40:29.589208 containerd[1506]: time="2025-11-01T02:40:29.589156048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 02:40:29.590968 containerd[1506]: time="2025-11-01T02:40:29.590923252Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 3.838018814s"
Nov 1 02:40:29.591097 containerd[1506]: time="2025-11-01T02:40:29.590972526Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\""
Nov 1 02:40:29.592209 containerd[1506]: time="2025-11-01T02:40:29.592171333Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\""
Nov 1 02:40:31.332323 containerd[1506]: time="2025-11-01T02:40:31.330679454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 02:40:31.332323 containerd[1506]: time="2025-11-01T02:40:31.332256946Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725101"
Nov 1 02:40:31.333480 containerd[1506]: time="2025-11-01T02:40:31.333400600Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 02:40:31.337575 containerd[1506]: time="2025-11-01T02:40:31.337527453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 02:40:31.339675 containerd[1506]: time="2025-11-01T02:40:31.339548186Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.74731148s"
Nov 1 02:40:31.339675 containerd[1506]: time="2025-11-01T02:40:31.339635901Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\""
Nov 1 02:40:31.340741 containerd[1506]: time="2025-11-01T02:40:31.340477848Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\""
Nov 1 02:40:33.893332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1713530415.mount: Deactivated successfully.
Nov 1 02:40:34.553275 containerd[1506]: time="2025-11-01T02:40:34.553186193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 02:40:34.556074 containerd[1506]: time="2025-11-01T02:40:34.556024414Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964707"
Nov 1 02:40:34.558477 containerd[1506]: time="2025-11-01T02:40:34.557228681Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 02:40:34.559890 containerd[1506]: time="2025-11-01T02:40:34.559839859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 02:40:34.561371 containerd[1506]: time="2025-11-01T02:40:34.561335567Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 3.220790193s"
Nov 1 02:40:34.561550 containerd[1506]: time="2025-11-01T02:40:34.561521238Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\""
Nov 1 02:40:34.562706 containerd[1506]: time="2025-11-01T02:40:34.562584402Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Nov 1 02:40:35.293292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount856820262.mount: Deactivated successfully.
Nov 1 02:40:36.803035 containerd[1506]: time="2025-11-01T02:40:36.802840374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 02:40:36.805477 containerd[1506]: time="2025-11-01T02:40:36.805138616Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388015"
Nov 1 02:40:36.806389 containerd[1506]: time="2025-11-01T02:40:36.806321884Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 02:40:36.812111 containerd[1506]: time="2025-11-01T02:40:36.811140826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 02:40:36.814213 containerd[1506]: time="2025-11-01T02:40:36.814167344Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.251513888s"
Nov 1 02:40:36.814331 containerd[1506]: time="2025-11-01T02:40:36.814257783Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Nov 1 02:40:36.816787 containerd[1506]: time="2025-11-01T02:40:36.816752465Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Nov 1 02:40:37.563287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2703102841.mount: Deactivated successfully.
Nov 1 02:40:37.577757 containerd[1506]: time="2025-11-01T02:40:37.577660049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 02:40:37.579036 containerd[1506]: time="2025-11-01T02:40:37.578980736Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321226"
Nov 1 02:40:37.580319 containerd[1506]: time="2025-11-01T02:40:37.579898663Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 02:40:37.584534 containerd[1506]: time="2025-11-01T02:40:37.583766070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 02:40:37.585052 containerd[1506]: time="2025-11-01T02:40:37.585008178Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 768.21181ms"
Nov 1 02:40:37.585160 containerd[1506]: time="2025-11-01T02:40:37.585054760Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Nov 1 02:40:37.586317 containerd[1506]: time="2025-11-01T02:40:37.586184033Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Nov 1 02:40:38.591910 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Nov 1 02:40:38.604567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 02:40:38.979863 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 02:40:38.994225 (kubelet)[2130]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 02:40:39.073867 kubelet[2130]: E1101 02:40:39.073738 2130 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 02:40:39.078425 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 02:40:39.078990 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 02:40:40.057872 update_engine[1494]: I20251101 02:40:40.057591 1494 update_attempter.cc:509] Updating boot flags...
Nov 1 02:40:40.120482 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2144)
Nov 1 02:40:40.203526 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2147)
Nov 1 02:40:40.280464 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2147)
Nov 1 02:40:43.881711 containerd[1506]: time="2025-11-01T02:40:43.881528589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 02:40:43.885455 containerd[1506]: time="2025-11-01T02:40:43.885318135Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514601"
Nov 1 02:40:43.889297 containerd[1506]: time="2025-11-01T02:40:43.889228181Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 02:40:43.894477 containerd[1506]: time="2025-11-01T02:40:43.894223542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 02:40:43.896630 containerd[1506]: time="2025-11-01T02:40:43.896004311Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 6.309749233s"
Nov 1 02:40:43.896630 containerd[1506]: time="2025-11-01T02:40:43.896100811Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\""
Nov 1 02:40:47.905034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 02:40:47.921854 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 02:40:47.965531 systemd[1]: Reloading requested from client PID 2185 ('systemctl') (unit session-11.scope)...
Nov 1 02:40:47.965577 systemd[1]: Reloading...
Nov 1 02:40:48.155482 zram_generator::config[2221]: No configuration found.
Nov 1 02:40:48.311625 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 02:40:48.422866 systemd[1]: Reloading finished in 456 ms.
Nov 1 02:40:48.508904 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 1 02:40:48.509090 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 1 02:40:48.509746 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 02:40:48.517019 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 02:40:48.673834 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 02:40:48.681169 (kubelet)[2293]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 1 02:40:48.865050 kubelet[2293]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 1 02:40:48.866114 kubelet[2293]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 1 02:40:48.943632 kubelet[2293]: I1101 02:40:48.942762 2293 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 1 02:40:49.325048 kubelet[2293]: I1101 02:40:49.324766 2293 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Nov 1 02:40:49.325607 kubelet[2293]: I1101 02:40:49.325418 2293 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 1 02:40:49.327497 kubelet[2293]: I1101 02:40:49.327431 2293 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Nov 1 02:40:49.327497 kubelet[2293]: I1101 02:40:49.327485 2293 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 1 02:40:49.327885 kubelet[2293]: I1101 02:40:49.327848 2293 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 1 02:40:49.343576 kubelet[2293]: I1101 02:40:49.343519 2293 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 1 02:40:49.345330 kubelet[2293]: E1101 02:40:49.344632 2293 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.26.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.26.18:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Nov 1 02:40:49.356658 kubelet[2293]: E1101 02:40:49.356595 2293 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 1 02:40:49.356847 kubelet[2293]: I1101 02:40:49.356703 2293 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Nov 1 02:40:49.384733 kubelet[2293]: I1101 02:40:49.384654 2293 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Nov 1 02:40:49.385897 kubelet[2293]: I1101 02:40:49.385835 2293 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 1 02:40:49.389270 kubelet[2293]: I1101 02:40:49.385886 2293 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-liqqm.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 1 02:40:49.389270 kubelet[2293]: I1101 02:40:49.388584 2293 topology_manager.go:138] "Creating topology manager with none policy"
Nov 1 02:40:49.389270 kubelet[2293]: I1101 02:40:49.388637 2293 container_manager_linux.go:306] "Creating device plugin manager"
Nov 1 02:40:49.389270 kubelet[2293]: I1101 02:40:49.388890 2293 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Nov 1 02:40:49.393097 kubelet[2293]: I1101 02:40:49.393072 2293 state_mem.go:36] "Initialized new in-memory state store"
Nov 1 02:40:49.394908 kubelet[2293]: I1101 02:40:49.394885 2293 kubelet.go:475] "Attempting to sync node with API server"
Nov 1 02:40:49.394996 kubelet[2293]: I1101 02:40:49.394914 2293 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 1 02:40:49.394996 kubelet[2293]: I1101 02:40:49.394961 2293 kubelet.go:387] "Adding apiserver pod source"
Nov 1 02:40:49.396049 kubelet[2293]: I1101 02:40:49.394997 2293 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 1 02:40:49.398292 kubelet[2293]: I1101 02:40:49.398131 2293 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 1 02:40:49.400292 kubelet[2293]: I1101 02:40:49.400265 2293 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 1 02:40:49.400549 kubelet[2293]: I1101 02:40:49.400412 2293 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Nov 1 02:40:49.405462 kubelet[2293]: W1101 02:40:49.404719 2293 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 1 02:40:49.406971 kubelet[2293]: E1101 02:40:49.406838 2293 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.26.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-liqqm.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.26.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 02:40:49.409277 kubelet[2293]: E1101 02:40:49.406993 2293 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.26.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.26.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 02:40:49.411320 kubelet[2293]: I1101 02:40:49.411299 2293 server.go:1262] "Started kubelet" Nov 1 02:40:49.414147 kubelet[2293]: I1101 02:40:49.414124 2293 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 02:40:49.421485 kubelet[2293]: E1101 02:40:49.418854 2293 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.26.18:6443/api/v1/namespaces/default/events\": dial tcp 10.230.26.18:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-liqqm.gb1.brightbox.com.1873c1b1fbe1e39b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-liqqm.gb1.brightbox.com,UID:srv-liqqm.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-liqqm.gb1.brightbox.com,},FirstTimestamp:2025-11-01 02:40:49.411253147 +0000 UTC m=+0.724030653,LastTimestamp:2025-11-01 02:40:49.411253147 +0000 UTC m=+0.724030653,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-liqqm.gb1.brightbox.com,}" Nov 1 02:40:49.421869 kubelet[2293]: I1101 02:40:49.421810 2293 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 02:40:49.426468 kubelet[2293]: I1101 02:40:49.425680 2293 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 1 02:40:49.426468 kubelet[2293]: E1101 02:40:49.425973 2293 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"srv-liqqm.gb1.brightbox.com\" not found" Nov 1 02:40:49.426468 kubelet[2293]: I1101 02:40:49.426342 2293 server.go:310] "Adding debug handlers to kubelet server" Nov 1 02:40:49.426468 kubelet[2293]: I1101 02:40:49.426350 2293 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 1 02:40:49.426710 kubelet[2293]: I1101 02:40:49.426646 2293 reconciler.go:29] "Reconciler: start to sync state" Nov 1 02:40:49.433563 kubelet[2293]: I1101 02:40:49.433520 2293 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 02:40:49.433765 kubelet[2293]: I1101 02:40:49.433739 2293 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 1 02:40:49.434141 kubelet[2293]: I1101 02:40:49.434118 2293 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 02:40:49.435366 kubelet[2293]: I1101 02:40:49.435341 2293 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 02:40:49.438872 kubelet[2293]: E1101 02:40:49.438832 2293 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.26.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.26.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.CSIDriver" Nov 1 02:40:49.439181 kubelet[2293]: E1101 02:40:49.439114 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.26.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-liqqm.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.26.18:6443: connect: connection refused" interval="200ms" Nov 1 02:40:49.439918 kubelet[2293]: I1101 02:40:49.439896 2293 factory.go:223] Registration of the systemd container factory successfully Nov 1 02:40:49.440213 kubelet[2293]: I1101 02:40:49.440187 2293 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 02:40:49.441856 kubelet[2293]: E1101 02:40:49.441833 2293 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 02:40:49.443837 kubelet[2293]: I1101 02:40:49.443812 2293 factory.go:223] Registration of the containerd container factory successfully Nov 1 02:40:49.456660 kubelet[2293]: I1101 02:40:49.456591 2293 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 1 02:40:49.458207 kubelet[2293]: I1101 02:40:49.458175 2293 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 1 02:40:49.458281 kubelet[2293]: I1101 02:40:49.458215 2293 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 1 02:40:49.458281 kubelet[2293]: I1101 02:40:49.458263 2293 kubelet.go:2427] "Starting kubelet main sync loop" Nov 1 02:40:49.458376 kubelet[2293]: E1101 02:40:49.458338 2293 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 02:40:49.467126 kubelet[2293]: E1101 02:40:49.467094 2293 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.26.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.26.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 02:40:49.487144 kubelet[2293]: I1101 02:40:49.487110 2293 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 02:40:49.487885 kubelet[2293]: I1101 02:40:49.487559 2293 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 02:40:49.487885 kubelet[2293]: I1101 02:40:49.487598 2293 state_mem.go:36] "Initialized new in-memory state store" Nov 1 02:40:49.489584 kubelet[2293]: I1101 02:40:49.489530 2293 policy_none.go:49] "None policy: Start" Nov 1 02:40:49.489777 kubelet[2293]: I1101 02:40:49.489731 2293 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 1 02:40:49.489983 kubelet[2293]: I1101 02:40:49.489895 2293 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 1 02:40:49.492117 kubelet[2293]: I1101 02:40:49.491152 2293 policy_none.go:47] "Start" Nov 1 02:40:49.500146 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 1 02:40:49.520997 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Nov 1 02:40:49.526613 kubelet[2293]: E1101 02:40:49.526575 2293 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"srv-liqqm.gb1.brightbox.com\" not found" Nov 1 02:40:49.526994 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 1 02:40:49.537674 kubelet[2293]: E1101 02:40:49.536639 2293 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 02:40:49.537674 kubelet[2293]: I1101 02:40:49.536927 2293 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 02:40:49.537674 kubelet[2293]: I1101 02:40:49.536951 2293 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 02:40:49.537674 kubelet[2293]: I1101 02:40:49.537356 2293 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 02:40:49.540014 kubelet[2293]: E1101 02:40:49.539992 2293 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 02:40:49.540711 kubelet[2293]: E1101 02:40:49.540636 2293 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-liqqm.gb1.brightbox.com\" not found" Nov 1 02:40:49.591355 systemd[1]: Created slice kubepods-burstable-podf59da9f7e8eb2222d069dc409a645043.slice - libcontainer container kubepods-burstable-podf59da9f7e8eb2222d069dc409a645043.slice. 
Nov 1 02:40:49.605012 kubelet[2293]: E1101 02:40:49.604909 2293 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-liqqm.gb1.brightbox.com\" not found" node="srv-liqqm.gb1.brightbox.com" Nov 1 02:40:49.609297 systemd[1]: Created slice kubepods-burstable-pod761b46ce54a5f0a05f9809f7f737f171.slice - libcontainer container kubepods-burstable-pod761b46ce54a5f0a05f9809f7f737f171.slice. Nov 1 02:40:49.612931 kubelet[2293]: E1101 02:40:49.612649 2293 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-liqqm.gb1.brightbox.com\" not found" node="srv-liqqm.gb1.brightbox.com" Nov 1 02:40:49.616341 systemd[1]: Created slice kubepods-burstable-podd9f8a74bcd121718911506fdcf1b4639.slice - libcontainer container kubepods-burstable-podd9f8a74bcd121718911506fdcf1b4639.slice. Nov 1 02:40:49.618832 kubelet[2293]: E1101 02:40:49.618806 2293 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-liqqm.gb1.brightbox.com\" not found" node="srv-liqqm.gb1.brightbox.com" Nov 1 02:40:49.640479 kubelet[2293]: I1101 02:40:49.640393 2293 kubelet_node_status.go:75] "Attempting to register node" node="srv-liqqm.gb1.brightbox.com" Nov 1 02:40:49.640833 kubelet[2293]: E1101 02:40:49.640793 2293 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.26.18:6443/api/v1/nodes\": dial tcp 10.230.26.18:6443: connect: connection refused" node="srv-liqqm.gb1.brightbox.com" Nov 1 02:40:49.640947 kubelet[2293]: E1101 02:40:49.640894 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.26.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-liqqm.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.26.18:6443: connect: connection refused" interval="400ms" Nov 1 02:40:49.727739 kubelet[2293]: I1101 02:40:49.727666 2293 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f59da9f7e8eb2222d069dc409a645043-kubeconfig\") pod \"kube-controller-manager-srv-liqqm.gb1.brightbox.com\" (UID: \"f59da9f7e8eb2222d069dc409a645043\") " pod="kube-system/kube-controller-manager-srv-liqqm.gb1.brightbox.com" Nov 1 02:40:49.728104 kubelet[2293]: I1101 02:40:49.727742 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/761b46ce54a5f0a05f9809f7f737f171-kubeconfig\") pod \"kube-scheduler-srv-liqqm.gb1.brightbox.com\" (UID: \"761b46ce54a5f0a05f9809f7f737f171\") " pod="kube-system/kube-scheduler-srv-liqqm.gb1.brightbox.com" Nov 1 02:40:49.728104 kubelet[2293]: I1101 02:40:49.727822 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d9f8a74bcd121718911506fdcf1b4639-ca-certs\") pod \"kube-apiserver-srv-liqqm.gb1.brightbox.com\" (UID: \"d9f8a74bcd121718911506fdcf1b4639\") " pod="kube-system/kube-apiserver-srv-liqqm.gb1.brightbox.com" Nov 1 02:40:49.728104 kubelet[2293]: I1101 02:40:49.727863 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d9f8a74bcd121718911506fdcf1b4639-k8s-certs\") pod \"kube-apiserver-srv-liqqm.gb1.brightbox.com\" (UID: \"d9f8a74bcd121718911506fdcf1b4639\") " pod="kube-system/kube-apiserver-srv-liqqm.gb1.brightbox.com" Nov 1 02:40:49.728104 kubelet[2293]: I1101 02:40:49.727903 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9f8a74bcd121718911506fdcf1b4639-usr-share-ca-certificates\") pod \"kube-apiserver-srv-liqqm.gb1.brightbox.com\" (UID: 
\"d9f8a74bcd121718911506fdcf1b4639\") " pod="kube-system/kube-apiserver-srv-liqqm.gb1.brightbox.com" Nov 1 02:40:49.728104 kubelet[2293]: I1101 02:40:49.727939 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f59da9f7e8eb2222d069dc409a645043-flexvolume-dir\") pod \"kube-controller-manager-srv-liqqm.gb1.brightbox.com\" (UID: \"f59da9f7e8eb2222d069dc409a645043\") " pod="kube-system/kube-controller-manager-srv-liqqm.gb1.brightbox.com" Nov 1 02:40:49.728360 kubelet[2293]: I1101 02:40:49.727965 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f59da9f7e8eb2222d069dc409a645043-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-liqqm.gb1.brightbox.com\" (UID: \"f59da9f7e8eb2222d069dc409a645043\") " pod="kube-system/kube-controller-manager-srv-liqqm.gb1.brightbox.com" Nov 1 02:40:49.728360 kubelet[2293]: I1101 02:40:49.728022 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f59da9f7e8eb2222d069dc409a645043-ca-certs\") pod \"kube-controller-manager-srv-liqqm.gb1.brightbox.com\" (UID: \"f59da9f7e8eb2222d069dc409a645043\") " pod="kube-system/kube-controller-manager-srv-liqqm.gb1.brightbox.com" Nov 1 02:40:49.728360 kubelet[2293]: I1101 02:40:49.728085 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f59da9f7e8eb2222d069dc409a645043-k8s-certs\") pod \"kube-controller-manager-srv-liqqm.gb1.brightbox.com\" (UID: \"f59da9f7e8eb2222d069dc409a645043\") " pod="kube-system/kube-controller-manager-srv-liqqm.gb1.brightbox.com" Nov 1 02:40:49.845299 kubelet[2293]: I1101 02:40:49.845118 2293 kubelet_node_status.go:75] "Attempting to register node" 
node="srv-liqqm.gb1.brightbox.com" Nov 1 02:40:49.845996 kubelet[2293]: E1101 02:40:49.845941 2293 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.26.18:6443/api/v1/nodes\": dial tcp 10.230.26.18:6443: connect: connection refused" node="srv-liqqm.gb1.brightbox.com" Nov 1 02:40:49.908910 containerd[1506]: time="2025-11-01T02:40:49.908806530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-liqqm.gb1.brightbox.com,Uid:f59da9f7e8eb2222d069dc409a645043,Namespace:kube-system,Attempt:0,}" Nov 1 02:40:49.923178 containerd[1506]: time="2025-11-01T02:40:49.923138190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-liqqm.gb1.brightbox.com,Uid:761b46ce54a5f0a05f9809f7f737f171,Namespace:kube-system,Attempt:0,}" Nov 1 02:40:49.923870 containerd[1506]: time="2025-11-01T02:40:49.923485734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-liqqm.gb1.brightbox.com,Uid:d9f8a74bcd121718911506fdcf1b4639,Namespace:kube-system,Attempt:0,}" Nov 1 02:40:50.042613 kubelet[2293]: E1101 02:40:50.042524 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.26.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-liqqm.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.26.18:6443: connect: connection refused" interval="800ms" Nov 1 02:40:50.259698 kubelet[2293]: I1101 02:40:50.259148 2293 kubelet_node_status.go:75] "Attempting to register node" node="srv-liqqm.gb1.brightbox.com" Nov 1 02:40:50.260229 kubelet[2293]: E1101 02:40:50.259843 2293 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.26.18:6443/api/v1/nodes\": dial tcp 10.230.26.18:6443: connect: connection refused" node="srv-liqqm.gb1.brightbox.com" Nov 1 02:40:50.418623 kubelet[2293]: E1101 02:40:50.418560 2293 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.230.26.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.26.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 02:40:50.670724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2078895825.mount: Deactivated successfully. Nov 1 02:40:50.697074 containerd[1506]: time="2025-11-01T02:40:50.696991013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 02:40:50.702951 containerd[1506]: time="2025-11-01T02:40:50.702806074Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 02:40:50.706040 containerd[1506]: time="2025-11-01T02:40:50.705912195Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 02:40:50.707145 containerd[1506]: time="2025-11-01T02:40:50.707105907Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 02:40:50.708393 containerd[1506]: time="2025-11-01T02:40:50.708274429Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Nov 1 02:40:50.709247 containerd[1506]: time="2025-11-01T02:40:50.709130665Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 02:40:50.709247 containerd[1506]: time="2025-11-01T02:40:50.709185321Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: 
active requests=0, bytes read=0" Nov 1 02:40:50.720090 containerd[1506]: time="2025-11-01T02:40:50.720014216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 02:40:50.721529 containerd[1506]: time="2025-11-01T02:40:50.721226457Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 812.200549ms" Nov 1 02:40:50.724733 containerd[1506]: time="2025-11-01T02:40:50.724664445Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 800.951609ms" Nov 1 02:40:50.732463 containerd[1506]: time="2025-11-01T02:40:50.731639530Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 807.987605ms" Nov 1 02:40:50.788568 kubelet[2293]: E1101 02:40:50.788277 2293 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.26.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.26.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" 
Nov 1 02:40:50.845200 kubelet[2293]: E1101 02:40:50.843663 2293 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.26.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.26.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 02:40:50.845200 kubelet[2293]: E1101 02:40:50.843903 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.26.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-liqqm.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.26.18:6443: connect: connection refused" interval="1.6s" Nov 1 02:40:50.846235 kubelet[2293]: E1101 02:40:50.846198 2293 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.26.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-liqqm.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.26.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 02:40:50.953285 containerd[1506]: time="2025-11-01T02:40:50.952717953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 02:40:50.953285 containerd[1506]: time="2025-11-01T02:40:50.952827342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 02:40:50.953285 containerd[1506]: time="2025-11-01T02:40:50.952845570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:40:50.953285 containerd[1506]: time="2025-11-01T02:40:50.953006996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:40:50.954791 containerd[1506]: time="2025-11-01T02:40:50.954624060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 02:40:50.954791 containerd[1506]: time="2025-11-01T02:40:50.954694002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 02:40:50.955052 containerd[1506]: time="2025-11-01T02:40:50.954979759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:40:50.955764 containerd[1506]: time="2025-11-01T02:40:50.955335107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 02:40:50.955764 containerd[1506]: time="2025-11-01T02:40:50.955416805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 02:40:50.955764 containerd[1506]: time="2025-11-01T02:40:50.955481405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:40:50.955764 containerd[1506]: time="2025-11-01T02:40:50.955629803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:40:50.956227 containerd[1506]: time="2025-11-01T02:40:50.956046756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:40:51.001735 systemd[1]: Started cri-containerd-1e811eef13b72f184687560390fe64672921942a1a918fd247f0959d88f3ee68.scope - libcontainer container 1e811eef13b72f184687560390fe64672921942a1a918fd247f0959d88f3ee68. 
Nov 1 02:40:51.017657 systemd[1]: Started cri-containerd-16263c4774d4001ecab003b1bc223a3a568779cd87f5922241ec39f30399e84f.scope - libcontainer container 16263c4774d4001ecab003b1bc223a3a568779cd87f5922241ec39f30399e84f. Nov 1 02:40:51.028643 systemd[1]: Started cri-containerd-0f5c076132a1e836510a220e089ba5eed7edfb9aee04ad6f8e33c6dea49c4b8c.scope - libcontainer container 0f5c076132a1e836510a220e089ba5eed7edfb9aee04ad6f8e33c6dea49c4b8c. Nov 1 02:40:51.065784 kubelet[2293]: I1101 02:40:51.064664 2293 kubelet_node_status.go:75] "Attempting to register node" node="srv-liqqm.gb1.brightbox.com" Nov 1 02:40:51.065784 kubelet[2293]: E1101 02:40:51.065154 2293 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.26.18:6443/api/v1/nodes\": dial tcp 10.230.26.18:6443: connect: connection refused" node="srv-liqqm.gb1.brightbox.com" Nov 1 02:40:51.134905 containerd[1506]: time="2025-11-01T02:40:51.133892064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-liqqm.gb1.brightbox.com,Uid:f59da9f7e8eb2222d069dc409a645043,Namespace:kube-system,Attempt:0,} returns sandbox id \"16263c4774d4001ecab003b1bc223a3a568779cd87f5922241ec39f30399e84f\"" Nov 1 02:40:51.138858 containerd[1506]: time="2025-11-01T02:40:51.138737920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-liqqm.gb1.brightbox.com,Uid:d9f8a74bcd121718911506fdcf1b4639,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f5c076132a1e836510a220e089ba5eed7edfb9aee04ad6f8e33c6dea49c4b8c\"" Nov 1 02:40:51.144526 containerd[1506]: time="2025-11-01T02:40:51.144476322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-liqqm.gb1.brightbox.com,Uid:761b46ce54a5f0a05f9809f7f737f171,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e811eef13b72f184687560390fe64672921942a1a918fd247f0959d88f3ee68\"" Nov 1 02:40:51.156450 containerd[1506]: time="2025-11-01T02:40:51.156227393Z" level=info 
msg="CreateContainer within sandbox \"1e811eef13b72f184687560390fe64672921942a1a918fd247f0959d88f3ee68\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 02:40:51.158501 containerd[1506]: time="2025-11-01T02:40:51.157606074Z" level=info msg="CreateContainer within sandbox \"0f5c076132a1e836510a220e089ba5eed7edfb9aee04ad6f8e33c6dea49c4b8c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 02:40:51.160646 containerd[1506]: time="2025-11-01T02:40:51.160606462Z" level=info msg="CreateContainer within sandbox \"16263c4774d4001ecab003b1bc223a3a568779cd87f5922241ec39f30399e84f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 02:40:51.186240 containerd[1506]: time="2025-11-01T02:40:51.186197353Z" level=info msg="CreateContainer within sandbox \"0f5c076132a1e836510a220e089ba5eed7edfb9aee04ad6f8e33c6dea49c4b8c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"aa3069347d865e84f5108b82805a5758cafd20cd58398d5a8d580ce7f1713618\"" Nov 1 02:40:51.188473 containerd[1506]: time="2025-11-01T02:40:51.186994853Z" level=info msg="StartContainer for \"aa3069347d865e84f5108b82805a5758cafd20cd58398d5a8d580ce7f1713618\"" Nov 1 02:40:51.189599 containerd[1506]: time="2025-11-01T02:40:51.189564572Z" level=info msg="CreateContainer within sandbox \"1e811eef13b72f184687560390fe64672921942a1a918fd247f0959d88f3ee68\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6ca0c8422c158eda2387b9a8f696c7e3d334afe08f918054881abed9d06fa9d7\"" Nov 1 02:40:51.190419 containerd[1506]: time="2025-11-01T02:40:51.190389607Z" level=info msg="StartContainer for \"6ca0c8422c158eda2387b9a8f696c7e3d334afe08f918054881abed9d06fa9d7\"" Nov 1 02:40:51.191229 containerd[1506]: time="2025-11-01T02:40:51.191192639Z" level=info msg="CreateContainer within sandbox \"16263c4774d4001ecab003b1bc223a3a568779cd87f5922241ec39f30399e84f\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bfe4b04dbff18e08c4d08c1d2d803b0df79455574f6bc4870cfbb7fab4385f4d\"" Nov 1 02:40:51.191728 containerd[1506]: time="2025-11-01T02:40:51.191685257Z" level=info msg="StartContainer for \"bfe4b04dbff18e08c4d08c1d2d803b0df79455574f6bc4870cfbb7fab4385f4d\"" Nov 1 02:40:51.241753 systemd[1]: Started cri-containerd-aa3069347d865e84f5108b82805a5758cafd20cd58398d5a8d580ce7f1713618.scope - libcontainer container aa3069347d865e84f5108b82805a5758cafd20cd58398d5a8d580ce7f1713618. Nov 1 02:40:51.258611 systemd[1]: Started cri-containerd-6ca0c8422c158eda2387b9a8f696c7e3d334afe08f918054881abed9d06fa9d7.scope - libcontainer container 6ca0c8422c158eda2387b9a8f696c7e3d334afe08f918054881abed9d06fa9d7. Nov 1 02:40:51.269032 systemd[1]: Started cri-containerd-bfe4b04dbff18e08c4d08c1d2d803b0df79455574f6bc4870cfbb7fab4385f4d.scope - libcontainer container bfe4b04dbff18e08c4d08c1d2d803b0df79455574f6bc4870cfbb7fab4385f4d. Nov 1 02:40:51.354015 containerd[1506]: time="2025-11-01T02:40:51.353957399Z" level=info msg="StartContainer for \"aa3069347d865e84f5108b82805a5758cafd20cd58398d5a8d580ce7f1713618\" returns successfully" Nov 1 02:40:51.384794 containerd[1506]: time="2025-11-01T02:40:51.384685095Z" level=info msg="StartContainer for \"bfe4b04dbff18e08c4d08c1d2d803b0df79455574f6bc4870cfbb7fab4385f4d\" returns successfully" Nov 1 02:40:51.402558 containerd[1506]: time="2025-11-01T02:40:51.402249031Z" level=info msg="StartContainer for \"6ca0c8422c158eda2387b9a8f696c7e3d334afe08f918054881abed9d06fa9d7\" returns successfully" Nov 1 02:40:51.490401 kubelet[2293]: E1101 02:40:51.489977 2293 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-liqqm.gb1.brightbox.com\" not found" node="srv-liqqm.gb1.brightbox.com" Nov 1 02:40:51.495913 kubelet[2293]: E1101 02:40:51.495073 2293 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info 
from the cluster" err="node \"srv-liqqm.gb1.brightbox.com\" not found" node="srv-liqqm.gb1.brightbox.com" Nov 1 02:40:51.498137 kubelet[2293]: E1101 02:40:51.498108 2293 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-liqqm.gb1.brightbox.com\" not found" node="srv-liqqm.gb1.brightbox.com" Nov 1 02:40:51.536346 kubelet[2293]: E1101 02:40:51.536294 2293 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.26.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.26.18:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 1 02:40:52.501489 kubelet[2293]: E1101 02:40:52.501063 2293 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-liqqm.gb1.brightbox.com\" not found" node="srv-liqqm.gb1.brightbox.com" Nov 1 02:40:52.503776 kubelet[2293]: E1101 02:40:52.503590 2293 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-liqqm.gb1.brightbox.com\" not found" node="srv-liqqm.gb1.brightbox.com" Nov 1 02:40:52.668401 kubelet[2293]: I1101 02:40:52.668332 2293 kubelet_node_status.go:75] "Attempting to register node" node="srv-liqqm.gb1.brightbox.com" Nov 1 02:40:53.502828 kubelet[2293]: E1101 02:40:53.502787 2293 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-liqqm.gb1.brightbox.com\" not found" node="srv-liqqm.gb1.brightbox.com" Nov 1 02:40:54.687960 kubelet[2293]: I1101 02:40:54.687825 2293 kubelet_node_status.go:78] "Successfully registered node" node="srv-liqqm.gb1.brightbox.com" Nov 1 02:40:54.687960 kubelet[2293]: E1101 02:40:54.687906 2293 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting 
node \"srv-liqqm.gb1.brightbox.com\": node \"srv-liqqm.gb1.brightbox.com\" not found" Nov 1 02:40:54.728468 kubelet[2293]: I1101 02:40:54.726538 2293 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-liqqm.gb1.brightbox.com" Nov 1 02:40:54.744628 kubelet[2293]: E1101 02:40:54.744380 2293 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-liqqm.gb1.brightbox.com.1873c1b1fbe1e39b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-liqqm.gb1.brightbox.com,UID:srv-liqqm.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-liqqm.gb1.brightbox.com,},FirstTimestamp:2025-11-01 02:40:49.411253147 +0000 UTC m=+0.724030653,LastTimestamp:2025-11-01 02:40:49.411253147 +0000 UTC m=+0.724030653,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-liqqm.gb1.brightbox.com,}" Nov 1 02:40:54.760464 kubelet[2293]: E1101 02:40:54.760323 2293 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-liqqm.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-liqqm.gb1.brightbox.com" Nov 1 02:40:54.760464 kubelet[2293]: I1101 02:40:54.760378 2293 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-liqqm.gb1.brightbox.com" Nov 1 02:40:54.768005 kubelet[2293]: E1101 02:40:54.767730 2293 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-liqqm.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-liqqm.gb1.brightbox.com" Nov 1 02:40:54.768005 kubelet[2293]: I1101 
02:40:54.767776 2293 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-liqqm.gb1.brightbox.com" Nov 1 02:40:54.772552 kubelet[2293]: E1101 02:40:54.772488 2293 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-liqqm.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-liqqm.gb1.brightbox.com" Nov 1 02:40:54.785248 kubelet[2293]: E1101 02:40:54.784785 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Nov 1 02:40:55.028580 kubelet[2293]: I1101 02:40:55.027110 2293 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-liqqm.gb1.brightbox.com" Nov 1 02:40:55.035513 kubelet[2293]: E1101 02:40:55.035387 2293 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-liqqm.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-liqqm.gb1.brightbox.com" Nov 1 02:40:55.399397 kubelet[2293]: I1101 02:40:55.399027 2293 apiserver.go:52] "Watching apiserver" Nov 1 02:40:55.427270 kubelet[2293]: I1101 02:40:55.427207 2293 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 1 02:40:56.735992 systemd[1]: Reloading requested from client PID 2582 ('systemctl') (unit session-11.scope)... Nov 1 02:40:56.736753 systemd[1]: Reloading... Nov 1 02:40:56.871552 zram_generator::config[2621]: No configuration found. Nov 1 02:40:57.050925 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 02:40:57.183130 systemd[1]: Reloading finished in 445 ms. 
Nov 1 02:40:57.248792 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 02:40:57.267943 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 02:40:57.268390 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 02:40:57.268538 systemd[1]: kubelet.service: Consumed 1.089s CPU time, 122.2M memory peak, 0B memory swap peak. Nov 1 02:40:57.273807 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 02:40:57.510361 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 02:40:57.521923 (kubelet)[2684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 02:40:57.642461 kubelet[2684]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 02:40:57.643684 kubelet[2684]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 02:40:57.643684 kubelet[2684]: I1101 02:40:57.643547 2684 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 02:40:57.665876 kubelet[2684]: I1101 02:40:57.665832 2684 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 1 02:40:57.665876 kubelet[2684]: I1101 02:40:57.665870 2684 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 02:40:57.668767 kubelet[2684]: I1101 02:40:57.668644 2684 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 1 02:40:57.668767 kubelet[2684]: I1101 02:40:57.668675 2684 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 1 02:40:57.669244 kubelet[2684]: I1101 02:40:57.669012 2684 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 02:40:57.670992 kubelet[2684]: I1101 02:40:57.670957 2684 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 1 02:40:57.688062 kubelet[2684]: I1101 02:40:57.687024 2684 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 02:40:57.704664 kubelet[2684]: E1101 02:40:57.704570 2684 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 02:40:57.704912 kubelet[2684]: I1101 02:40:57.704748 2684 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 1 02:40:57.720990 kubelet[2684]: I1101 02:40:57.720953 2684 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 1 02:40:57.724791 kubelet[2684]: I1101 02:40:57.723458 2684 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 02:40:57.724791 kubelet[2684]: I1101 02:40:57.723514 2684 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-liqqm.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 02:40:57.724791 kubelet[2684]: I1101 02:40:57.723838 2684 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 
02:40:57.724791 kubelet[2684]: I1101 02:40:57.723855 2684 container_manager_linux.go:306] "Creating device plugin manager" Nov 1 02:40:57.725139 kubelet[2684]: I1101 02:40:57.723907 2684 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 1 02:40:57.728731 kubelet[2684]: I1101 02:40:57.728234 2684 state_mem.go:36] "Initialized new in-memory state store" Nov 1 02:40:57.729598 kubelet[2684]: I1101 02:40:57.729568 2684 kubelet.go:475] "Attempting to sync node with API server" Nov 1 02:40:57.731303 kubelet[2684]: I1101 02:40:57.731280 2684 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 02:40:57.731709 kubelet[2684]: I1101 02:40:57.731548 2684 kubelet.go:387] "Adding apiserver pod source" Nov 1 02:40:57.731709 kubelet[2684]: I1101 02:40:57.731611 2684 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 02:40:57.767569 kubelet[2684]: I1101 02:40:57.766903 2684 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 02:40:57.769141 kubelet[2684]: I1101 02:40:57.768356 2684 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 02:40:57.769287 kubelet[2684]: I1101 02:40:57.769266 2684 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 1 02:40:57.787646 kubelet[2684]: I1101 02:40:57.785142 2684 server.go:1262] "Started kubelet" Nov 1 02:40:57.787646 kubelet[2684]: I1101 02:40:57.785508 2684 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 02:40:57.787646 kubelet[2684]: I1101 02:40:57.785653 2684 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 02:40:57.787646 kubelet[2684]: I1101 02:40:57.785712 2684 server_v1.go:49] 
"podresources" method="list" useActivePods=true Nov 1 02:40:57.788023 kubelet[2684]: I1101 02:40:57.787996 2684 server.go:310] "Adding debug handlers to kubelet server" Nov 1 02:40:57.788240 kubelet[2684]: I1101 02:40:57.788214 2684 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 02:40:57.794191 kubelet[2684]: I1101 02:40:57.793009 2684 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 02:40:57.814203 kubelet[2684]: I1101 02:40:57.813322 2684 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 02:40:57.820086 kubelet[2684]: I1101 02:40:57.820010 2684 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 1 02:40:57.821762 kubelet[2684]: I1101 02:40:57.820966 2684 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 1 02:40:57.830031 kubelet[2684]: I1101 02:40:57.830003 2684 reconciler.go:29] "Reconciler: start to sync state" Nov 1 02:40:57.839508 kubelet[2684]: E1101 02:40:57.839466 2684 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 02:40:57.840186 kubelet[2684]: I1101 02:40:57.840160 2684 factory.go:223] Registration of the containerd container factory successfully Nov 1 02:40:57.840503 kubelet[2684]: I1101 02:40:57.840482 2684 factory.go:223] Registration of the systemd container factory successfully Nov 1 02:40:57.841483 kubelet[2684]: I1101 02:40:57.840727 2684 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 02:40:57.852411 kubelet[2684]: I1101 02:40:57.852367 2684 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Nov 1 02:40:57.854045 kubelet[2684]: I1101 02:40:57.854021 2684 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 1 02:40:57.854208 kubelet[2684]: I1101 02:40:57.854189 2684 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 1 02:40:57.854563 kubelet[2684]: I1101 02:40:57.854531 2684 kubelet.go:2427] "Starting kubelet main sync loop" Nov 1 02:40:57.854758 kubelet[2684]: E1101 02:40:57.854721 2684 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 02:40:57.955098 kubelet[2684]: E1101 02:40:57.955051 2684 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 1 02:40:57.962866 kubelet[2684]: I1101 02:40:57.962477 2684 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 02:40:57.962866 kubelet[2684]: I1101 02:40:57.962500 2684 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 02:40:57.962866 kubelet[2684]: I1101 02:40:57.962566 2684 state_mem.go:36] "Initialized new in-memory state store" Nov 1 02:40:57.963472 kubelet[2684]: I1101 02:40:57.963274 2684 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 02:40:57.963472 kubelet[2684]: I1101 02:40:57.963303 2684 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 02:40:57.963472 kubelet[2684]: I1101 02:40:57.963337 2684 policy_none.go:49] "None policy: Start" Nov 1 02:40:57.963472 kubelet[2684]: I1101 02:40:57.963366 2684 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 1 02:40:57.963472 kubelet[2684]: I1101 02:40:57.963391 2684 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 1 02:40:57.964702 kubelet[2684]: I1101 02:40:57.963721 2684 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 1 02:40:57.964702 kubelet[2684]: I1101 
02:40:57.963750 2684 policy_none.go:47] "Start" Nov 1 02:40:57.972208 kubelet[2684]: E1101 02:40:57.972182 2684 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 02:40:57.972962 kubelet[2684]: I1101 02:40:57.972939 2684 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 02:40:57.973494 kubelet[2684]: I1101 02:40:57.973430 2684 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 02:40:57.974193 kubelet[2684]: I1101 02:40:57.974037 2684 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 02:40:57.982382 kubelet[2684]: E1101 02:40:57.981823 2684 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 02:40:58.094713 kubelet[2684]: I1101 02:40:58.094675 2684 kubelet_node_status.go:75] "Attempting to register node" node="srv-liqqm.gb1.brightbox.com" Nov 1 02:40:58.108454 kubelet[2684]: I1101 02:40:58.107928 2684 kubelet_node_status.go:124] "Node was previously registered" node="srv-liqqm.gb1.brightbox.com" Nov 1 02:40:58.108454 kubelet[2684]: I1101 02:40:58.108195 2684 kubelet_node_status.go:78] "Successfully registered node" node="srv-liqqm.gb1.brightbox.com" Nov 1 02:40:58.158214 kubelet[2684]: I1101 02:40:58.157307 2684 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-liqqm.gb1.brightbox.com" Nov 1 02:40:58.158214 kubelet[2684]: I1101 02:40:58.157889 2684 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-liqqm.gb1.brightbox.com" Nov 1 02:40:58.167982 kubelet[2684]: I1101 02:40:58.167935 2684 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-liqqm.gb1.brightbox.com" Nov 1 02:40:58.183303 kubelet[2684]: I1101 02:40:58.183106 2684 warnings.go:110] "Warning: 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 02:40:58.183303 kubelet[2684]: I1101 02:40:58.183188 2684 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 02:40:58.192352 kubelet[2684]: I1101 02:40:58.192196 2684 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 02:40:58.233919 kubelet[2684]: I1101 02:40:58.233856 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d9f8a74bcd121718911506fdcf1b4639-k8s-certs\") pod \"kube-apiserver-srv-liqqm.gb1.brightbox.com\" (UID: \"d9f8a74bcd121718911506fdcf1b4639\") " pod="kube-system/kube-apiserver-srv-liqqm.gb1.brightbox.com" Nov 1 02:40:58.234425 kubelet[2684]: I1101 02:40:58.234203 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9f8a74bcd121718911506fdcf1b4639-usr-share-ca-certificates\") pod \"kube-apiserver-srv-liqqm.gb1.brightbox.com\" (UID: \"d9f8a74bcd121718911506fdcf1b4639\") " pod="kube-system/kube-apiserver-srv-liqqm.gb1.brightbox.com" Nov 1 02:40:58.234425 kubelet[2684]: I1101 02:40:58.234364 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f59da9f7e8eb2222d069dc409a645043-ca-certs\") pod \"kube-controller-manager-srv-liqqm.gb1.brightbox.com\" (UID: \"f59da9f7e8eb2222d069dc409a645043\") " pod="kube-system/kube-controller-manager-srv-liqqm.gb1.brightbox.com" Nov 1 02:40:58.234425 kubelet[2684]: I1101 02:40:58.234398 2684 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f59da9f7e8eb2222d069dc409a645043-kubeconfig\") pod \"kube-controller-manager-srv-liqqm.gb1.brightbox.com\" (UID: \"f59da9f7e8eb2222d069dc409a645043\") " pod="kube-system/kube-controller-manager-srv-liqqm.gb1.brightbox.com" Nov 1 02:40:58.235011 kubelet[2684]: I1101 02:40:58.234733 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f59da9f7e8eb2222d069dc409a645043-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-liqqm.gb1.brightbox.com\" (UID: \"f59da9f7e8eb2222d069dc409a645043\") " pod="kube-system/kube-controller-manager-srv-liqqm.gb1.brightbox.com" Nov 1 02:40:58.235011 kubelet[2684]: I1101 02:40:58.234793 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/761b46ce54a5f0a05f9809f7f737f171-kubeconfig\") pod \"kube-scheduler-srv-liqqm.gb1.brightbox.com\" (UID: \"761b46ce54a5f0a05f9809f7f737f171\") " pod="kube-system/kube-scheduler-srv-liqqm.gb1.brightbox.com" Nov 1 02:40:58.235011 kubelet[2684]: I1101 02:40:58.234820 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d9f8a74bcd121718911506fdcf1b4639-ca-certs\") pod \"kube-apiserver-srv-liqqm.gb1.brightbox.com\" (UID: \"d9f8a74bcd121718911506fdcf1b4639\") " pod="kube-system/kube-apiserver-srv-liqqm.gb1.brightbox.com" Nov 1 02:40:58.235011 kubelet[2684]: I1101 02:40:58.234867 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f59da9f7e8eb2222d069dc409a645043-flexvolume-dir\") pod \"kube-controller-manager-srv-liqqm.gb1.brightbox.com\" 
(UID: \"f59da9f7e8eb2222d069dc409a645043\") " pod="kube-system/kube-controller-manager-srv-liqqm.gb1.brightbox.com" Nov 1 02:40:58.235011 kubelet[2684]: I1101 02:40:58.234962 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f59da9f7e8eb2222d069dc409a645043-k8s-certs\") pod \"kube-controller-manager-srv-liqqm.gb1.brightbox.com\" (UID: \"f59da9f7e8eb2222d069dc409a645043\") " pod="kube-system/kube-controller-manager-srv-liqqm.gb1.brightbox.com" Nov 1 02:40:58.748654 kubelet[2684]: I1101 02:40:58.748499 2684 apiserver.go:52] "Watching apiserver" Nov 1 02:40:58.823367 kubelet[2684]: I1101 02:40:58.823288 2684 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 1 02:40:58.903488 kubelet[2684]: I1101 02:40:58.901609 2684 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-liqqm.gb1.brightbox.com" Nov 1 02:40:58.913980 kubelet[2684]: I1101 02:40:58.913646 2684 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 02:40:58.913980 kubelet[2684]: E1101 02:40:58.913752 2684 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-liqqm.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-liqqm.gb1.brightbox.com" Nov 1 02:40:58.961952 kubelet[2684]: I1101 02:40:58.961836 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-liqqm.gb1.brightbox.com" podStartSLOduration=0.961807355 podStartE2EDuration="961.807355ms" podCreationTimestamp="2025-11-01 02:40:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 02:40:58.960492525 +0000 UTC m=+1.408262762" 
watchObservedRunningTime="2025-11-01 02:40:58.961807355 +0000 UTC m=+1.409577576" Nov 1 02:40:58.977497 kubelet[2684]: I1101 02:40:58.976901 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-liqqm.gb1.brightbox.com" podStartSLOduration=0.976881196 podStartE2EDuration="976.881196ms" podCreationTimestamp="2025-11-01 02:40:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 02:40:58.9747166 +0000 UTC m=+1.422486829" watchObservedRunningTime="2025-11-01 02:40:58.976881196 +0000 UTC m=+1.424651417" Nov 1 02:40:58.995681 kubelet[2684]: I1101 02:40:58.995603 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-liqqm.gb1.brightbox.com" podStartSLOduration=0.995551191 podStartE2EDuration="995.551191ms" podCreationTimestamp="2025-11-01 02:40:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 02:40:58.993250663 +0000 UTC m=+1.441020888" watchObservedRunningTime="2025-11-01 02:40:58.995551191 +0000 UTC m=+1.443321421" Nov 1 02:41:02.750532 kubelet[2684]: I1101 02:41:02.750281 2684 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 02:41:02.754099 containerd[1506]: time="2025-11-01T02:41:02.753991180Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 02:41:02.755892 kubelet[2684]: I1101 02:41:02.755070 2684 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 02:41:03.437122 systemd[1]: Created slice kubepods-besteffort-pod9fdf85c8_697e_4626_94ee_99b039f4889b.slice - libcontainer container kubepods-besteffort-pod9fdf85c8_697e_4626_94ee_99b039f4889b.slice. 
Nov 1 02:41:03.475006 kubelet[2684]: I1101 02:41:03.474742 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbvdm\" (UniqueName: \"kubernetes.io/projected/9fdf85c8-697e-4626-94ee-99b039f4889b-kube-api-access-fbvdm\") pod \"kube-proxy-46vcz\" (UID: \"9fdf85c8-697e-4626-94ee-99b039f4889b\") " pod="kube-system/kube-proxy-46vcz" Nov 1 02:41:03.475006 kubelet[2684]: I1101 02:41:03.474806 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9fdf85c8-697e-4626-94ee-99b039f4889b-kube-proxy\") pod \"kube-proxy-46vcz\" (UID: \"9fdf85c8-697e-4626-94ee-99b039f4889b\") " pod="kube-system/kube-proxy-46vcz" Nov 1 02:41:03.475006 kubelet[2684]: I1101 02:41:03.474838 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fdf85c8-697e-4626-94ee-99b039f4889b-xtables-lock\") pod \"kube-proxy-46vcz\" (UID: \"9fdf85c8-697e-4626-94ee-99b039f4889b\") " pod="kube-system/kube-proxy-46vcz" Nov 1 02:41:03.475006 kubelet[2684]: I1101 02:41:03.474861 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fdf85c8-697e-4626-94ee-99b039f4889b-lib-modules\") pod \"kube-proxy-46vcz\" (UID: \"9fdf85c8-697e-4626-94ee-99b039f4889b\") " pod="kube-system/kube-proxy-46vcz" Nov 1 02:41:03.758860 containerd[1506]: time="2025-11-01T02:41:03.758686751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-46vcz,Uid:9fdf85c8-697e-4626-94ee-99b039f4889b,Namespace:kube-system,Attempt:0,}" Nov 1 02:41:03.815481 containerd[1506]: time="2025-11-01T02:41:03.813800836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 02:41:03.815481 containerd[1506]: time="2025-11-01T02:41:03.813924063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 02:41:03.815481 containerd[1506]: time="2025-11-01T02:41:03.813942628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:41:03.815481 containerd[1506]: time="2025-11-01T02:41:03.814308342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:41:03.875716 systemd[1]: Started cri-containerd-5036860c7b1fbb955118dab5c94590646913af40d8eb14ba25495ca16807765b.scope - libcontainer container 5036860c7b1fbb955118dab5c94590646913af40d8eb14ba25495ca16807765b. Nov 1 02:41:03.961776 systemd[1]: Created slice kubepods-besteffort-pod749d5bd1_d2b0_41e5_bd94_a0845cd59a37.slice - libcontainer container kubepods-besteffort-pod749d5bd1_d2b0_41e5_bd94_a0845cd59a37.slice. 
Nov 1 02:41:03.976849 containerd[1506]: time="2025-11-01T02:41:03.976700895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-46vcz,Uid:9fdf85c8-697e-4626-94ee-99b039f4889b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5036860c7b1fbb955118dab5c94590646913af40d8eb14ba25495ca16807765b\"" Nov 1 02:41:03.981058 kubelet[2684]: I1101 02:41:03.980864 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpx6r\" (UniqueName: \"kubernetes.io/projected/749d5bd1-d2b0-41e5-bd94-a0845cd59a37-kube-api-access-fpx6r\") pod \"tigera-operator-65cdcdfd6d-w5j9x\" (UID: \"749d5bd1-d2b0-41e5-bd94-a0845cd59a37\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-w5j9x" Nov 1 02:41:03.981058 kubelet[2684]: I1101 02:41:03.980929 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/749d5bd1-d2b0-41e5-bd94-a0845cd59a37-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-w5j9x\" (UID: \"749d5bd1-d2b0-41e5-bd94-a0845cd59a37\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-w5j9x" Nov 1 02:41:03.991388 containerd[1506]: time="2025-11-01T02:41:03.991223201Z" level=info msg="CreateContainer within sandbox \"5036860c7b1fbb955118dab5c94590646913af40d8eb14ba25495ca16807765b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 02:41:04.038932 containerd[1506]: time="2025-11-01T02:41:04.037974190Z" level=info msg="CreateContainer within sandbox \"5036860c7b1fbb955118dab5c94590646913af40d8eb14ba25495ca16807765b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e3056a0c648caf6bb83ebf96458a2cce76e6e92261d3c650533b666ba89e6dcd\"" Nov 1 02:41:04.040296 containerd[1506]: time="2025-11-01T02:41:04.039968218Z" level=info msg="StartContainer for \"e3056a0c648caf6bb83ebf96458a2cce76e6e92261d3c650533b666ba89e6dcd\"" Nov 1 02:41:04.082246 systemd[1]: Started 
cri-containerd-e3056a0c648caf6bb83ebf96458a2cce76e6e92261d3c650533b666ba89e6dcd.scope - libcontainer container e3056a0c648caf6bb83ebf96458a2cce76e6e92261d3c650533b666ba89e6dcd. Nov 1 02:41:04.140538 containerd[1506]: time="2025-11-01T02:41:04.140460907Z" level=info msg="StartContainer for \"e3056a0c648caf6bb83ebf96458a2cce76e6e92261d3c650533b666ba89e6dcd\" returns successfully" Nov 1 02:41:04.273546 containerd[1506]: time="2025-11-01T02:41:04.272954905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-w5j9x,Uid:749d5bd1-d2b0-41e5-bd94-a0845cd59a37,Namespace:tigera-operator,Attempt:0,}" Nov 1 02:41:04.321004 containerd[1506]: time="2025-11-01T02:41:04.319654569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 02:41:04.321004 containerd[1506]: time="2025-11-01T02:41:04.319754751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 02:41:04.321004 containerd[1506]: time="2025-11-01T02:41:04.319787283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:41:04.321004 containerd[1506]: time="2025-11-01T02:41:04.319894407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:41:04.352629 systemd[1]: Started cri-containerd-6c23fa29f4d912a15b05c11f4f497a55ba8e9f6e8708fadeb951234a0a22c50b.scope - libcontainer container 6c23fa29f4d912a15b05c11f4f497a55ba8e9f6e8708fadeb951234a0a22c50b. 
Nov 1 02:41:04.448960 containerd[1506]: time="2025-11-01T02:41:04.448904999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-w5j9x,Uid:749d5bd1-d2b0-41e5-bd94-a0845cd59a37,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6c23fa29f4d912a15b05c11f4f497a55ba8e9f6e8708fadeb951234a0a22c50b\"" Nov 1 02:41:04.470178 containerd[1506]: time="2025-11-01T02:41:04.469689158Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 02:41:04.952271 kubelet[2684]: I1101 02:41:04.949991 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-46vcz" podStartSLOduration=1.949933198 podStartE2EDuration="1.949933198s" podCreationTimestamp="2025-11-01 02:41:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 02:41:04.93810902 +0000 UTC m=+7.385879250" watchObservedRunningTime="2025-11-01 02:41:04.949933198 +0000 UTC m=+7.397703424" Nov 1 02:41:05.962292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4134772611.mount: Deactivated successfully. 
Nov 1 02:41:07.320811 containerd[1506]: time="2025-11-01T02:41:07.320683680Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 02:41:07.322717 containerd[1506]: time="2025-11-01T02:41:07.322387821Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Nov 1 02:41:07.324487 containerd[1506]: time="2025-11-01T02:41:07.323520001Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 02:41:07.327483 containerd[1506]: time="2025-11-01T02:41:07.326623589Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 02:41:07.328236 containerd[1506]: time="2025-11-01T02:41:07.327882557Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.858114703s"
Nov 1 02:41:07.328236 containerd[1506]: time="2025-11-01T02:41:07.327935554Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Nov 1 02:41:07.340976 containerd[1506]: time="2025-11-01T02:41:07.340709291Z" level=info msg="CreateContainer within sandbox \"6c23fa29f4d912a15b05c11f4f497a55ba8e9f6e8708fadeb951234a0a22c50b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 1 02:41:07.357590 containerd[1506]: time="2025-11-01T02:41:07.357503448Z" level=info msg="CreateContainer within sandbox \"6c23fa29f4d912a15b05c11f4f497a55ba8e9f6e8708fadeb951234a0a22c50b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"03f7b189c057722ea0581b20816801c7d0991ca2aeae6302d5a5504e5eb37325\""
Nov 1 02:41:07.360950 containerd[1506]: time="2025-11-01T02:41:07.360540870Z" level=info msg="StartContainer for \"03f7b189c057722ea0581b20816801c7d0991ca2aeae6302d5a5504e5eb37325\""
Nov 1 02:41:07.412889 systemd[1]: Started cri-containerd-03f7b189c057722ea0581b20816801c7d0991ca2aeae6302d5a5504e5eb37325.scope - libcontainer container 03f7b189c057722ea0581b20816801c7d0991ca2aeae6302d5a5504e5eb37325.
Nov 1 02:41:07.463757 containerd[1506]: time="2025-11-01T02:41:07.463708020Z" level=info msg="StartContainer for \"03f7b189c057722ea0581b20816801c7d0991ca2aeae6302d5a5504e5eb37325\" returns successfully"
Nov 1 02:41:07.944783 kubelet[2684]: I1101 02:41:07.944358 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-w5j9x" podStartSLOduration=2.068606365 podStartE2EDuration="4.944332923s" podCreationTimestamp="2025-11-01 02:41:03 +0000 UTC" firstStartedPulling="2025-11-01 02:41:04.456265593 +0000 UTC m=+6.904035816" lastFinishedPulling="2025-11-01 02:41:07.331992134 +0000 UTC m=+9.779762374" observedRunningTime="2025-11-01 02:41:07.944164224 +0000 UTC m=+10.391934473" watchObservedRunningTime="2025-11-01 02:41:07.944332923 +0000 UTC m=+10.392103152"
Nov 1 02:41:11.510556 systemd[1]: cri-containerd-03f7b189c057722ea0581b20816801c7d0991ca2aeae6302d5a5504e5eb37325.scope: Deactivated successfully.
Nov 1 02:41:11.625239 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03f7b189c057722ea0581b20816801c7d0991ca2aeae6302d5a5504e5eb37325-rootfs.mount: Deactivated successfully.
Nov 1 02:41:11.714882 containerd[1506]: time="2025-11-01T02:41:11.632030062Z" level=info msg="shim disconnected" id=03f7b189c057722ea0581b20816801c7d0991ca2aeae6302d5a5504e5eb37325 namespace=k8s.io
Nov 1 02:41:11.714882 containerd[1506]: time="2025-11-01T02:41:11.714608159Z" level=warning msg="cleaning up after shim disconnected" id=03f7b189c057722ea0581b20816801c7d0991ca2aeae6302d5a5504e5eb37325 namespace=k8s.io
Nov 1 02:41:11.714882 containerd[1506]: time="2025-11-01T02:41:11.714652414Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 1 02:41:11.948363 kubelet[2684]: I1101 02:41:11.947491 2684 scope.go:117] "RemoveContainer" containerID="03f7b189c057722ea0581b20816801c7d0991ca2aeae6302d5a5504e5eb37325"
Nov 1 02:41:11.951883 containerd[1506]: time="2025-11-01T02:41:11.951830945Z" level=info msg="CreateContainer within sandbox \"6c23fa29f4d912a15b05c11f4f497a55ba8e9f6e8708fadeb951234a0a22c50b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Nov 1 02:41:11.973949 containerd[1506]: time="2025-11-01T02:41:11.973854924Z" level=info msg="CreateContainer within sandbox \"6c23fa29f4d912a15b05c11f4f497a55ba8e9f6e8708fadeb951234a0a22c50b\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"cb0e056c40bc8dac28b894665c101e99cf8f94aaa9c755b92a66281454ecfab4\""
Nov 1 02:41:11.974931 containerd[1506]: time="2025-11-01T02:41:11.974891329Z" level=info msg="StartContainer for \"cb0e056c40bc8dac28b894665c101e99cf8f94aaa9c755b92a66281454ecfab4\""
Nov 1 02:41:12.074843 systemd[1]: Started cri-containerd-cb0e056c40bc8dac28b894665c101e99cf8f94aaa9c755b92a66281454ecfab4.scope - libcontainer container cb0e056c40bc8dac28b894665c101e99cf8f94aaa9c755b92a66281454ecfab4.
Nov 1 02:41:12.374906 containerd[1506]: time="2025-11-01T02:41:12.374782203Z" level=info msg="StartContainer for \"cb0e056c40bc8dac28b894665c101e99cf8f94aaa9c755b92a66281454ecfab4\" returns successfully"
Nov 1 02:41:15.579343 sudo[1778]: pam_unix(sudo:session): session closed for user root
Nov 1 02:41:15.726860 sshd[1775]: pam_unix(sshd:session): session closed for user core
Nov 1 02:41:15.735193 systemd[1]: sshd@8-10.230.26.18:22-147.75.109.163:60468.service: Deactivated successfully.
Nov 1 02:41:15.738602 systemd[1]: session-11.scope: Deactivated successfully.
Nov 1 02:41:15.739055 systemd[1]: session-11.scope: Consumed 6.775s CPU time, 146.1M memory peak, 0B memory swap peak.
Nov 1 02:41:15.741248 systemd-logind[1490]: Session 11 logged out. Waiting for processes to exit.
Nov 1 02:41:15.743363 systemd-logind[1490]: Removed session 11.
Nov 1 02:41:24.371403 systemd[1]: Created slice kubepods-besteffort-podcd981b22_5f31_4d43_861a_0ac3fc04640d.slice - libcontainer container kubepods-besteffort-podcd981b22_5f31_4d43_861a_0ac3fc04640d.slice.
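The entries above record a complete crash-restart cycle for the tigera-operator container: the Attempt:0 container's scope is deactivated, containerd cleans up the dead shim, kubelet issues RemoveContainer, and a replacement is created in the same sandbox with the attempt counter bumped to 1. A minimal, illustrative Python sketch of pulling that counter out of such CreateContainer messages (the two sample lines are copied from this log, trimmed, with the sandbox id shortened):

```python
import re

# Two CreateContainer messages from the log above, trimmed; sandbox id shortened.
lines = [
    'msg="CreateContainer within sandbox \\"6c23fa29...\\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"',
    'msg="CreateContainer within sandbox \\"6c23fa29...\\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"',
]

# Attempt:N inside ContainerMetadata is the per-container restart counter.
pattern = re.compile(r"ContainerMetadata\{Name:([^,]+),Attempt:(\d+),\}")
restarts = [(m.group(1), int(m.group(2))) for m in map(pattern.search, lines) if m]
print(restarts)  # [('tigera-operator', 0), ('tigera-operator', 1)]
```

A climbing Attempt value for the same container name is the quickest way to spot crash loops in a raw containerd journal like this one.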
Nov 1 02:41:24.450548 kubelet[2684]: I1101 02:41:24.450272 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd981b22-5f31-4d43-861a-0ac3fc04640d-tigera-ca-bundle\") pod \"calico-typha-6c7cc68d44-flk2q\" (UID: \"cd981b22-5f31-4d43-861a-0ac3fc04640d\") " pod="calico-system/calico-typha-6c7cc68d44-flk2q"
Nov 1 02:41:24.450548 kubelet[2684]: I1101 02:41:24.450360 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrd7x\" (UniqueName: \"kubernetes.io/projected/cd981b22-5f31-4d43-861a-0ac3fc04640d-kube-api-access-vrd7x\") pod \"calico-typha-6c7cc68d44-flk2q\" (UID: \"cd981b22-5f31-4d43-861a-0ac3fc04640d\") " pod="calico-system/calico-typha-6c7cc68d44-flk2q"
Nov 1 02:41:24.450548 kubelet[2684]: I1101 02:41:24.450402 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cd981b22-5f31-4d43-861a-0ac3fc04640d-typha-certs\") pod \"calico-typha-6c7cc68d44-flk2q\" (UID: \"cd981b22-5f31-4d43-861a-0ac3fc04640d\") " pod="calico-system/calico-typha-6c7cc68d44-flk2q"
Nov 1 02:41:24.640810 systemd[1]: Created slice kubepods-besteffort-pod4f35c5fc_59f6_44c3_8a15_53b76e5daaa0.slice - libcontainer container kubepods-besteffort-pod4f35c5fc_59f6_44c3_8a15_53b76e5daaa0.slice.
Nov 1 02:41:24.683739 containerd[1506]: time="2025-11-01T02:41:24.683568284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c7cc68d44-flk2q,Uid:cd981b22-5f31-4d43-861a-0ac3fc04640d,Namespace:calico-system,Attempt:0,}"
Nov 1 02:41:24.739780 containerd[1506]: time="2025-11-01T02:41:24.738114735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 02:41:24.740393 containerd[1506]: time="2025-11-01T02:41:24.738235745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 02:41:24.740393 containerd[1506]: time="2025-11-01T02:41:24.739517089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 02:41:24.740393 containerd[1506]: time="2025-11-01T02:41:24.739674736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 02:41:24.756941 kubelet[2684]: I1101 02:41:24.756869 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4f35c5fc-59f6-44c3-8a15-53b76e5daaa0-cni-log-dir\") pod \"calico-node-j6r7f\" (UID: \"4f35c5fc-59f6-44c3-8a15-53b76e5daaa0\") " pod="calico-system/calico-node-j6r7f"
Nov 1 02:41:24.756941 kubelet[2684]: I1101 02:41:24.756944 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4f35c5fc-59f6-44c3-8a15-53b76e5daaa0-cni-bin-dir\") pod \"calico-node-j6r7f\" (UID: \"4f35c5fc-59f6-44c3-8a15-53b76e5daaa0\") " pod="calico-system/calico-node-j6r7f"
Nov 1 02:41:24.757246 kubelet[2684]: I1101 02:41:24.756974 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4f35c5fc-59f6-44c3-8a15-53b76e5daaa0-flexvol-driver-host\") pod \"calico-node-j6r7f\" (UID: \"4f35c5fc-59f6-44c3-8a15-53b76e5daaa0\") " pod="calico-system/calico-node-j6r7f"
Nov 1 02:41:24.757246 kubelet[2684]: I1101 02:41:24.757017 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4f35c5fc-59f6-44c3-8a15-53b76e5daaa0-node-certs\") pod \"calico-node-j6r7f\" (UID: \"4f35c5fc-59f6-44c3-8a15-53b76e5daaa0\") " pod="calico-system/calico-node-j6r7f"
Nov 1 02:41:24.757246 kubelet[2684]: I1101 02:41:24.757043 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f35c5fc-59f6-44c3-8a15-53b76e5daaa0-lib-modules\") pod \"calico-node-j6r7f\" (UID: \"4f35c5fc-59f6-44c3-8a15-53b76e5daaa0\") " pod="calico-system/calico-node-j6r7f"
Nov 1 02:41:24.757246 kubelet[2684]: I1101 02:41:24.757067 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4f35c5fc-59f6-44c3-8a15-53b76e5daaa0-policysync\") pod \"calico-node-j6r7f\" (UID: \"4f35c5fc-59f6-44c3-8a15-53b76e5daaa0\") " pod="calico-system/calico-node-j6r7f"
Nov 1 02:41:24.757246 kubelet[2684]: I1101 02:41:24.757097 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f35c5fc-59f6-44c3-8a15-53b76e5daaa0-xtables-lock\") pod \"calico-node-j6r7f\" (UID: \"4f35c5fc-59f6-44c3-8a15-53b76e5daaa0\") " pod="calico-system/calico-node-j6r7f"
Nov 1 02:41:24.758808 kubelet[2684]: I1101 02:41:24.757133 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4f35c5fc-59f6-44c3-8a15-53b76e5daaa0-var-run-calico\") pod \"calico-node-j6r7f\" (UID: \"4f35c5fc-59f6-44c3-8a15-53b76e5daaa0\") " pod="calico-system/calico-node-j6r7f"
Nov 1 02:41:24.758808 kubelet[2684]: I1101 02:41:24.757163 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4f35c5fc-59f6-44c3-8a15-53b76e5daaa0-cni-net-dir\") pod \"calico-node-j6r7f\" (UID: \"4f35c5fc-59f6-44c3-8a15-53b76e5daaa0\") " pod="calico-system/calico-node-j6r7f"
Nov 1 02:41:24.758808 kubelet[2684]: I1101 02:41:24.757193 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4f35c5fc-59f6-44c3-8a15-53b76e5daaa0-var-lib-calico\") pod \"calico-node-j6r7f\" (UID: \"4f35c5fc-59f6-44c3-8a15-53b76e5daaa0\") " pod="calico-system/calico-node-j6r7f"
Nov 1 02:41:24.758808 kubelet[2684]: I1101 02:41:24.757221 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f35c5fc-59f6-44c3-8a15-53b76e5daaa0-tigera-ca-bundle\") pod \"calico-node-j6r7f\" (UID: \"4f35c5fc-59f6-44c3-8a15-53b76e5daaa0\") " pod="calico-system/calico-node-j6r7f"
Nov 1 02:41:24.758808 kubelet[2684]: I1101 02:41:24.757262 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vplx9\" (UniqueName: \"kubernetes.io/projected/4f35c5fc-59f6-44c3-8a15-53b76e5daaa0-kube-api-access-vplx9\") pod \"calico-node-j6r7f\" (UID: \"4f35c5fc-59f6-44c3-8a15-53b76e5daaa0\") " pod="calico-system/calico-node-j6r7f"
Nov 1 02:41:24.788734 systemd[1]: Started cri-containerd-5ffb629ac95d9f89ccb0755bd7564e2a4b3a22d9c525e4d3fdee7950b9c21c41.scope - libcontainer container 5ffb629ac95d9f89ccb0755bd7564e2a4b3a22d9c525e4d3fdee7950b9c21c41.
Nov 1 02:41:24.803867 kubelet[2684]: E1101 02:41:24.802816 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gvm6v" podUID="2db811c5-1134-445e-9e39-ac0e7ee1b427" Nov 1 02:41:24.858835 kubelet[2684]: I1101 02:41:24.858404 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2db811c5-1134-445e-9e39-ac0e7ee1b427-varrun\") pod \"csi-node-driver-gvm6v\" (UID: \"2db811c5-1134-445e-9e39-ac0e7ee1b427\") " pod="calico-system/csi-node-driver-gvm6v" Nov 1 02:41:24.862971 kubelet[2684]: I1101 02:41:24.862832 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97jcd\" (UniqueName: \"kubernetes.io/projected/2db811c5-1134-445e-9e39-ac0e7ee1b427-kube-api-access-97jcd\") pod \"csi-node-driver-gvm6v\" (UID: \"2db811c5-1134-445e-9e39-ac0e7ee1b427\") " pod="calico-system/csi-node-driver-gvm6v" Nov 1 02:41:24.862971 kubelet[2684]: I1101 02:41:24.862948 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2db811c5-1134-445e-9e39-ac0e7ee1b427-kubelet-dir\") pod \"csi-node-driver-gvm6v\" (UID: \"2db811c5-1134-445e-9e39-ac0e7ee1b427\") " pod="calico-system/csi-node-driver-gvm6v" Nov 1 02:41:24.863181 kubelet[2684]: I1101 02:41:24.862995 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2db811c5-1134-445e-9e39-ac0e7ee1b427-socket-dir\") pod \"csi-node-driver-gvm6v\" (UID: \"2db811c5-1134-445e-9e39-ac0e7ee1b427\") " pod="calico-system/csi-node-driver-gvm6v" Nov 1 02:41:24.867518 kubelet[2684]: E1101 02:41:24.866604 
2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.867518 kubelet[2684]: W1101 02:41:24.866650 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.867518 kubelet[2684]: E1101 02:41:24.866711 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:24.868619 kubelet[2684]: E1101 02:41:24.868495 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.869226 kubelet[2684]: W1101 02:41:24.869054 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.869226 kubelet[2684]: E1101 02:41:24.869084 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:24.869920 kubelet[2684]: E1101 02:41:24.869801 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.869920 kubelet[2684]: W1101 02:41:24.869820 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.869920 kubelet[2684]: E1101 02:41:24.869853 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:24.870711 kubelet[2684]: E1101 02:41:24.870691 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.871026 kubelet[2684]: W1101 02:41:24.870862 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.871026 kubelet[2684]: E1101 02:41:24.870890 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:24.871643 kubelet[2684]: E1101 02:41:24.871509 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.871643 kubelet[2684]: W1101 02:41:24.871570 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.871643 kubelet[2684]: E1101 02:41:24.871586 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:24.873128 kubelet[2684]: E1101 02:41:24.872876 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.873128 kubelet[2684]: W1101 02:41:24.872895 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.873128 kubelet[2684]: E1101 02:41:24.872911 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:24.873969 kubelet[2684]: E1101 02:41:24.873371 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.874117 kubelet[2684]: W1101 02:41:24.874093 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.874229 kubelet[2684]: E1101 02:41:24.874208 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:24.875215 kubelet[2684]: E1101 02:41:24.874798 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.875215 kubelet[2684]: W1101 02:41:24.874817 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.875215 kubelet[2684]: E1101 02:41:24.874846 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:24.875851 kubelet[2684]: E1101 02:41:24.875803 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.876603 kubelet[2684]: W1101 02:41:24.876264 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.876603 kubelet[2684]: E1101 02:41:24.876293 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:24.877809 kubelet[2684]: E1101 02:41:24.877629 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.877809 kubelet[2684]: W1101 02:41:24.877649 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.877809 kubelet[2684]: E1101 02:41:24.877762 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:24.879379 kubelet[2684]: E1101 02:41:24.879339 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.879624 kubelet[2684]: W1101 02:41:24.879526 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.879624 kubelet[2684]: E1101 02:41:24.879554 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:24.880411 kubelet[2684]: E1101 02:41:24.880240 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.880411 kubelet[2684]: W1101 02:41:24.880259 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.880411 kubelet[2684]: E1101 02:41:24.880277 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:24.883726 kubelet[2684]: E1101 02:41:24.883409 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.883726 kubelet[2684]: W1101 02:41:24.883466 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.883726 kubelet[2684]: E1101 02:41:24.883489 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:24.884587 kubelet[2684]: E1101 02:41:24.884560 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.885018 kubelet[2684]: W1101 02:41:24.884777 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.885018 kubelet[2684]: E1101 02:41:24.884804 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:24.886423 kubelet[2684]: E1101 02:41:24.885275 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.886977 kubelet[2684]: W1101 02:41:24.886951 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.887237 kubelet[2684]: E1101 02:41:24.887211 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:24.887780 kubelet[2684]: I1101 02:41:24.885958 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2db811c5-1134-445e-9e39-ac0e7ee1b427-registration-dir\") pod \"csi-node-driver-gvm6v\" (UID: \"2db811c5-1134-445e-9e39-ac0e7ee1b427\") " pod="calico-system/csi-node-driver-gvm6v" Nov 1 02:41:24.888252 kubelet[2684]: E1101 02:41:24.888120 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.888567 kubelet[2684]: W1101 02:41:24.888423 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.889039 kubelet[2684]: E1101 02:41:24.888978 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:24.892693 kubelet[2684]: E1101 02:41:24.890410 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.892693 kubelet[2684]: W1101 02:41:24.890430 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.892693 kubelet[2684]: E1101 02:41:24.892483 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:24.893287 kubelet[2684]: E1101 02:41:24.893068 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.893287 kubelet[2684]: W1101 02:41:24.893089 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.893287 kubelet[2684]: E1101 02:41:24.893106 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:24.895372 kubelet[2684]: E1101 02:41:24.893633 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.895372 kubelet[2684]: W1101 02:41:24.893652 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.895372 kubelet[2684]: E1101 02:41:24.893669 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:24.896302 kubelet[2684]: E1101 02:41:24.896088 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.896302 kubelet[2684]: W1101 02:41:24.896109 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.896302 kubelet[2684]: E1101 02:41:24.896126 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:24.896738 kubelet[2684]: E1101 02:41:24.896718 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.896961 kubelet[2684]: W1101 02:41:24.896840 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.896961 kubelet[2684]: E1101 02:41:24.896865 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:24.903072 kubelet[2684]: E1101 02:41:24.902780 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.903072 kubelet[2684]: W1101 02:41:24.902807 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.903072 kubelet[2684]: E1101 02:41:24.902845 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:24.956382 kubelet[2684]: E1101 02:41:24.956265 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.956382 kubelet[2684]: W1101 02:41:24.956297 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.956382 kubelet[2684]: E1101 02:41:24.956324 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:24.989586 kubelet[2684]: E1101 02:41:24.989471 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.990101 kubelet[2684]: W1101 02:41:24.989503 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.990101 kubelet[2684]: E1101 02:41:24.989907 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:24.990671 kubelet[2684]: E1101 02:41:24.990598 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.990990 kubelet[2684]: W1101 02:41:24.990739 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.990990 kubelet[2684]: E1101 02:41:24.990763 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:24.991967 kubelet[2684]: E1101 02:41:24.991682 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.992241 kubelet[2684]: W1101 02:41:24.992076 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.992241 kubelet[2684]: E1101 02:41:24.992105 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:24.993318 kubelet[2684]: E1101 02:41:24.992794 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.993704 kubelet[2684]: W1101 02:41:24.993429 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.993704 kubelet[2684]: E1101 02:41:24.993530 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:24.995394 kubelet[2684]: E1101 02:41:24.994908 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.995394 kubelet[2684]: W1101 02:41:24.994932 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.995394 kubelet[2684]: E1101 02:41:24.994949 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:24.995740 kubelet[2684]: E1101 02:41:24.995720 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.996007 kubelet[2684]: W1101 02:41:24.995985 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.996199 kubelet[2684]: E1101 02:41:24.996126 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:24.997007 kubelet[2684]: E1101 02:41:24.996837 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.997007 kubelet[2684]: W1101 02:41:24.996858 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.997007 kubelet[2684]: E1101 02:41:24.996875 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:24.997595 kubelet[2684]: E1101 02:41:24.997419 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.997595 kubelet[2684]: W1101 02:41:24.997462 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.997595 kubelet[2684]: E1101 02:41:24.997482 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:24.998838 kubelet[2684]: E1101 02:41:24.998631 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.998838 kubelet[2684]: W1101 02:41:24.998655 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.998838 kubelet[2684]: E1101 02:41:24.998673 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:24.999152 kubelet[2684]: E1101 02:41:24.999131 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.999292 kubelet[2684]: W1101 02:41:24.999256 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.999411 kubelet[2684]: E1101 02:41:24.999389 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:24.999958 kubelet[2684]: E1101 02:41:24.999788 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:24.999958 kubelet[2684]: W1101 02:41:24.999809 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:24.999958 kubelet[2684]: E1101 02:41:24.999837 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:25.000219 kubelet[2684]: E1101 02:41:25.000199 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:25.000379 kubelet[2684]: W1101 02:41:25.000316 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:25.001092 kubelet[2684]: E1101 02:41:25.000883 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:25.001898 kubelet[2684]: E1101 02:41:25.001613 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:25.001898 kubelet[2684]: W1101 02:41:25.001632 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:25.001898 kubelet[2684]: E1101 02:41:25.001648 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:25.003306 kubelet[2684]: E1101 02:41:25.003121 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:25.003306 kubelet[2684]: W1101 02:41:25.003142 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:25.003306 kubelet[2684]: E1101 02:41:25.003159 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:25.003868 kubelet[2684]: E1101 02:41:25.003848 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:25.004030 kubelet[2684]: W1101 02:41:25.003988 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:25.004203 kubelet[2684]: E1101 02:41:25.004159 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:25.005040 kubelet[2684]: E1101 02:41:25.005019 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:25.005344 kubelet[2684]: W1101 02:41:25.005202 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:25.005344 kubelet[2684]: E1101 02:41:25.005228 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:25.006145 containerd[1506]: time="2025-11-01T02:41:25.005799431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c7cc68d44-flk2q,Uid:cd981b22-5f31-4d43-861a-0ac3fc04640d,Namespace:calico-system,Attempt:0,} returns sandbox id \"5ffb629ac95d9f89ccb0755bd7564e2a4b3a22d9c525e4d3fdee7950b9c21c41\"" Nov 1 02:41:25.006608 kubelet[2684]: E1101 02:41:25.005967 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:25.006608 kubelet[2684]: W1101 02:41:25.005983 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:25.006608 kubelet[2684]: E1101 02:41:25.005998 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:25.007085 kubelet[2684]: E1101 02:41:25.006937 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:25.007085 kubelet[2684]: W1101 02:41:25.007018 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:25.007085 kubelet[2684]: E1101 02:41:25.007034 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:25.008056 kubelet[2684]: E1101 02:41:25.007898 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:25.008056 kubelet[2684]: W1101 02:41:25.007917 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:25.008056 kubelet[2684]: E1101 02:41:25.007933 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:25.008378 containerd[1506]: time="2025-11-01T02:41:25.008307043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 02:41:25.008956 kubelet[2684]: E1101 02:41:25.008877 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:25.008956 kubelet[2684]: W1101 02:41:25.008897 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:25.008956 kubelet[2684]: E1101 02:41:25.008913 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:25.009804 kubelet[2684]: E1101 02:41:25.009718 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:25.009804 kubelet[2684]: W1101 02:41:25.009738 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:25.009804 kubelet[2684]: E1101 02:41:25.009755 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:25.012330 kubelet[2684]: E1101 02:41:25.012304 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:25.012466 kubelet[2684]: W1101 02:41:25.012330 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:25.012466 kubelet[2684]: E1101 02:41:25.012349 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:25.013086 kubelet[2684]: E1101 02:41:25.013064 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:25.013086 kubelet[2684]: W1101 02:41:25.013086 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:25.013208 kubelet[2684]: E1101 02:41:25.013103 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:25.015094 kubelet[2684]: E1101 02:41:25.015057 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:25.015094 kubelet[2684]: W1101 02:41:25.015084 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:25.015224 kubelet[2684]: E1101 02:41:25.015103 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:25.015755 kubelet[2684]: E1101 02:41:25.015722 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:25.015755 kubelet[2684]: W1101 02:41:25.015744 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:25.015880 kubelet[2684]: E1101 02:41:25.015761 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:25.037503 kubelet[2684]: E1101 02:41:25.037349 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:25.037503 kubelet[2684]: W1101 02:41:25.037379 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:25.037503 kubelet[2684]: E1101 02:41:25.037404 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:25.259856 containerd[1506]: time="2025-11-01T02:41:25.257519754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j6r7f,Uid:4f35c5fc-59f6-44c3-8a15-53b76e5daaa0,Namespace:calico-system,Attempt:0,}" Nov 1 02:41:25.311299 containerd[1506]: time="2025-11-01T02:41:25.310886943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 02:41:25.311299 containerd[1506]: time="2025-11-01T02:41:25.310980705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 02:41:25.311299 containerd[1506]: time="2025-11-01T02:41:25.311006471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:41:25.311299 containerd[1506]: time="2025-11-01T02:41:25.311147479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:41:25.347755 systemd[1]: Started cri-containerd-8867cc59ddd937aa40522c9a93622340fff664b24dec1dd927ab0cdceb882f1c.scope - libcontainer container 8867cc59ddd937aa40522c9a93622340fff664b24dec1dd927ab0cdceb882f1c. Nov 1 02:41:25.385975 containerd[1506]: time="2025-11-01T02:41:25.385920994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j6r7f,Uid:4f35c5fc-59f6-44c3-8a15-53b76e5daaa0,Namespace:calico-system,Attempt:0,} returns sandbox id \"8867cc59ddd937aa40522c9a93622340fff664b24dec1dd927ab0cdceb882f1c\"" Nov 1 02:41:25.857396 kubelet[2684]: E1101 02:41:25.855878 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gvm6v" podUID="2db811c5-1134-445e-9e39-ac0e7ee1b427" Nov 1 02:41:26.565671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1348663472.mount: Deactivated successfully. 
Nov 1 02:41:27.857490 kubelet[2684]: E1101 02:41:27.856226 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gvm6v" podUID="2db811c5-1134-445e-9e39-ac0e7ee1b427" Nov 1 02:41:28.485820 containerd[1506]: time="2025-11-01T02:41:28.485612858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 02:41:28.488015 containerd[1506]: time="2025-11-01T02:41:28.487967755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 1 02:41:28.488671 containerd[1506]: time="2025-11-01T02:41:28.488638427Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 02:41:28.491519 containerd[1506]: time="2025-11-01T02:41:28.491479516Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 02:41:28.492807 containerd[1506]: time="2025-11-01T02:41:28.492768977Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.48440921s" Nov 1 02:41:28.492969 containerd[1506]: time="2025-11-01T02:41:28.492939827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 02:41:28.495190 containerd[1506]: time="2025-11-01T02:41:28.495157746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 02:41:28.534264 containerd[1506]: time="2025-11-01T02:41:28.533827676Z" level=info msg="CreateContainer within sandbox \"5ffb629ac95d9f89ccb0755bd7564e2a4b3a22d9c525e4d3fdee7950b9c21c41\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 02:41:28.553532 containerd[1506]: time="2025-11-01T02:41:28.552196993Z" level=info msg="CreateContainer within sandbox \"5ffb629ac95d9f89ccb0755bd7564e2a4b3a22d9c525e4d3fdee7950b9c21c41\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1d779a499c4a242b0c246c513246d68195570ba5ba628de249dc290188729015\"" Nov 1 02:41:28.553210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount307526919.mount: Deactivated successfully. Nov 1 02:41:28.555656 containerd[1506]: time="2025-11-01T02:41:28.553613211Z" level=info msg="StartContainer for \"1d779a499c4a242b0c246c513246d68195570ba5ba628de249dc290188729015\"" Nov 1 02:41:28.624158 systemd[1]: Started cri-containerd-1d779a499c4a242b0c246c513246d68195570ba5ba628de249dc290188729015.scope - libcontainer container 1d779a499c4a242b0c246c513246d68195570ba5ba628de249dc290188729015. 
Nov 1 02:41:28.726033 containerd[1506]: time="2025-11-01T02:41:28.725976708Z" level=info msg="StartContainer for \"1d779a499c4a242b0c246c513246d68195570ba5ba628de249dc290188729015\" returns successfully" Nov 1 02:41:29.055821 kubelet[2684]: E1101 02:41:29.055529 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.055821 kubelet[2684]: W1101 02:41:29.055596 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.055821 kubelet[2684]: E1101 02:41:29.055636 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:29.057232 kubelet[2684]: E1101 02:41:29.056024 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.057232 kubelet[2684]: W1101 02:41:29.056038 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.057232 kubelet[2684]: E1101 02:41:29.056054 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:29.058976 kubelet[2684]: E1101 02:41:29.058253 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.058976 kubelet[2684]: W1101 02:41:29.058273 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.058976 kubelet[2684]: E1101 02:41:29.058291 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:29.058976 kubelet[2684]: E1101 02:41:29.058765 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.058976 kubelet[2684]: W1101 02:41:29.058784 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.058976 kubelet[2684]: E1101 02:41:29.058800 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:29.060179 kubelet[2684]: E1101 02:41:29.059655 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.060179 kubelet[2684]: W1101 02:41:29.059670 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.060179 kubelet[2684]: E1101 02:41:29.059706 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:29.060614 kubelet[2684]: E1101 02:41:29.060382 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.060614 kubelet[2684]: W1101 02:41:29.060401 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.060614 kubelet[2684]: E1101 02:41:29.060418 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:29.062015 kubelet[2684]: E1101 02:41:29.061536 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.062015 kubelet[2684]: W1101 02:41:29.061556 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.062015 kubelet[2684]: E1101 02:41:29.061571 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:29.062676 kubelet[2684]: E1101 02:41:29.062320 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.062676 kubelet[2684]: W1101 02:41:29.062335 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.062676 kubelet[2684]: E1101 02:41:29.062350 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:29.063617 kubelet[2684]: E1101 02:41:29.063267 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.063617 kubelet[2684]: W1101 02:41:29.063286 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.063617 kubelet[2684]: E1101 02:41:29.063302 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:29.064172 kubelet[2684]: E1101 02:41:29.064116 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.065559 kubelet[2684]: W1101 02:41:29.064263 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.065559 kubelet[2684]: E1101 02:41:29.064283 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:29.065559 kubelet[2684]: E1101 02:41:29.064660 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.065559 kubelet[2684]: W1101 02:41:29.064674 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.065559 kubelet[2684]: E1101 02:41:29.064715 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:29.066375 kubelet[2684]: E1101 02:41:29.066201 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.066375 kubelet[2684]: W1101 02:41:29.066221 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.066375 kubelet[2684]: E1101 02:41:29.066238 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:29.067770 kubelet[2684]: E1101 02:41:29.067595 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.067770 kubelet[2684]: W1101 02:41:29.067614 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.067770 kubelet[2684]: E1101 02:41:29.067631 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:29.068045 kubelet[2684]: E1101 02:41:29.068025 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.068215 kubelet[2684]: W1101 02:41:29.068125 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.068215 kubelet[2684]: E1101 02:41:29.068152 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:29.068854 kubelet[2684]: E1101 02:41:29.068609 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.068854 kubelet[2684]: W1101 02:41:29.068627 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.068854 kubelet[2684]: E1101 02:41:29.068643 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:29.144272 kubelet[2684]: E1101 02:41:29.144222 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.144272 kubelet[2684]: W1101 02:41:29.144271 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.144912 kubelet[2684]: E1101 02:41:29.144300 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:29.144912 kubelet[2684]: E1101 02:41:29.144725 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.144912 kubelet[2684]: W1101 02:41:29.144760 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.144912 kubelet[2684]: E1101 02:41:29.144777 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:29.145399 kubelet[2684]: E1101 02:41:29.145171 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.145399 kubelet[2684]: W1101 02:41:29.145186 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.145399 kubelet[2684]: E1101 02:41:29.145202 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:29.145668 kubelet[2684]: E1101 02:41:29.145581 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.145668 kubelet[2684]: W1101 02:41:29.145601 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.146788 kubelet[2684]: E1101 02:41:29.145706 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:29.147224 kubelet[2684]: E1101 02:41:29.147055 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.147224 kubelet[2684]: W1101 02:41:29.147079 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.147224 kubelet[2684]: E1101 02:41:29.147098 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:29.147584 kubelet[2684]: E1101 02:41:29.147516 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.147584 kubelet[2684]: W1101 02:41:29.147534 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.147584 kubelet[2684]: E1101 02:41:29.147561 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:29.148792 kubelet[2684]: E1101 02:41:29.148539 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.148792 kubelet[2684]: W1101 02:41:29.148559 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.148792 kubelet[2684]: E1101 02:41:29.148575 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:29.149482 kubelet[2684]: E1101 02:41:29.149057 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.149482 kubelet[2684]: W1101 02:41:29.149075 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.149482 kubelet[2684]: E1101 02:41:29.149092 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:29.150234 kubelet[2684]: E1101 02:41:29.150108 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.150234 kubelet[2684]: W1101 02:41:29.150128 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.150234 kubelet[2684]: E1101 02:41:29.150145 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:29.151101 kubelet[2684]: E1101 02:41:29.150952 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.151101 kubelet[2684]: W1101 02:41:29.150971 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.151101 kubelet[2684]: E1101 02:41:29.150988 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:29.152167 kubelet[2684]: E1101 02:41:29.151852 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.152167 kubelet[2684]: W1101 02:41:29.151871 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.152167 kubelet[2684]: E1101 02:41:29.151887 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:29.152819 kubelet[2684]: E1101 02:41:29.152540 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.152819 kubelet[2684]: W1101 02:41:29.152561 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.152819 kubelet[2684]: E1101 02:41:29.152577 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:29.153677 kubelet[2684]: E1101 02:41:29.153563 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.153677 kubelet[2684]: W1101 02:41:29.153590 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.153677 kubelet[2684]: E1101 02:41:29.153606 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:29.154518 kubelet[2684]: E1101 02:41:29.154269 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.154518 kubelet[2684]: W1101 02:41:29.154288 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.154518 kubelet[2684]: E1101 02:41:29.154305 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:29.155790 kubelet[2684]: E1101 02:41:29.155554 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.155790 kubelet[2684]: W1101 02:41:29.155574 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.155790 kubelet[2684]: E1101 02:41:29.155591 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:29.156426 kubelet[2684]: E1101 02:41:29.155974 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.156426 kubelet[2684]: W1101 02:41:29.155991 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.156426 kubelet[2684]: E1101 02:41:29.156008 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:29.157467 kubelet[2684]: E1101 02:41:29.157229 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.157467 kubelet[2684]: W1101 02:41:29.157250 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.157467 kubelet[2684]: E1101 02:41:29.157267 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:29.157794 kubelet[2684]: E1101 02:41:29.157725 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:29.157794 kubelet[2684]: W1101 02:41:29.157743 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:29.157794 kubelet[2684]: E1101 02:41:29.157760 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:29.856598 kubelet[2684]: E1101 02:41:29.856070 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gvm6v" podUID="2db811c5-1134-445e-9e39-ac0e7ee1b427" Nov 1 02:41:30.057246 kubelet[2684]: I1101 02:41:30.057179 2684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 02:41:30.078825 kubelet[2684]: E1101 02:41:30.078778 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.078825 kubelet[2684]: W1101 02:41:30.078819 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.079765 kubelet[2684]: E1101 02:41:30.078850 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:30.079765 kubelet[2684]: E1101 02:41:30.079150 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.079765 kubelet[2684]: W1101 02:41:30.079164 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.079765 kubelet[2684]: E1101 02:41:30.079179 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:30.079765 kubelet[2684]: E1101 02:41:30.079659 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.079765 kubelet[2684]: W1101 02:41:30.079706 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.079765 kubelet[2684]: E1101 02:41:30.079722 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:30.080786 kubelet[2684]: E1101 02:41:30.080764 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.080862 kubelet[2684]: W1101 02:41:30.080786 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.080862 kubelet[2684]: E1101 02:41:30.080803 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:30.081126 kubelet[2684]: E1101 02:41:30.081105 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.081201 kubelet[2684]: W1101 02:41:30.081127 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.081201 kubelet[2684]: E1101 02:41:30.081143 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:30.081733 kubelet[2684]: E1101 02:41:30.081708 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.081733 kubelet[2684]: W1101 02:41:30.081729 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.081868 kubelet[2684]: E1101 02:41:30.081746 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:30.082140 kubelet[2684]: E1101 02:41:30.082114 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.082140 kubelet[2684]: W1101 02:41:30.082136 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.082262 kubelet[2684]: E1101 02:41:30.082152 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:30.082821 kubelet[2684]: E1101 02:41:30.082796 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.082821 kubelet[2684]: W1101 02:41:30.082818 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.082958 kubelet[2684]: E1101 02:41:30.082834 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:30.083273 kubelet[2684]: E1101 02:41:30.083248 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.083273 kubelet[2684]: W1101 02:41:30.083269 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.083395 kubelet[2684]: E1101 02:41:30.083286 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:30.083829 kubelet[2684]: E1101 02:41:30.083778 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.083829 kubelet[2684]: W1101 02:41:30.083816 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.083953 kubelet[2684]: E1101 02:41:30.083835 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:30.085542 kubelet[2684]: E1101 02:41:30.084473 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.085542 kubelet[2684]: W1101 02:41:30.084501 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.085542 kubelet[2684]: E1101 02:41:30.084533 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:30.086185 kubelet[2684]: E1101 02:41:30.086164 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.086430 kubelet[2684]: W1101 02:41:30.086308 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.086430 kubelet[2684]: E1101 02:41:30.086335 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:30.086989 kubelet[2684]: E1101 02:41:30.086969 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.087203 kubelet[2684]: W1101 02:41:30.087129 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.087938 kubelet[2684]: E1101 02:41:30.087365 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:30.088137 kubelet[2684]: E1101 02:41:30.088117 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.088324 kubelet[2684]: W1101 02:41:30.088228 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.088324 kubelet[2684]: E1101 02:41:30.088254 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:30.088825 kubelet[2684]: E1101 02:41:30.088804 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.089082 kubelet[2684]: W1101 02:41:30.088928 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.089082 kubelet[2684]: E1101 02:41:30.088974 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:30.156523 kubelet[2684]: E1101 02:41:30.156161 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.157621 kubelet[2684]: W1101 02:41:30.157499 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.157621 kubelet[2684]: E1101 02:41:30.157550 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:30.158315 kubelet[2684]: E1101 02:41:30.158134 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.158315 kubelet[2684]: W1101 02:41:30.158157 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.158315 kubelet[2684]: E1101 02:41:30.158174 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:30.159870 kubelet[2684]: E1101 02:41:30.158890 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.159870 kubelet[2684]: W1101 02:41:30.158915 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.159870 kubelet[2684]: E1101 02:41:30.159041 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:30.159870 kubelet[2684]: E1101 02:41:30.159805 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.159870 kubelet[2684]: W1101 02:41:30.159821 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.159870 kubelet[2684]: E1101 02:41:30.159837 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:30.160771 kubelet[2684]: E1101 02:41:30.160351 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.160771 kubelet[2684]: W1101 02:41:30.160367 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.160771 kubelet[2684]: E1101 02:41:30.160497 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:30.161459 kubelet[2684]: E1101 02:41:30.161345 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.161459 kubelet[2684]: W1101 02:41:30.161359 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.161459 kubelet[2684]: E1101 02:41:30.161375 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:30.162689 kubelet[2684]: E1101 02:41:30.162649 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.162689 kubelet[2684]: W1101 02:41:30.162679 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.162826 kubelet[2684]: E1101 02:41:30.162696 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:30.163086 kubelet[2684]: E1101 02:41:30.162992 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.163086 kubelet[2684]: W1101 02:41:30.163013 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.163086 kubelet[2684]: E1101 02:41:30.163029 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:30.163624 kubelet[2684]: E1101 02:41:30.163366 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.163624 kubelet[2684]: W1101 02:41:30.163381 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.163624 kubelet[2684]: E1101 02:41:30.163395 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:30.163941 kubelet[2684]: E1101 02:41:30.163916 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.163941 kubelet[2684]: W1101 02:41:30.163938 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.164062 kubelet[2684]: E1101 02:41:30.163955 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:30.165173 kubelet[2684]: E1101 02:41:30.164424 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.165173 kubelet[2684]: W1101 02:41:30.164458 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.165173 kubelet[2684]: E1101 02:41:30.164495 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:30.165173 kubelet[2684]: E1101 02:41:30.165094 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.165173 kubelet[2684]: W1101 02:41:30.165108 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.165173 kubelet[2684]: E1101 02:41:30.165125 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:30.165531 kubelet[2684]: E1101 02:41:30.165408 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.165531 kubelet[2684]: W1101 02:41:30.165422 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.165531 kubelet[2684]: E1101 02:41:30.165436 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:30.165870 kubelet[2684]: E1101 02:41:30.165710 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.166095 kubelet[2684]: W1101 02:41:30.166057 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.166095 kubelet[2684]: E1101 02:41:30.166091 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:30.166927 kubelet[2684]: E1101 02:41:30.166507 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.166927 kubelet[2684]: W1101 02:41:30.166527 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.166927 kubelet[2684]: E1101 02:41:30.166543 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:30.167897 kubelet[2684]: E1101 02:41:30.167548 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.167897 kubelet[2684]: W1101 02:41:30.167569 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.167897 kubelet[2684]: E1101 02:41:30.167589 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:30.168696 kubelet[2684]: E1101 02:41:30.168636 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.168858 kubelet[2684]: W1101 02:41:30.168796 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.168858 kubelet[2684]: E1101 02:41:30.168822 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 02:41:30.169546 kubelet[2684]: E1101 02:41:30.169526 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 02:41:30.169546 kubelet[2684]: W1101 02:41:30.169546 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 02:41:30.169752 kubelet[2684]: E1101 02:41:30.169564 2684 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 02:41:30.195760 containerd[1506]: time="2025-11-01T02:41:30.194918599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 02:41:30.197352 containerd[1506]: time="2025-11-01T02:41:30.197159117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 1 02:41:30.198468 containerd[1506]: time="2025-11-01T02:41:30.198157521Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 02:41:30.201039 containerd[1506]: time="2025-11-01T02:41:30.200979488Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 02:41:30.202845 containerd[1506]: time="2025-11-01T02:41:30.202145369Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.706936442s" Nov 1 02:41:30.202845 containerd[1506]: time="2025-11-01T02:41:30.202200100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 1 02:41:30.210835 containerd[1506]: time="2025-11-01T02:41:30.210784730Z" level=info msg="CreateContainer within sandbox \"8867cc59ddd937aa40522c9a93622340fff664b24dec1dd927ab0cdceb882f1c\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 02:41:30.233258 containerd[1506]: time="2025-11-01T02:41:30.233207982Z" level=info msg="CreateContainer within sandbox \"8867cc59ddd937aa40522c9a93622340fff664b24dec1dd927ab0cdceb882f1c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6ef28ec64b9ae7bcbd1e49607d06edccf802b5c3694c979031637271849e4013\"" Nov 1 02:41:30.236009 containerd[1506]: time="2025-11-01T02:41:30.235976989Z" level=info msg="StartContainer for \"6ef28ec64b9ae7bcbd1e49607d06edccf802b5c3694c979031637271849e4013\"" Nov 1 02:41:30.307681 systemd[1]: Started cri-containerd-6ef28ec64b9ae7bcbd1e49607d06edccf802b5c3694c979031637271849e4013.scope - libcontainer container 6ef28ec64b9ae7bcbd1e49607d06edccf802b5c3694c979031637271849e4013. Nov 1 02:41:30.366506 containerd[1506]: time="2025-11-01T02:41:30.366391922Z" level=info msg="StartContainer for \"6ef28ec64b9ae7bcbd1e49607d06edccf802b5c3694c979031637271849e4013\" returns successfully" Nov 1 02:41:30.389980 systemd[1]: cri-containerd-6ef28ec64b9ae7bcbd1e49607d06edccf802b5c3694c979031637271849e4013.scope: Deactivated successfully. Nov 1 02:41:30.424593 containerd[1506]: time="2025-11-01T02:41:30.424382839Z" level=info msg="shim disconnected" id=6ef28ec64b9ae7bcbd1e49607d06edccf802b5c3694c979031637271849e4013 namespace=k8s.io Nov 1 02:41:30.424593 containerd[1506]: time="2025-11-01T02:41:30.424517974Z" level=warning msg="cleaning up after shim disconnected" id=6ef28ec64b9ae7bcbd1e49607d06edccf802b5c3694c979031637271849e4013 namespace=k8s.io Nov 1 02:41:30.424593 containerd[1506]: time="2025-11-01T02:41:30.424538814Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 02:41:30.518038 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ef28ec64b9ae7bcbd1e49607d06edccf802b5c3694c979031637271849e4013-rootfs.mount: Deactivated successfully. 
Nov 1 02:41:31.051311 containerd[1506]: time="2025-11-01T02:41:31.051242831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 02:41:31.074812 kubelet[2684]: I1101 02:41:31.074634 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6c7cc68d44-flk2q" podStartSLOduration=3.588209305 podStartE2EDuration="7.074591012s" podCreationTimestamp="2025-11-01 02:41:24 +0000 UTC" firstStartedPulling="2025-11-01 02:41:25.007936131 +0000 UTC m=+27.455706348" lastFinishedPulling="2025-11-01 02:41:28.494317837 +0000 UTC m=+30.942088055" observedRunningTime="2025-11-01 02:41:29.120429648 +0000 UTC m=+31.568199899" watchObservedRunningTime="2025-11-01 02:41:31.074591012 +0000 UTC m=+33.522361241" Nov 1 02:41:31.860505 kubelet[2684]: E1101 02:41:31.860358 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gvm6v" podUID="2db811c5-1134-445e-9e39-ac0e7ee1b427" Nov 1 02:41:33.858274 kubelet[2684]: E1101 02:41:33.858200 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gvm6v" podUID="2db811c5-1134-445e-9e39-ac0e7ee1b427" Nov 1 02:41:35.815972 containerd[1506]: time="2025-11-01T02:41:35.815850154Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 02:41:35.817720 containerd[1506]: time="2025-11-01T02:41:35.817404985Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 1 02:41:35.819049 containerd[1506]: 
time="2025-11-01T02:41:35.818517171Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 02:41:35.821926 containerd[1506]: time="2025-11-01T02:41:35.821844371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 02:41:35.823315 containerd[1506]: time="2025-11-01T02:41:35.823243428Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.771237108s" Nov 1 02:41:35.823667 containerd[1506]: time="2025-11-01T02:41:35.823427772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 02:41:35.831557 containerd[1506]: time="2025-11-01T02:41:35.831506712Z" level=info msg="CreateContainer within sandbox \"8867cc59ddd937aa40522c9a93622340fff664b24dec1dd927ab0cdceb882f1c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 02:41:35.856837 kubelet[2684]: E1101 02:41:35.856788 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gvm6v" podUID="2db811c5-1134-445e-9e39-ac0e7ee1b427" Nov 1 02:41:35.869024 containerd[1506]: time="2025-11-01T02:41:35.868974764Z" level=info msg="CreateContainer within sandbox 
\"8867cc59ddd937aa40522c9a93622340fff664b24dec1dd927ab0cdceb882f1c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"68a1f40da3bbc7a73550957af07a4b6d854070c78291c3207362fe274c85d494\"" Nov 1 02:41:35.872918 containerd[1506]: time="2025-11-01T02:41:35.870859747Z" level=info msg="StartContainer for \"68a1f40da3bbc7a73550957af07a4b6d854070c78291c3207362fe274c85d494\"" Nov 1 02:41:35.942881 systemd[1]: Started cri-containerd-68a1f40da3bbc7a73550957af07a4b6d854070c78291c3207362fe274c85d494.scope - libcontainer container 68a1f40da3bbc7a73550957af07a4b6d854070c78291c3207362fe274c85d494. Nov 1 02:41:35.997272 containerd[1506]: time="2025-11-01T02:41:35.996897608Z" level=info msg="StartContainer for \"68a1f40da3bbc7a73550957af07a4b6d854070c78291c3207362fe274c85d494\" returns successfully" Nov 1 02:41:36.925327 systemd[1]: cri-containerd-68a1f40da3bbc7a73550957af07a4b6d854070c78291c3207362fe274c85d494.scope: Deactivated successfully. Nov 1 02:41:36.987372 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68a1f40da3bbc7a73550957af07a4b6d854070c78291c3207362fe274c85d494-rootfs.mount: Deactivated successfully. 
Nov 1 02:41:37.016250 kubelet[2684]: I1101 02:41:37.016170 2684 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 1 02:41:37.124435 containerd[1506]: time="2025-11-01T02:41:37.123959723Z" level=info msg="shim disconnected" id=68a1f40da3bbc7a73550957af07a4b6d854070c78291c3207362fe274c85d494 namespace=k8s.io Nov 1 02:41:37.124435 containerd[1506]: time="2025-11-01T02:41:37.124103887Z" level=warning msg="cleaning up after shim disconnected" id=68a1f40da3bbc7a73550957af07a4b6d854070c78291c3207362fe274c85d494 namespace=k8s.io Nov 1 02:41:37.124435 containerd[1506]: time="2025-11-01T02:41:37.124132307Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 02:41:37.200999 systemd[1]: Created slice kubepods-burstable-pod2f47046c_5cfe_45c6_8991_37f59ad744e0.slice - libcontainer container kubepods-burstable-pod2f47046c_5cfe_45c6_8991_37f59ad744e0.slice. Nov 1 02:41:37.219083 systemd[1]: Created slice kubepods-besteffort-podefbd1db3_4d1b_4800_b03d_ce570a8bfb0d.slice - libcontainer container kubepods-besteffort-podefbd1db3_4d1b_4800_b03d_ce570a8bfb0d.slice. Nov 1 02:41:37.234194 systemd[1]: Created slice kubepods-besteffort-poda1c04a34_b552_49b3_a9dc_198853e53df9.slice - libcontainer container kubepods-besteffort-poda1c04a34_b552_49b3_a9dc_198853e53df9.slice. Nov 1 02:41:37.247387 systemd[1]: Created slice kubepods-besteffort-podb426dedc_f58e_4d16_a987_3056f24fa4d7.slice - libcontainer container kubepods-besteffort-podb426dedc_f58e_4d16_a987_3056f24fa4d7.slice. Nov 1 02:41:37.261208 systemd[1]: Created slice kubepods-burstable-pod7a9a07dc_fa9f_46a5_a187_3cb9d24e0c6b.slice - libcontainer container kubepods-burstable-pod7a9a07dc_fa9f_46a5_a187_3cb9d24e0c6b.slice. Nov 1 02:41:37.283973 systemd[1]: Created slice kubepods-besteffort-pode1215547_a56a_4c57_957b_4ea0376bfb33.slice - libcontainer container kubepods-besteffort-pode1215547_a56a_4c57_957b_4ea0376bfb33.slice. 
Nov 1 02:41:37.297425 systemd[1]: Created slice kubepods-besteffort-pod552d1b77_edb7_4e78_b15b_ddf34ab43f14.slice - libcontainer container kubepods-besteffort-pod552d1b77_edb7_4e78_b15b_ddf34ab43f14.slice. Nov 1 02:41:37.312587 kubelet[2684]: I1101 02:41:37.312535 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f47046c-5cfe-45c6-8991-37f59ad744e0-config-volume\") pod \"coredns-66bc5c9577-kzkkp\" (UID: \"2f47046c-5cfe-45c6-8991-37f59ad744e0\") " pod="kube-system/coredns-66bc5c9577-kzkkp" Nov 1 02:41:37.313076 kubelet[2684]: I1101 02:41:37.312946 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/552d1b77-edb7-4e78-b15b-ddf34ab43f14-whisker-backend-key-pair\") pod \"whisker-6bd9f556bc-xfhqg\" (UID: \"552d1b77-edb7-4e78-b15b-ddf34ab43f14\") " pod="calico-system/whisker-6bd9f556bc-xfhqg" Nov 1 02:41:37.313076 kubelet[2684]: I1101 02:41:37.313045 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/552d1b77-edb7-4e78-b15b-ddf34ab43f14-whisker-ca-bundle\") pod \"whisker-6bd9f556bc-xfhqg\" (UID: \"552d1b77-edb7-4e78-b15b-ddf34ab43f14\") " pod="calico-system/whisker-6bd9f556bc-xfhqg" Nov 1 02:41:37.313490 kubelet[2684]: I1101 02:41:37.313322 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ghjg\" (UniqueName: \"kubernetes.io/projected/efbd1db3-4d1b-4800-b03d-ce570a8bfb0d-kube-api-access-8ghjg\") pod \"calico-kube-controllers-754cd6d684-rxmrt\" (UID: \"efbd1db3-4d1b-4800-b03d-ce570a8bfb0d\") " pod="calico-system/calico-kube-controllers-754cd6d684-rxmrt" Nov 1 02:41:37.313490 kubelet[2684]: I1101 02:41:37.313413 2684 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgw8x\" (UniqueName: \"kubernetes.io/projected/552d1b77-edb7-4e78-b15b-ddf34ab43f14-kube-api-access-kgw8x\") pod \"whisker-6bd9f556bc-xfhqg\" (UID: \"552d1b77-edb7-4e78-b15b-ddf34ab43f14\") " pod="calico-system/whisker-6bd9f556bc-xfhqg" Nov 1 02:41:37.313828 kubelet[2684]: I1101 02:41:37.313675 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a9a07dc-fa9f-46a5-a187-3cb9d24e0c6b-config-volume\") pod \"coredns-66bc5c9577-9qqz8\" (UID: \"7a9a07dc-fa9f-46a5-a187-3cb9d24e0c6b\") " pod="kube-system/coredns-66bc5c9577-9qqz8" Nov 1 02:41:37.314487 kubelet[2684]: I1101 02:41:37.313720 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7nvq\" (UniqueName: \"kubernetes.io/projected/2f47046c-5cfe-45c6-8991-37f59ad744e0-kube-api-access-z7nvq\") pod \"coredns-66bc5c9577-kzkkp\" (UID: \"2f47046c-5cfe-45c6-8991-37f59ad744e0\") " pod="kube-system/coredns-66bc5c9577-kzkkp" Nov 1 02:41:37.314487 kubelet[2684]: I1101 02:41:37.314305 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shrb6\" (UniqueName: \"kubernetes.io/projected/a1c04a34-b552-49b3-a9dc-198853e53df9-kube-api-access-shrb6\") pod \"calico-apiserver-6f8dc58755-bvnkc\" (UID: \"a1c04a34-b552-49b3-a9dc-198853e53df9\") " pod="calico-apiserver/calico-apiserver-6f8dc58755-bvnkc" Nov 1 02:41:37.314487 kubelet[2684]: I1101 02:41:37.314340 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e1215547-a56a-4c57-957b-4ea0376bfb33-goldmane-key-pair\") pod \"goldmane-7c778bb748-t726m\" (UID: \"e1215547-a56a-4c57-957b-4ea0376bfb33\") " pod="calico-system/goldmane-7c778bb748-t726m" Nov 1 02:41:37.314487 
kubelet[2684]: I1101 02:41:37.314374 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgnr9\" (UniqueName: \"kubernetes.io/projected/e1215547-a56a-4c57-957b-4ea0376bfb33-kube-api-access-fgnr9\") pod \"goldmane-7c778bb748-t726m\" (UID: \"e1215547-a56a-4c57-957b-4ea0376bfb33\") " pod="calico-system/goldmane-7c778bb748-t726m" Nov 1 02:41:37.314487 kubelet[2684]: I1101 02:41:37.314403 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql85h\" (UniqueName: \"kubernetes.io/projected/7a9a07dc-fa9f-46a5-a187-3cb9d24e0c6b-kube-api-access-ql85h\") pod \"coredns-66bc5c9577-9qqz8\" (UID: \"7a9a07dc-fa9f-46a5-a187-3cb9d24e0c6b\") " pod="kube-system/coredns-66bc5c9577-9qqz8" Nov 1 02:41:37.314833 kubelet[2684]: I1101 02:41:37.314441 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1215547-a56a-4c57-957b-4ea0376bfb33-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-t726m\" (UID: \"e1215547-a56a-4c57-957b-4ea0376bfb33\") " pod="calico-system/goldmane-7c778bb748-t726m" Nov 1 02:41:37.314833 kubelet[2684]: I1101 02:41:37.314500 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a1c04a34-b552-49b3-a9dc-198853e53df9-calico-apiserver-certs\") pod \"calico-apiserver-6f8dc58755-bvnkc\" (UID: \"a1c04a34-b552-49b3-a9dc-198853e53df9\") " pod="calico-apiserver/calico-apiserver-6f8dc58755-bvnkc" Nov 1 02:41:37.314833 kubelet[2684]: I1101 02:41:37.314532 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1215547-a56a-4c57-957b-4ea0376bfb33-config\") pod \"goldmane-7c778bb748-t726m\" (UID: \"e1215547-a56a-4c57-957b-4ea0376bfb33\") " 
pod="calico-system/goldmane-7c778bb748-t726m" Nov 1 02:41:37.314833 kubelet[2684]: I1101 02:41:37.314573 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b426dedc-f58e-4d16-a987-3056f24fa4d7-calico-apiserver-certs\") pod \"calico-apiserver-6f8dc58755-gnqxq\" (UID: \"b426dedc-f58e-4d16-a987-3056f24fa4d7\") " pod="calico-apiserver/calico-apiserver-6f8dc58755-gnqxq" Nov 1 02:41:37.314833 kubelet[2684]: I1101 02:41:37.314597 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq4mb\" (UniqueName: \"kubernetes.io/projected/b426dedc-f58e-4d16-a987-3056f24fa4d7-kube-api-access-xq4mb\") pod \"calico-apiserver-6f8dc58755-gnqxq\" (UID: \"b426dedc-f58e-4d16-a987-3056f24fa4d7\") " pod="calico-apiserver/calico-apiserver-6f8dc58755-gnqxq" Nov 1 02:41:37.316141 kubelet[2684]: I1101 02:41:37.314653 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efbd1db3-4d1b-4800-b03d-ce570a8bfb0d-tigera-ca-bundle\") pod \"calico-kube-controllers-754cd6d684-rxmrt\" (UID: \"efbd1db3-4d1b-4800-b03d-ce570a8bfb0d\") " pod="calico-system/calico-kube-controllers-754cd6d684-rxmrt" Nov 1 02:41:37.513810 containerd[1506]: time="2025-11-01T02:41:37.513636160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kzkkp,Uid:2f47046c-5cfe-45c6-8991-37f59ad744e0,Namespace:kube-system,Attempt:0,}" Nov 1 02:41:37.529037 containerd[1506]: time="2025-11-01T02:41:37.528835175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-754cd6d684-rxmrt,Uid:efbd1db3-4d1b-4800-b03d-ce570a8bfb0d,Namespace:calico-system,Attempt:0,}" Nov 1 02:41:37.543023 containerd[1506]: time="2025-11-01T02:41:37.542987938Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6f8dc58755-bvnkc,Uid:a1c04a34-b552-49b3-a9dc-198853e53df9,Namespace:calico-apiserver,Attempt:0,}" Nov 1 02:41:37.563007 containerd[1506]: time="2025-11-01T02:41:37.562415480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8dc58755-gnqxq,Uid:b426dedc-f58e-4d16-a987-3056f24fa4d7,Namespace:calico-apiserver,Attempt:0,}" Nov 1 02:41:37.587453 containerd[1506]: time="2025-11-01T02:41:37.587378714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9qqz8,Uid:7a9a07dc-fa9f-46a5-a187-3cb9d24e0c6b,Namespace:kube-system,Attempt:0,}" Nov 1 02:41:37.601793 containerd[1506]: time="2025-11-01T02:41:37.601739741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-t726m,Uid:e1215547-a56a-4c57-957b-4ea0376bfb33,Namespace:calico-system,Attempt:0,}" Nov 1 02:41:37.615385 containerd[1506]: time="2025-11-01T02:41:37.614735669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bd9f556bc-xfhqg,Uid:552d1b77-edb7-4e78-b15b-ddf34ab43f14,Namespace:calico-system,Attempt:0,}" Nov 1 02:41:37.870072 systemd[1]: Created slice kubepods-besteffort-pod2db811c5_1134_445e_9e39_ac0e7ee1b427.slice - libcontainer container kubepods-besteffort-pod2db811c5_1134_445e_9e39_ac0e7ee1b427.slice. 
Nov 1 02:41:37.880940 containerd[1506]: time="2025-11-01T02:41:37.880891144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gvm6v,Uid:2db811c5-1134-445e-9e39-ac0e7ee1b427,Namespace:calico-system,Attempt:0,}" Nov 1 02:41:38.043050 containerd[1506]: time="2025-11-01T02:41:38.042981504Z" level=error msg="Failed to destroy network for sandbox \"b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.046548 containerd[1506]: time="2025-11-01T02:41:38.045757252Z" level=error msg="Failed to destroy network for sandbox \"b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.049295 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149-shm.mount: Deactivated successfully. Nov 1 02:41:38.057883 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67-shm.mount: Deactivated successfully. 
Nov 1 02:41:38.066196 containerd[1506]: time="2025-11-01T02:41:38.065778277Z" level=error msg="encountered an error cleaning up failed sandbox \"b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.066196 containerd[1506]: time="2025-11-01T02:41:38.065911989Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bd9f556bc-xfhqg,Uid:552d1b77-edb7-4e78-b15b-ddf34ab43f14,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.066196 containerd[1506]: time="2025-11-01T02:41:38.066101131Z" level=error msg="encountered an error cleaning up failed sandbox \"b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.066878 containerd[1506]: time="2025-11-01T02:41:38.066204802Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kzkkp,Uid:2f47046c-5cfe-45c6-8991-37f59ad744e0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.070622 
kubelet[2684]: E1101 02:41:38.069762 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.070622 kubelet[2684]: E1101 02:41:38.070012 2684 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-kzkkp" Nov 1 02:41:38.070622 kubelet[2684]: E1101 02:41:38.070072 2684 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-kzkkp" Nov 1 02:41:38.071292 kubelet[2684]: E1101 02:41:38.070223 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-kzkkp_kube-system(2f47046c-5cfe-45c6-8991-37f59ad744e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-kzkkp_kube-system(2f47046c-5cfe-45c6-8991-37f59ad744e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-kzkkp" podUID="2f47046c-5cfe-45c6-8991-37f59ad744e0" Nov 1 02:41:38.073160 kubelet[2684]: E1101 02:41:38.071747 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.073160 kubelet[2684]: E1101 02:41:38.071874 2684 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6bd9f556bc-xfhqg" Nov 1 02:41:38.073160 kubelet[2684]: E1101 02:41:38.071903 2684 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6bd9f556bc-xfhqg" Nov 1 02:41:38.075562 kubelet[2684]: E1101 02:41:38.071974 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6bd9f556bc-xfhqg_calico-system(552d1b77-edb7-4e78-b15b-ddf34ab43f14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-6bd9f556bc-xfhqg_calico-system(552d1b77-edb7-4e78-b15b-ddf34ab43f14)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6bd9f556bc-xfhqg" podUID="552d1b77-edb7-4e78-b15b-ddf34ab43f14" Nov 1 02:41:38.084251 kubelet[2684]: I1101 02:41:38.082589 2684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67" Nov 1 02:41:38.087131 kubelet[2684]: I1101 02:41:38.087104 2684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" Nov 1 02:41:38.096238 containerd[1506]: time="2025-11-01T02:41:38.096146435Z" level=info msg="StopPodSandbox for \"b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149\"" Nov 1 02:41:38.102774 containerd[1506]: time="2025-11-01T02:41:38.098162527Z" level=info msg="StopPodSandbox for \"b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67\"" Nov 1 02:41:38.103177 containerd[1506]: time="2025-11-01T02:41:38.103143642Z" level=info msg="Ensure that sandbox b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67 in task-service has been cleanup successfully" Nov 1 02:41:38.109097 containerd[1506]: time="2025-11-01T02:41:38.109053212Z" level=info msg="Ensure that sandbox b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149 in task-service has been cleanup successfully" Nov 1 02:41:38.169056 containerd[1506]: time="2025-11-01T02:41:38.168880563Z" level=error msg="Failed to destroy network for sandbox \"1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23\"" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.173842 containerd[1506]: time="2025-11-01T02:41:38.172058377Z" level=error msg="encountered an error cleaning up failed sandbox \"1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.177396 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23-shm.mount: Deactivated successfully. Nov 1 02:41:38.187239 containerd[1506]: time="2025-11-01T02:41:38.187123122Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8dc58755-bvnkc,Uid:a1c04a34-b552-49b3-a9dc-198853e53df9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.188183 kubelet[2684]: E1101 02:41:38.187619 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.188183 kubelet[2684]: E1101 02:41:38.187692 2684 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f8dc58755-bvnkc" Nov 1 02:41:38.188183 kubelet[2684]: E1101 02:41:38.187723 2684 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f8dc58755-bvnkc" Nov 1 02:41:38.190313 kubelet[2684]: E1101 02:41:38.190155 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f8dc58755-bvnkc_calico-apiserver(a1c04a34-b552-49b3-a9dc-198853e53df9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f8dc58755-bvnkc_calico-apiserver(a1c04a34-b552-49b3-a9dc-198853e53df9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f8dc58755-bvnkc" podUID="a1c04a34-b552-49b3-a9dc-198853e53df9" Nov 1 02:41:38.196265 containerd[1506]: time="2025-11-01T02:41:38.196211271Z" level=error msg="Failed to destroy network for sandbox \"faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Nov 1 02:41:38.203264 containerd[1506]: time="2025-11-01T02:41:38.201242955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 02:41:38.203264 containerd[1506]: time="2025-11-01T02:41:38.201393190Z" level=error msg="Failed to destroy network for sandbox \"b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.203264 containerd[1506]: time="2025-11-01T02:41:38.201979570Z" level=error msg="encountered an error cleaning up failed sandbox \"b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.203264 containerd[1506]: time="2025-11-01T02:41:38.202041346Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9qqz8,Uid:7a9a07dc-fa9f-46a5-a187-3cb9d24e0c6b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.203264 containerd[1506]: time="2025-11-01T02:41:38.202275938Z" level=error msg="encountered an error cleaning up failed sandbox \"faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 1 02:41:38.203264 containerd[1506]: time="2025-11-01T02:41:38.202324163Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8dc58755-gnqxq,Uid:b426dedc-f58e-4d16-a987-3056f24fa4d7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.203116 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38-shm.mount: Deactivated successfully. Nov 1 02:41:38.207685 kubelet[2684]: E1101 02:41:38.204051 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.207685 kubelet[2684]: E1101 02:41:38.204114 2684 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f8dc58755-gnqxq" Nov 1 02:41:38.207685 kubelet[2684]: E1101 02:41:38.204173 2684 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f8dc58755-gnqxq" Nov 1 02:41:38.207897 kubelet[2684]: E1101 02:41:38.204240 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f8dc58755-gnqxq_calico-apiserver(b426dedc-f58e-4d16-a987-3056f24fa4d7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f8dc58755-gnqxq_calico-apiserver(b426dedc-f58e-4d16-a987-3056f24fa4d7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f8dc58755-gnqxq" podUID="b426dedc-f58e-4d16-a987-3056f24fa4d7" Nov 1 02:41:38.207897 kubelet[2684]: E1101 02:41:38.205506 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.207897 kubelet[2684]: E1101 02:41:38.205547 2684 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9qqz8" Nov 1 
02:41:38.208089 kubelet[2684]: E1101 02:41:38.205576 2684 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9qqz8" Nov 1 02:41:38.208089 kubelet[2684]: E1101 02:41:38.205639 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-9qqz8_kube-system(7a9a07dc-fa9f-46a5-a187-3cb9d24e0c6b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-9qqz8_kube-system(7a9a07dc-fa9f-46a5-a187-3cb9d24e0c6b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-9qqz8" podUID="7a9a07dc-fa9f-46a5-a187-3cb9d24e0c6b" Nov 1 02:41:38.210522 containerd[1506]: time="2025-11-01T02:41:38.210474107Z" level=error msg="Failed to destroy network for sandbox \"060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.211860 containerd[1506]: time="2025-11-01T02:41:38.211570645Z" level=error msg="encountered an error cleaning up failed sandbox \"060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.212171 containerd[1506]: time="2025-11-01T02:41:38.212058613Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-754cd6d684-rxmrt,Uid:efbd1db3-4d1b-4800-b03d-ce570a8bfb0d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.217912 kubelet[2684]: E1101 02:41:38.217127 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.217912 kubelet[2684]: E1101 02:41:38.217194 2684 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-754cd6d684-rxmrt" Nov 1 02:41:38.217912 kubelet[2684]: E1101 02:41:38.217222 2684 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-754cd6d684-rxmrt" Nov 1 02:41:38.219014 kubelet[2684]: E1101 02:41:38.217289 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-754cd6d684-rxmrt_calico-system(efbd1db3-4d1b-4800-b03d-ce570a8bfb0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-754cd6d684-rxmrt_calico-system(efbd1db3-4d1b-4800-b03d-ce570a8bfb0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-754cd6d684-rxmrt" podUID="efbd1db3-4d1b-4800-b03d-ce570a8bfb0d" Nov 1 02:41:38.223671 containerd[1506]: time="2025-11-01T02:41:38.222997462Z" level=error msg="Failed to destroy network for sandbox \"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.227893 containerd[1506]: time="2025-11-01T02:41:38.227383922Z" level=error msg="encountered an error cleaning up failed sandbox \"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.227893 containerd[1506]: time="2025-11-01T02:41:38.227482595Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7c778bb748-t726m,Uid:e1215547-a56a-4c57-957b-4ea0376bfb33,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.233898 kubelet[2684]: E1101 02:41:38.233513 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.233898 kubelet[2684]: E1101 02:41:38.233683 2684 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-t726m" Nov 1 02:41:38.233898 kubelet[2684]: E1101 02:41:38.233842 2684 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-t726m" Nov 1 02:41:38.235225 kubelet[2684]: E1101 02:41:38.234866 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"goldmane-7c778bb748-t726m_calico-system(e1215547-a56a-4c57-957b-4ea0376bfb33)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-t726m_calico-system(e1215547-a56a-4c57-957b-4ea0376bfb33)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-t726m" podUID="e1215547-a56a-4c57-957b-4ea0376bfb33" Nov 1 02:41:38.301114 containerd[1506]: time="2025-11-01T02:41:38.301048629Z" level=error msg="Failed to destroy network for sandbox \"2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.301752 containerd[1506]: time="2025-11-01T02:41:38.301546461Z" level=error msg="encountered an error cleaning up failed sandbox \"2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.301752 containerd[1506]: time="2025-11-01T02:41:38.301706731Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gvm6v,Uid:2db811c5-1134-445e-9e39-ac0e7ee1b427,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.302164 kubelet[2684]: E1101 02:41:38.302076 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.302504 kubelet[2684]: E1101 02:41:38.302337 2684 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gvm6v" Nov 1 02:41:38.302504 kubelet[2684]: E1101 02:41:38.302376 2684 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gvm6v" Nov 1 02:41:38.304389 kubelet[2684]: E1101 02:41:38.302781 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gvm6v_calico-system(2db811c5-1134-445e-9e39-ac0e7ee1b427)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gvm6v_calico-system(2db811c5-1134-445e-9e39-ac0e7ee1b427)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gvm6v" podUID="2db811c5-1134-445e-9e39-ac0e7ee1b427" Nov 1 02:41:38.321146 containerd[1506]: time="2025-11-01T02:41:38.321003189Z" level=error msg="StopPodSandbox for \"b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149\" failed" error="failed to destroy network for sandbox \"b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.322065 kubelet[2684]: E1101 02:41:38.321578 2684 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" Nov 1 02:41:38.322065 kubelet[2684]: E1101 02:41:38.321668 2684 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149"} Nov 1 02:41:38.322065 kubelet[2684]: E1101 02:41:38.321787 2684 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2f47046c-5cfe-45c6-8991-37f59ad744e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 02:41:38.322065 kubelet[2684]: E1101 02:41:38.321827 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2f47046c-5cfe-45c6-8991-37f59ad744e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-kzkkp" podUID="2f47046c-5cfe-45c6-8991-37f59ad744e0" Nov 1 02:41:38.327320 containerd[1506]: time="2025-11-01T02:41:38.327186505Z" level=error msg="StopPodSandbox for \"b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67\" failed" error="failed to destroy network for sandbox \"b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:38.327661 kubelet[2684]: E1101 02:41:38.327592 2684 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67" Nov 1 02:41:38.327767 kubelet[2684]: E1101 02:41:38.327666 2684 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67"} Nov 1 02:41:38.327767 kubelet[2684]: E1101 
02:41:38.327708 2684 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"552d1b77-edb7-4e78-b15b-ddf34ab43f14\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 02:41:38.327917 kubelet[2684]: E1101 02:41:38.327761 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"552d1b77-edb7-4e78-b15b-ddf34ab43f14\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6bd9f556bc-xfhqg" podUID="552d1b77-edb7-4e78-b15b-ddf34ab43f14" Nov 1 02:41:38.987267 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7-shm.mount: Deactivated successfully. Nov 1 02:41:38.987425 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c-shm.mount: Deactivated successfully. Nov 1 02:41:38.987558 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015-shm.mount: Deactivated successfully. Nov 1 02:41:38.987687 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732-shm.mount: Deactivated successfully. 
Nov 1 02:41:39.117990 kubelet[2684]: I1101 02:41:39.117949 2684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7" Nov 1 02:41:39.119714 containerd[1506]: time="2025-11-01T02:41:39.118989815Z" level=info msg="StopPodSandbox for \"2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7\"" Nov 1 02:41:39.119714 containerd[1506]: time="2025-11-01T02:41:39.119251343Z" level=info msg="Ensure that sandbox 2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7 in task-service has been cleanup successfully" Nov 1 02:41:39.122480 kubelet[2684]: I1101 02:41:39.122080 2684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" Nov 1 02:41:39.124816 containerd[1506]: time="2025-11-01T02:41:39.124251585Z" level=info msg="StopPodSandbox for \"b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015\"" Nov 1 02:41:39.124816 containerd[1506]: time="2025-11-01T02:41:39.124572692Z" level=info msg="Ensure that sandbox b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015 in task-service has been cleanup successfully" Nov 1 02:41:39.127781 kubelet[2684]: I1101 02:41:39.126825 2684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38" Nov 1 02:41:39.129754 containerd[1506]: time="2025-11-01T02:41:39.128939762Z" level=info msg="StopPodSandbox for \"faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38\"" Nov 1 02:41:39.132078 kubelet[2684]: I1101 02:41:39.131411 2684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" Nov 1 02:41:39.134309 containerd[1506]: time="2025-11-01T02:41:39.133590881Z" level=info msg="Ensure that sandbox 
faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38 in task-service has been cleanup successfully" Nov 1 02:41:39.138947 containerd[1506]: time="2025-11-01T02:41:39.138911590Z" level=info msg="StopPodSandbox for \"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c\"" Nov 1 02:41:39.143094 containerd[1506]: time="2025-11-01T02:41:39.143053648Z" level=info msg="Ensure that sandbox e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c in task-service has been cleanup successfully" Nov 1 02:41:39.148949 kubelet[2684]: I1101 02:41:39.148912 2684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" Nov 1 02:41:39.157220 containerd[1506]: time="2025-11-01T02:41:39.157174141Z" level=info msg="StopPodSandbox for \"1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23\"" Nov 1 02:41:39.158283 containerd[1506]: time="2025-11-01T02:41:39.158250862Z" level=info msg="Ensure that sandbox 1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23 in task-service has been cleanup successfully" Nov 1 02:41:39.188722 kubelet[2684]: I1101 02:41:39.188672 2684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" Nov 1 02:41:39.191866 containerd[1506]: time="2025-11-01T02:41:39.191771339Z" level=info msg="StopPodSandbox for \"060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732\"" Nov 1 02:41:39.193155 containerd[1506]: time="2025-11-01T02:41:39.193123389Z" level=info msg="Ensure that sandbox 060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732 in task-service has been cleanup successfully" Nov 1 02:41:39.238206 containerd[1506]: time="2025-11-01T02:41:39.238021220Z" level=error msg="StopPodSandbox for \"2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7\" failed" error="failed to destroy network for 
sandbox \"2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:39.239735 kubelet[2684]: E1101 02:41:39.239417 2684 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7" Nov 1 02:41:39.239735 kubelet[2684]: E1101 02:41:39.239529 2684 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7"} Nov 1 02:41:39.239735 kubelet[2684]: E1101 02:41:39.239654 2684 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2db811c5-1134-445e-9e39-ac0e7ee1b427\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 02:41:39.239996 kubelet[2684]: E1101 02:41:39.239754 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2db811c5-1134-445e-9e39-ac0e7ee1b427\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gvm6v" podUID="2db811c5-1134-445e-9e39-ac0e7ee1b427" Nov 1 02:41:39.247748 containerd[1506]: time="2025-11-01T02:41:39.246957515Z" level=error msg="StopPodSandbox for \"b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015\" failed" error="failed to destroy network for sandbox \"b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:39.247917 kubelet[2684]: E1101 02:41:39.247236 2684 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" Nov 1 02:41:39.247917 kubelet[2684]: E1101 02:41:39.247291 2684 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015"} Nov 1 02:41:39.247917 kubelet[2684]: E1101 02:41:39.247330 2684 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7a9a07dc-fa9f-46a5-a187-3cb9d24e0c6b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Nov 1 02:41:39.247917 kubelet[2684]: E1101 02:41:39.247367 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7a9a07dc-fa9f-46a5-a187-3cb9d24e0c6b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-9qqz8" podUID="7a9a07dc-fa9f-46a5-a187-3cb9d24e0c6b" Nov 1 02:41:39.277354 containerd[1506]: time="2025-11-01T02:41:39.277279373Z" level=error msg="StopPodSandbox for \"060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732\" failed" error="failed to destroy network for sandbox \"060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:39.278086 kubelet[2684]: E1101 02:41:39.277872 2684 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" Nov 1 02:41:39.278086 kubelet[2684]: E1101 02:41:39.277944 2684 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732"} Nov 1 02:41:39.278372 kubelet[2684]: E1101 02:41:39.278258 2684 kuberuntime_manager.go:1233] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"efbd1db3-4d1b-4800-b03d-ce570a8bfb0d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 02:41:39.278372 kubelet[2684]: E1101 02:41:39.278327 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"efbd1db3-4d1b-4800-b03d-ce570a8bfb0d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-754cd6d684-rxmrt" podUID="efbd1db3-4d1b-4800-b03d-ce570a8bfb0d" Nov 1 02:41:39.284086 containerd[1506]: time="2025-11-01T02:41:39.283094374Z" level=error msg="StopPodSandbox for \"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c\" failed" error="failed to destroy network for sandbox \"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:39.284194 kubelet[2684]: E1101 02:41:39.283285 2684 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" Nov 1 02:41:39.284194 kubelet[2684]: E1101 02:41:39.283342 2684 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c"} Nov 1 02:41:39.284194 kubelet[2684]: E1101 02:41:39.283375 2684 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e1215547-a56a-4c57-957b-4ea0376bfb33\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 02:41:39.284194 kubelet[2684]: E1101 02:41:39.283407 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e1215547-a56a-4c57-957b-4ea0376bfb33\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-t726m" podUID="e1215547-a56a-4c57-957b-4ea0376bfb33" Nov 1 02:41:39.286294 containerd[1506]: time="2025-11-01T02:41:39.286223680Z" level=error msg="StopPodSandbox for \"faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38\" failed" error="failed to destroy network for sandbox \"faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:39.286671 kubelet[2684]: E1101 02:41:39.286496 2684 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38" Nov 1 02:41:39.286671 kubelet[2684]: E1101 02:41:39.286552 2684 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38"} Nov 1 02:41:39.286671 kubelet[2684]: E1101 02:41:39.286598 2684 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b426dedc-f58e-4d16-a987-3056f24fa4d7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 02:41:39.286671 kubelet[2684]: E1101 02:41:39.286632 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b426dedc-f58e-4d16-a987-3056f24fa4d7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f8dc58755-gnqxq" 
podUID="b426dedc-f58e-4d16-a987-3056f24fa4d7" Nov 1 02:41:39.290429 containerd[1506]: time="2025-11-01T02:41:39.290352120Z" level=error msg="StopPodSandbox for \"1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23\" failed" error="failed to destroy network for sandbox \"1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:39.290708 kubelet[2684]: E1101 02:41:39.290648 2684 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" Nov 1 02:41:39.290789 kubelet[2684]: E1101 02:41:39.290710 2684 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23"} Nov 1 02:41:39.290789 kubelet[2684]: E1101 02:41:39.290743 2684 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a1c04a34-b552-49b3-a9dc-198853e53df9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 02:41:39.290789 kubelet[2684]: E1101 02:41:39.290775 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"a1c04a34-b552-49b3-a9dc-198853e53df9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f8dc58755-bvnkc" podUID="a1c04a34-b552-49b3-a9dc-198853e53df9" Nov 1 02:41:46.808716 kubelet[2684]: I1101 02:41:46.798353 2684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 02:41:48.456776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3104105898.mount: Deactivated successfully. Nov 1 02:41:48.584204 containerd[1506]: time="2025-11-01T02:41:48.583948709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 1 02:41:48.621721 containerd[1506]: time="2025-11-01T02:41:48.621649655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 02:41:48.699723 containerd[1506]: time="2025-11-01T02:41:48.699601398Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 02:41:48.703361 containerd[1506]: time="2025-11-01T02:41:48.703175609Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 02:41:48.710436 containerd[1506]: time="2025-11-01T02:41:48.709596166Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", 
repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 10.50012555s" Nov 1 02:41:48.710436 containerd[1506]: time="2025-11-01T02:41:48.709695160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 02:41:48.802363 containerd[1506]: time="2025-11-01T02:41:48.802295702Z" level=info msg="CreateContainer within sandbox \"8867cc59ddd937aa40522c9a93622340fff664b24dec1dd927ab0cdceb882f1c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 02:41:48.911361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3637091762.mount: Deactivated successfully. Nov 1 02:41:48.933544 containerd[1506]: time="2025-11-01T02:41:48.932840181Z" level=info msg="CreateContainer within sandbox \"8867cc59ddd937aa40522c9a93622340fff664b24dec1dd927ab0cdceb882f1c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b5657786bf80f3a6b16f48f883b26690c4a0e1935c5ca51789d03650cfdfa2fa\"" Nov 1 02:41:48.951473 containerd[1506]: time="2025-11-01T02:41:48.951243348Z" level=info msg="StartContainer for \"b5657786bf80f3a6b16f48f883b26690c4a0e1935c5ca51789d03650cfdfa2fa\"" Nov 1 02:41:49.107719 systemd[1]: Started cri-containerd-b5657786bf80f3a6b16f48f883b26690c4a0e1935c5ca51789d03650cfdfa2fa.scope - libcontainer container b5657786bf80f3a6b16f48f883b26690c4a0e1935c5ca51789d03650cfdfa2fa. Nov 1 02:41:49.258548 containerd[1506]: time="2025-11-01T02:41:49.258498057Z" level=info msg="StartContainer for \"b5657786bf80f3a6b16f48f883b26690c4a0e1935c5ca51789d03650cfdfa2fa\" returns successfully" Nov 1 02:41:49.731764 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 02:41:49.733031 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 1 02:41:49.863326 containerd[1506]: time="2025-11-01T02:41:49.863233263Z" level=info msg="StopPodSandbox for \"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c\"" Nov 1 02:41:49.981311 containerd[1506]: time="2025-11-01T02:41:49.981031922Z" level=error msg="StopPodSandbox for \"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c\" failed" error="failed to destroy network for sandbox \"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 02:41:49.982593 kubelet[2684]: E1101 02:41:49.982206 2684 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" Nov 1 02:41:49.982593 kubelet[2684]: E1101 02:41:49.982305 2684 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c"} Nov 1 02:41:49.982593 kubelet[2684]: E1101 02:41:49.982370 2684 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e1215547-a56a-4c57-957b-4ea0376bfb33\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 
02:41:49.983401 kubelet[2684]: E1101 02:41:49.982420 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e1215547-a56a-4c57-957b-4ea0376bfb33\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-t726m" podUID="e1215547-a56a-4c57-957b-4ea0376bfb33" Nov 1 02:41:49.985119 containerd[1506]: time="2025-11-01T02:41:49.985065314Z" level=info msg="StopPodSandbox for \"b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67\"" Nov 1 02:41:50.441255 containerd[1506]: 2025-11-01 02:41:50.132 [INFO][3967] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67" Nov 1 02:41:50.441255 containerd[1506]: 2025-11-01 02:41:50.133 [INFO][3967] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67" iface="eth0" netns="/var/run/netns/cni-b486b862-155f-9569-bc93-6f9dc6fb2368" Nov 1 02:41:50.441255 containerd[1506]: 2025-11-01 02:41:50.134 [INFO][3967] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67" iface="eth0" netns="/var/run/netns/cni-b486b862-155f-9569-bc93-6f9dc6fb2368" Nov 1 02:41:50.441255 containerd[1506]: 2025-11-01 02:41:50.135 [INFO][3967] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67" iface="eth0" netns="/var/run/netns/cni-b486b862-155f-9569-bc93-6f9dc6fb2368" Nov 1 02:41:50.441255 containerd[1506]: 2025-11-01 02:41:50.135 [INFO][3967] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67" Nov 1 02:41:50.441255 containerd[1506]: 2025-11-01 02:41:50.135 [INFO][3967] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67" Nov 1 02:41:50.441255 containerd[1506]: 2025-11-01 02:41:50.385 [INFO][3980] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67" HandleID="k8s-pod-network.b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67" Workload="srv--liqqm.gb1.brightbox.com-k8s-whisker--6bd9f556bc--xfhqg-eth0" Nov 1 02:41:50.441255 containerd[1506]: 2025-11-01 02:41:50.392 [INFO][3980] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 02:41:50.441255 containerd[1506]: 2025-11-01 02:41:50.393 [INFO][3980] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 02:41:50.441255 containerd[1506]: 2025-11-01 02:41:50.428 [WARNING][3980] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67" HandleID="k8s-pod-network.b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67" Workload="srv--liqqm.gb1.brightbox.com-k8s-whisker--6bd9f556bc--xfhqg-eth0" Nov 1 02:41:50.441255 containerd[1506]: 2025-11-01 02:41:50.428 [INFO][3980] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67" HandleID="k8s-pod-network.b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67" Workload="srv--liqqm.gb1.brightbox.com-k8s-whisker--6bd9f556bc--xfhqg-eth0" Nov 1 02:41:50.441255 containerd[1506]: 2025-11-01 02:41:50.434 [INFO][3980] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 02:41:50.441255 containerd[1506]: 2025-11-01 02:41:50.438 [INFO][3967] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67" Nov 1 02:41:50.443357 containerd[1506]: time="2025-11-01T02:41:50.441916465Z" level=info msg="TearDown network for sandbox \"b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67\" successfully" Nov 1 02:41:50.443357 containerd[1506]: time="2025-11-01T02:41:50.441952854Z" level=info msg="StopPodSandbox for \"b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67\" returns successfully" Nov 1 02:41:50.448326 systemd[1]: run-netns-cni\x2db486b862\x2d155f\x2d9569\x2dbc93\x2d6f9dc6fb2368.mount: Deactivated successfully. 
Nov 1 02:41:50.550956 kubelet[2684]: I1101 02:41:50.550886 2684 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/552d1b77-edb7-4e78-b15b-ddf34ab43f14-whisker-ca-bundle\") pod \"552d1b77-edb7-4e78-b15b-ddf34ab43f14\" (UID: \"552d1b77-edb7-4e78-b15b-ddf34ab43f14\") " Nov 1 02:41:50.550956 kubelet[2684]: I1101 02:41:50.550949 2684 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgw8x\" (UniqueName: \"kubernetes.io/projected/552d1b77-edb7-4e78-b15b-ddf34ab43f14-kube-api-access-kgw8x\") pod \"552d1b77-edb7-4e78-b15b-ddf34ab43f14\" (UID: \"552d1b77-edb7-4e78-b15b-ddf34ab43f14\") " Nov 1 02:41:50.552279 kubelet[2684]: I1101 02:41:50.550989 2684 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/552d1b77-edb7-4e78-b15b-ddf34ab43f14-whisker-backend-key-pair\") pod \"552d1b77-edb7-4e78-b15b-ddf34ab43f14\" (UID: \"552d1b77-edb7-4e78-b15b-ddf34ab43f14\") " Nov 1 02:41:50.587692 systemd[1]: var-lib-kubelet-pods-552d1b77\x2dedb7\x2d4e78\x2db15b\x2dddf34ab43f14-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkgw8x.mount: Deactivated successfully. Nov 1 02:41:50.587869 systemd[1]: var-lib-kubelet-pods-552d1b77\x2dedb7\x2d4e78\x2db15b\x2dddf34ab43f14-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 1 02:41:50.592408 kubelet[2684]: I1101 02:41:50.590497 2684 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/552d1b77-edb7-4e78-b15b-ddf34ab43f14-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "552d1b77-edb7-4e78-b15b-ddf34ab43f14" (UID: "552d1b77-edb7-4e78-b15b-ddf34ab43f14"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 02:41:50.592408 kubelet[2684]: I1101 02:41:50.592346 2684 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/552d1b77-edb7-4e78-b15b-ddf34ab43f14-kube-api-access-kgw8x" (OuterVolumeSpecName: "kube-api-access-kgw8x") pod "552d1b77-edb7-4e78-b15b-ddf34ab43f14" (UID: "552d1b77-edb7-4e78-b15b-ddf34ab43f14"). InnerVolumeSpecName "kube-api-access-kgw8x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 02:41:50.595413 kubelet[2684]: I1101 02:41:50.589936 2684 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/552d1b77-edb7-4e78-b15b-ddf34ab43f14-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "552d1b77-edb7-4e78-b15b-ddf34ab43f14" (UID: "552d1b77-edb7-4e78-b15b-ddf34ab43f14"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 02:41:50.651269 kubelet[2684]: I1101 02:41:50.651224 2684 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/552d1b77-edb7-4e78-b15b-ddf34ab43f14-whisker-ca-bundle\") on node \"srv-liqqm.gb1.brightbox.com\" DevicePath \"\"" Nov 1 02:41:50.651269 kubelet[2684]: I1101 02:41:50.651265 2684 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kgw8x\" (UniqueName: \"kubernetes.io/projected/552d1b77-edb7-4e78-b15b-ddf34ab43f14-kube-api-access-kgw8x\") on node \"srv-liqqm.gb1.brightbox.com\" DevicePath \"\"" Nov 1 02:41:50.651269 kubelet[2684]: I1101 02:41:50.651283 2684 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/552d1b77-edb7-4e78-b15b-ddf34ab43f14-whisker-backend-key-pair\") on node \"srv-liqqm.gb1.brightbox.com\" DevicePath \"\"" Nov 1 02:41:50.857198 containerd[1506]: time="2025-11-01T02:41:50.856651448Z" level=info msg="StopPodSandbox for 
\"b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015\"" Nov 1 02:41:50.857555 containerd[1506]: time="2025-11-01T02:41:50.857522590Z" level=info msg="StopPodSandbox for \"1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23\"" Nov 1 02:41:50.969622 kubelet[2684]: I1101 02:41:50.962163 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-j6r7f" podStartSLOduration=3.608582117 podStartE2EDuration="26.950304007s" podCreationTimestamp="2025-11-01 02:41:24 +0000 UTC" firstStartedPulling="2025-11-01 02:41:25.38856925 +0000 UTC m=+27.836339472" lastFinishedPulling="2025-11-01 02:41:48.730291139 +0000 UTC m=+51.178061362" observedRunningTime="2025-11-01 02:41:50.319029788 +0000 UTC m=+52.766800032" watchObservedRunningTime="2025-11-01 02:41:50.950304007 +0000 UTC m=+53.398074232" Nov 1 02:41:51.033828 containerd[1506]: 2025-11-01 02:41:50.947 [INFO][4038] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" Nov 1 02:41:51.033828 containerd[1506]: 2025-11-01 02:41:50.948 [INFO][4038] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" iface="eth0" netns="/var/run/netns/cni-09d8c9e1-caac-7e6f-dd96-5f1208b1b6cb" Nov 1 02:41:51.033828 containerd[1506]: 2025-11-01 02:41:50.950 [INFO][4038] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" iface="eth0" netns="/var/run/netns/cni-09d8c9e1-caac-7e6f-dd96-5f1208b1b6cb" Nov 1 02:41:51.033828 containerd[1506]: 2025-11-01 02:41:50.956 [INFO][4038] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" iface="eth0" netns="/var/run/netns/cni-09d8c9e1-caac-7e6f-dd96-5f1208b1b6cb" Nov 1 02:41:51.033828 containerd[1506]: 2025-11-01 02:41:50.957 [INFO][4038] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" Nov 1 02:41:51.033828 containerd[1506]: 2025-11-01 02:41:50.957 [INFO][4038] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" Nov 1 02:41:51.033828 containerd[1506]: 2025-11-01 02:41:51.011 [INFO][4054] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" HandleID="k8s-pod-network.1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--bvnkc-eth0" Nov 1 02:41:51.033828 containerd[1506]: 2025-11-01 02:41:51.011 [INFO][4054] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 02:41:51.033828 containerd[1506]: 2025-11-01 02:41:51.011 [INFO][4054] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 02:41:51.033828 containerd[1506]: 2025-11-01 02:41:51.021 [WARNING][4054] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" HandleID="k8s-pod-network.1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--bvnkc-eth0" Nov 1 02:41:51.033828 containerd[1506]: 2025-11-01 02:41:51.021 [INFO][4054] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" HandleID="k8s-pod-network.1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--bvnkc-eth0" Nov 1 02:41:51.033828 containerd[1506]: 2025-11-01 02:41:51.024 [INFO][4054] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 02:41:51.033828 containerd[1506]: 2025-11-01 02:41:51.031 [INFO][4038] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" Nov 1 02:41:51.038289 containerd[1506]: time="2025-11-01T02:41:51.035862757Z" level=info msg="TearDown network for sandbox \"1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23\" successfully" Nov 1 02:41:51.038289 containerd[1506]: time="2025-11-01T02:41:51.035919064Z" level=info msg="StopPodSandbox for \"1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23\" returns successfully" Nov 1 02:41:51.039740 systemd[1]: run-netns-cni\x2d09d8c9e1\x2dcaac\x2d7e6f\x2ddd96\x2d5f1208b1b6cb.mount: Deactivated successfully. 
Nov 1 02:41:51.042578 containerd[1506]: time="2025-11-01T02:41:51.042400464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8dc58755-bvnkc,Uid:a1c04a34-b552-49b3-a9dc-198853e53df9,Namespace:calico-apiserver,Attempt:1,}" Nov 1 02:41:51.051948 containerd[1506]: 2025-11-01 02:41:50.962 [INFO][4042] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" Nov 1 02:41:51.051948 containerd[1506]: 2025-11-01 02:41:50.962 [INFO][4042] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" iface="eth0" netns="/var/run/netns/cni-be40e8e4-10eb-a90b-54e7-801406900d38" Nov 1 02:41:51.051948 containerd[1506]: 2025-11-01 02:41:50.966 [INFO][4042] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" iface="eth0" netns="/var/run/netns/cni-be40e8e4-10eb-a90b-54e7-801406900d38" Nov 1 02:41:51.051948 containerd[1506]: 2025-11-01 02:41:50.971 [INFO][4042] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" iface="eth0" netns="/var/run/netns/cni-be40e8e4-10eb-a90b-54e7-801406900d38" Nov 1 02:41:51.051948 containerd[1506]: 2025-11-01 02:41:50.973 [INFO][4042] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" Nov 1 02:41:51.051948 containerd[1506]: 2025-11-01 02:41:50.973 [INFO][4042] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" Nov 1 02:41:51.051948 containerd[1506]: 2025-11-01 02:41:51.014 [INFO][4060] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" HandleID="k8s-pod-network.b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" Workload="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--9qqz8-eth0" Nov 1 02:41:51.051948 containerd[1506]: 2025-11-01 02:41:51.014 [INFO][4060] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 02:41:51.051948 containerd[1506]: 2025-11-01 02:41:51.024 [INFO][4060] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 02:41:51.051948 containerd[1506]: 2025-11-01 02:41:51.043 [WARNING][4060] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" HandleID="k8s-pod-network.b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" Workload="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--9qqz8-eth0" Nov 1 02:41:51.051948 containerd[1506]: 2025-11-01 02:41:51.043 [INFO][4060] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" HandleID="k8s-pod-network.b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" Workload="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--9qqz8-eth0" Nov 1 02:41:51.051948 containerd[1506]: 2025-11-01 02:41:51.045 [INFO][4060] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 02:41:51.051948 containerd[1506]: 2025-11-01 02:41:51.049 [INFO][4042] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" Nov 1 02:41:51.056811 containerd[1506]: time="2025-11-01T02:41:51.052089526Z" level=info msg="TearDown network for sandbox \"b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015\" successfully" Nov 1 02:41:51.056811 containerd[1506]: time="2025-11-01T02:41:51.052121049Z" level=info msg="StopPodSandbox for \"b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015\" returns successfully" Nov 1 02:41:51.056811 containerd[1506]: time="2025-11-01T02:41:51.055676622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9qqz8,Uid:7a9a07dc-fa9f-46a5-a187-3cb9d24e0c6b,Namespace:kube-system,Attempt:1,}" Nov 1 02:41:51.058699 systemd[1]: run-netns-cni\x2dbe40e8e4\x2d10eb\x2da90b\x2d54e7\x2d801406900d38.mount: Deactivated successfully. Nov 1 02:41:51.314668 systemd[1]: Removed slice kubepods-besteffort-pod552d1b77_edb7_4e78_b15b_ddf34ab43f14.slice - libcontainer container kubepods-besteffort-pod552d1b77_edb7_4e78_b15b_ddf34ab43f14.slice. 
Nov 1 02:41:51.338792 systemd-networkd[1431]: cali786ed697162: Link UP
Nov 1 02:41:51.339730 systemd-networkd[1431]: cali786ed697162: Gained carrier
Nov 1 02:41:51.402288 containerd[1506]: 2025-11-01 02:41:51.139 [INFO][4077] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Nov 1 02:41:51.402288 containerd[1506]: 2025-11-01 02:41:51.163 [INFO][4077] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--9qqz8-eth0 coredns-66bc5c9577- kube-system 7a9a07dc-fa9f-46a5-a187-3cb9d24e0c6b 949 0 2025-11-01 02:41:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-liqqm.gb1.brightbox.com coredns-66bc5c9577-9qqz8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali786ed697162 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="ea0be5cb28ad14a28131b805c75206412bf1f4c0ce322de02b7c892ff1861bb8" Namespace="kube-system" Pod="coredns-66bc5c9577-9qqz8" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--9qqz8-"
Nov 1 02:41:51.402288 containerd[1506]: 2025-11-01 02:41:51.163 [INFO][4077] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ea0be5cb28ad14a28131b805c75206412bf1f4c0ce322de02b7c892ff1861bb8" Namespace="kube-system" Pod="coredns-66bc5c9577-9qqz8" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--9qqz8-eth0"
Nov 1 02:41:51.402288 containerd[1506]: 2025-11-01 02:41:51.213 [INFO][4095] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea0be5cb28ad14a28131b805c75206412bf1f4c0ce322de02b7c892ff1861bb8" HandleID="k8s-pod-network.ea0be5cb28ad14a28131b805c75206412bf1f4c0ce322de02b7c892ff1861bb8" Workload="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--9qqz8-eth0"
Nov 1 02:41:51.402288 containerd[1506]: 2025-11-01 02:41:51.213 [INFO][4095] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ea0be5cb28ad14a28131b805c75206412bf1f4c0ce322de02b7c892ff1861bb8" HandleID="k8s-pod-network.ea0be5cb28ad14a28131b805c75206412bf1f4c0ce322de02b7c892ff1861bb8" Workload="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--9qqz8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000101a30), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-liqqm.gb1.brightbox.com", "pod":"coredns-66bc5c9577-9qqz8", "timestamp":"2025-11-01 02:41:51.213471897 +0000 UTC"}, Hostname:"srv-liqqm.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 1 02:41:51.402288 containerd[1506]: 2025-11-01 02:41:51.214 [INFO][4095] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 02:41:51.402288 containerd[1506]: 2025-11-01 02:41:51.214 [INFO][4095] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 02:41:51.402288 containerd[1506]: 2025-11-01 02:41:51.214 [INFO][4095] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-liqqm.gb1.brightbox.com'
Nov 1 02:41:51.402288 containerd[1506]: 2025-11-01 02:41:51.225 [INFO][4095] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ea0be5cb28ad14a28131b805c75206412bf1f4c0ce322de02b7c892ff1861bb8" host="srv-liqqm.gb1.brightbox.com"
Nov 1 02:41:51.402288 containerd[1506]: 2025-11-01 02:41:51.244 [INFO][4095] ipam/ipam.go 394: Looking up existing affinities for host host="srv-liqqm.gb1.brightbox.com"
Nov 1 02:41:51.402288 containerd[1506]: 2025-11-01 02:41:51.251 [INFO][4095] ipam/ipam.go 511: Trying affinity for 192.168.84.0/26 host="srv-liqqm.gb1.brightbox.com"
Nov 1 02:41:51.402288 containerd[1506]: 2025-11-01 02:41:51.254 [INFO][4095] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.0/26 host="srv-liqqm.gb1.brightbox.com"
Nov 1 02:41:51.402288 containerd[1506]: 2025-11-01 02:41:51.258 [INFO][4095] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.84.0/26 host="srv-liqqm.gb1.brightbox.com"
Nov 1 02:41:51.402288 containerd[1506]: 2025-11-01 02:41:51.258 [INFO][4095] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.84.0/26 handle="k8s-pod-network.ea0be5cb28ad14a28131b805c75206412bf1f4c0ce322de02b7c892ff1861bb8" host="srv-liqqm.gb1.brightbox.com"
Nov 1 02:41:51.402288 containerd[1506]: 2025-11-01 02:41:51.260 [INFO][4095] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ea0be5cb28ad14a28131b805c75206412bf1f4c0ce322de02b7c892ff1861bb8
Nov 1 02:41:51.402288 containerd[1506]: 2025-11-01 02:41:51.267 [INFO][4095] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.84.0/26 handle="k8s-pod-network.ea0be5cb28ad14a28131b805c75206412bf1f4c0ce322de02b7c892ff1861bb8" host="srv-liqqm.gb1.brightbox.com"
Nov 1 02:41:51.402288 containerd[1506]: 2025-11-01 02:41:51.279 [INFO][4095] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.84.1/26] block=192.168.84.0/26 handle="k8s-pod-network.ea0be5cb28ad14a28131b805c75206412bf1f4c0ce322de02b7c892ff1861bb8" host="srv-liqqm.gb1.brightbox.com"
Nov 1 02:41:51.402288 containerd[1506]: 2025-11-01 02:41:51.279 [INFO][4095] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.84.1/26] handle="k8s-pod-network.ea0be5cb28ad14a28131b805c75206412bf1f4c0ce322de02b7c892ff1861bb8" host="srv-liqqm.gb1.brightbox.com"
Nov 1 02:41:51.402288 containerd[1506]: 2025-11-01 02:41:51.279 [INFO][4095] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 02:41:51.402288 containerd[1506]: 2025-11-01 02:41:51.279 [INFO][4095] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.84.1/26] IPv6=[] ContainerID="ea0be5cb28ad14a28131b805c75206412bf1f4c0ce322de02b7c892ff1861bb8" HandleID="k8s-pod-network.ea0be5cb28ad14a28131b805c75206412bf1f4c0ce322de02b7c892ff1861bb8" Workload="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--9qqz8-eth0"
Nov 1 02:41:51.409584 containerd[1506]: 2025-11-01 02:41:51.291 [INFO][4077] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea0be5cb28ad14a28131b805c75206412bf1f4c0ce322de02b7c892ff1861bb8" Namespace="kube-system" Pod="coredns-66bc5c9577-9qqz8" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--9qqz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--9qqz8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7a9a07dc-fa9f-46a5-a187-3cb9d24e0c6b", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 2, 41, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-liqqm.gb1.brightbox.com", ContainerID:"", Pod:"coredns-66bc5c9577-9qqz8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali786ed697162", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 1 02:41:51.409584 containerd[1506]: 2025-11-01 02:41:51.291 [INFO][4077] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.1/32] ContainerID="ea0be5cb28ad14a28131b805c75206412bf1f4c0ce322de02b7c892ff1861bb8" Namespace="kube-system" Pod="coredns-66bc5c9577-9qqz8" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--9qqz8-eth0"
Nov 1 02:41:51.409584 containerd[1506]: 2025-11-01 02:41:51.293 [INFO][4077] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali786ed697162 ContainerID="ea0be5cb28ad14a28131b805c75206412bf1f4c0ce322de02b7c892ff1861bb8" Namespace="kube-system" Pod="coredns-66bc5c9577-9qqz8" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--9qqz8-eth0"
Nov 1 02:41:51.409584 containerd[1506]: 2025-11-01 02:41:51.340 [INFO][4077] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea0be5cb28ad14a28131b805c75206412bf1f4c0ce322de02b7c892ff1861bb8" Namespace="kube-system" Pod="coredns-66bc5c9577-9qqz8" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--9qqz8-eth0"
Nov 1 02:41:51.409584 containerd[1506]: 2025-11-01 02:41:51.341 [INFO][4077] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ea0be5cb28ad14a28131b805c75206412bf1f4c0ce322de02b7c892ff1861bb8" Namespace="kube-system" Pod="coredns-66bc5c9577-9qqz8" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--9qqz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--9qqz8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7a9a07dc-fa9f-46a5-a187-3cb9d24e0c6b", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 2, 41, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-liqqm.gb1.brightbox.com", ContainerID:"ea0be5cb28ad14a28131b805c75206412bf1f4c0ce322de02b7c892ff1861bb8", Pod:"coredns-66bc5c9577-9qqz8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali786ed697162", MAC:"62:54:44:a3:7c:a6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 1 02:41:51.410091 containerd[1506]: 2025-11-01 02:41:51.392 [INFO][4077] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ea0be5cb28ad14a28131b805c75206412bf1f4c0ce322de02b7c892ff1861bb8" Namespace="kube-system" Pod="coredns-66bc5c9577-9qqz8" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--9qqz8-eth0"
Nov 1 02:41:51.507285 systemd[1]: Created slice kubepods-besteffort-pod0e9edcbd_6bb9_40da_89e3_329c8ee490a3.slice - libcontainer container kubepods-besteffort-pod0e9edcbd_6bb9_40da_89e3_329c8ee490a3.slice.
Nov 1 02:41:51.512807 containerd[1506]: time="2025-11-01T02:41:51.511846592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 02:41:51.512807 containerd[1506]: time="2025-11-01T02:41:51.512397316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 02:41:51.512807 containerd[1506]: time="2025-11-01T02:41:51.512529244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 02:41:51.515696 containerd[1506]: time="2025-11-01T02:41:51.515304278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 02:41:51.596621 systemd-networkd[1431]: cali1de657e6e56: Link UP
Nov 1 02:41:51.614347 systemd-networkd[1431]: cali1de657e6e56: Gained carrier
Nov 1 02:41:51.657677 systemd[1]: Started cri-containerd-ea0be5cb28ad14a28131b805c75206412bf1f4c0ce322de02b7c892ff1861bb8.scope - libcontainer container ea0be5cb28ad14a28131b805c75206412bf1f4c0ce322de02b7c892ff1861bb8.
Nov 1 02:41:51.666657 kubelet[2684]: I1101 02:41:51.666577 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e9edcbd-6bb9-40da-89e3-329c8ee490a3-whisker-ca-bundle\") pod \"whisker-564f484f47-dnhd7\" (UID: \"0e9edcbd-6bb9-40da-89e3-329c8ee490a3\") " pod="calico-system/whisker-564f484f47-dnhd7"
Nov 1 02:41:51.672060 kubelet[2684]: I1101 02:41:51.667289 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdmcm\" (UniqueName: \"kubernetes.io/projected/0e9edcbd-6bb9-40da-89e3-329c8ee490a3-kube-api-access-tdmcm\") pod \"whisker-564f484f47-dnhd7\" (UID: \"0e9edcbd-6bb9-40da-89e3-329c8ee490a3\") " pod="calico-system/whisker-564f484f47-dnhd7"
Nov 1 02:41:51.672060 kubelet[2684]: I1101 02:41:51.667370 2684 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0e9edcbd-6bb9-40da-89e3-329c8ee490a3-whisker-backend-key-pair\") pod \"whisker-564f484f47-dnhd7\" (UID: \"0e9edcbd-6bb9-40da-89e3-329c8ee490a3\") " pod="calico-system/whisker-564f484f47-dnhd7"
Nov 1 02:41:51.716981 containerd[1506]: 2025-11-01 02:41:51.139 [INFO][4068] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Nov 1 02:41:51.716981 containerd[1506]: 2025-11-01 02:41:51.159 [INFO][4068] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--bvnkc-eth0 calico-apiserver-6f8dc58755- calico-apiserver a1c04a34-b552-49b3-a9dc-198853e53df9 948 0 2025-11-01 02:41:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f8dc58755 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-liqqm.gb1.brightbox.com calico-apiserver-6f8dc58755-bvnkc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1de657e6e56 [] [] }} ContainerID="5a20732c7d474673081804fdf64034f452840e16cd38b85ab29a661767e8f1f5" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dc58755-bvnkc" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--bvnkc-"
Nov 1 02:41:51.716981 containerd[1506]: 2025-11-01 02:41:51.159 [INFO][4068] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5a20732c7d474673081804fdf64034f452840e16cd38b85ab29a661767e8f1f5" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dc58755-bvnkc" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--bvnkc-eth0"
Nov 1 02:41:51.716981 containerd[1506]: 2025-11-01 02:41:51.213 [INFO][4094] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5a20732c7d474673081804fdf64034f452840e16cd38b85ab29a661767e8f1f5" HandleID="k8s-pod-network.5a20732c7d474673081804fdf64034f452840e16cd38b85ab29a661767e8f1f5" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--bvnkc-eth0"
Nov 1 02:41:51.716981 containerd[1506]: 2025-11-01 02:41:51.214 [INFO][4094] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5a20732c7d474673081804fdf64034f452840e16cd38b85ab29a661767e8f1f5" HandleID="k8s-pod-network.5a20732c7d474673081804fdf64034f452840e16cd38b85ab29a661767e8f1f5" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--bvnkc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-liqqm.gb1.brightbox.com", "pod":"calico-apiserver-6f8dc58755-bvnkc", "timestamp":"2025-11-01 02:41:51.213950117 +0000 UTC"}, Hostname:"srv-liqqm.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 1 02:41:51.716981 containerd[1506]: 2025-11-01 02:41:51.214 [INFO][4094] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 02:41:51.716981 containerd[1506]: 2025-11-01 02:41:51.281 [INFO][4094] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 02:41:51.716981 containerd[1506]: 2025-11-01 02:41:51.282 [INFO][4094] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-liqqm.gb1.brightbox.com'
Nov 1 02:41:51.716981 containerd[1506]: 2025-11-01 02:41:51.327 [INFO][4094] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5a20732c7d474673081804fdf64034f452840e16cd38b85ab29a661767e8f1f5" host="srv-liqqm.gb1.brightbox.com"
Nov 1 02:41:51.716981 containerd[1506]: 2025-11-01 02:41:51.383 [INFO][4094] ipam/ipam.go 394: Looking up existing affinities for host host="srv-liqqm.gb1.brightbox.com"
Nov 1 02:41:51.716981 containerd[1506]: 2025-11-01 02:41:51.417 [INFO][4094] ipam/ipam.go 511: Trying affinity for 192.168.84.0/26 host="srv-liqqm.gb1.brightbox.com"
Nov 1 02:41:51.716981 containerd[1506]: 2025-11-01 02:41:51.429 [INFO][4094] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.0/26 host="srv-liqqm.gb1.brightbox.com"
Nov 1 02:41:51.716981 containerd[1506]: 2025-11-01 02:41:51.442 [INFO][4094] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.84.0/26 host="srv-liqqm.gb1.brightbox.com"
Nov 1 02:41:51.716981 containerd[1506]: 2025-11-01 02:41:51.442 [INFO][4094] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.84.0/26 handle="k8s-pod-network.5a20732c7d474673081804fdf64034f452840e16cd38b85ab29a661767e8f1f5" host="srv-liqqm.gb1.brightbox.com"
Nov 1 02:41:51.716981 containerd[1506]: 2025-11-01 02:41:51.450 [INFO][4094] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5a20732c7d474673081804fdf64034f452840e16cd38b85ab29a661767e8f1f5
Nov 1 02:41:51.716981 containerd[1506]: 2025-11-01 02:41:51.463 [INFO][4094] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.84.0/26 handle="k8s-pod-network.5a20732c7d474673081804fdf64034f452840e16cd38b85ab29a661767e8f1f5" host="srv-liqqm.gb1.brightbox.com"
Nov 1 02:41:51.716981 containerd[1506]: 2025-11-01 02:41:51.548 [INFO][4094] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.84.2/26] block=192.168.84.0/26 handle="k8s-pod-network.5a20732c7d474673081804fdf64034f452840e16cd38b85ab29a661767e8f1f5" host="srv-liqqm.gb1.brightbox.com"
Nov 1 02:41:51.716981 containerd[1506]: 2025-11-01 02:41:51.548 [INFO][4094] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.84.2/26] handle="k8s-pod-network.5a20732c7d474673081804fdf64034f452840e16cd38b85ab29a661767e8f1f5" host="srv-liqqm.gb1.brightbox.com"
Nov 1 02:41:51.716981 containerd[1506]: 2025-11-01 02:41:51.548 [INFO][4094] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 02:41:51.716981 containerd[1506]: 2025-11-01 02:41:51.548 [INFO][4094] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.84.2/26] IPv6=[] ContainerID="5a20732c7d474673081804fdf64034f452840e16cd38b85ab29a661767e8f1f5" HandleID="k8s-pod-network.5a20732c7d474673081804fdf64034f452840e16cd38b85ab29a661767e8f1f5" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--bvnkc-eth0"
Nov 1 02:41:51.720164 containerd[1506]: 2025-11-01 02:41:51.560 [INFO][4068] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5a20732c7d474673081804fdf64034f452840e16cd38b85ab29a661767e8f1f5" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dc58755-bvnkc" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--bvnkc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--bvnkc-eth0", GenerateName:"calico-apiserver-6f8dc58755-", Namespace:"calico-apiserver", SelfLink:"", UID:"a1c04a34-b552-49b3-a9dc-198853e53df9", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 2, 41, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8dc58755", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-liqqm.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-6f8dc58755-bvnkc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.84.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1de657e6e56", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 1 02:41:51.720164 containerd[1506]: 2025-11-01 02:41:51.562 [INFO][4068] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.2/32] ContainerID="5a20732c7d474673081804fdf64034f452840e16cd38b85ab29a661767e8f1f5" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dc58755-bvnkc" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--bvnkc-eth0"
Nov 1 02:41:51.720164 containerd[1506]: 2025-11-01 02:41:51.563 [INFO][4068] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1de657e6e56 ContainerID="5a20732c7d474673081804fdf64034f452840e16cd38b85ab29a661767e8f1f5" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dc58755-bvnkc" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--bvnkc-eth0"
Nov 1 02:41:51.720164 containerd[1506]: 2025-11-01 02:41:51.615 [INFO][4068] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5a20732c7d474673081804fdf64034f452840e16cd38b85ab29a661767e8f1f5" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dc58755-bvnkc" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--bvnkc-eth0"
Nov 1 02:41:51.720164 containerd[1506]: 2025-11-01 02:41:51.615 [INFO][4068] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5a20732c7d474673081804fdf64034f452840e16cd38b85ab29a661767e8f1f5" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dc58755-bvnkc" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--bvnkc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--bvnkc-eth0", GenerateName:"calico-apiserver-6f8dc58755-", Namespace:"calico-apiserver", SelfLink:"", UID:"a1c04a34-b552-49b3-a9dc-198853e53df9", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 2, 41, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8dc58755", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-liqqm.gb1.brightbox.com", ContainerID:"5a20732c7d474673081804fdf64034f452840e16cd38b85ab29a661767e8f1f5", Pod:"calico-apiserver-6f8dc58755-bvnkc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.84.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1de657e6e56", MAC:"aa:6c:83:31:9d:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 1 02:41:51.720164 containerd[1506]: 2025-11-01 02:41:51.710 [INFO][4068] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5a20732c7d474673081804fdf64034f452840e16cd38b85ab29a661767e8f1f5" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dc58755-bvnkc" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--bvnkc-eth0"
Nov 1 02:41:51.823689 containerd[1506]: time="2025-11-01T02:41:51.823529062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 02:41:51.823910 containerd[1506]: time="2025-11-01T02:41:51.823650130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 02:41:51.823910 containerd[1506]: time="2025-11-01T02:41:51.823671299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 02:41:51.823910 containerd[1506]: time="2025-11-01T02:41:51.823805915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 02:41:51.881905 kubelet[2684]: I1101 02:41:51.881481 2684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="552d1b77-edb7-4e78-b15b-ddf34ab43f14" path="/var/lib/kubelet/pods/552d1b77-edb7-4e78-b15b-ddf34ab43f14/volumes"
Nov 1 02:41:51.930714 systemd[1]: Started cri-containerd-5a20732c7d474673081804fdf64034f452840e16cd38b85ab29a661767e8f1f5.scope - libcontainer container 5a20732c7d474673081804fdf64034f452840e16cd38b85ab29a661767e8f1f5.
Nov 1 02:41:51.969099 containerd[1506]: time="2025-11-01T02:41:51.969016790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9qqz8,Uid:7a9a07dc-fa9f-46a5-a187-3cb9d24e0c6b,Namespace:kube-system,Attempt:1,} returns sandbox id \"ea0be5cb28ad14a28131b805c75206412bf1f4c0ce322de02b7c892ff1861bb8\""
Nov 1 02:41:51.981979 containerd[1506]: time="2025-11-01T02:41:51.981877221Z" level=info msg="CreateContainer within sandbox \"ea0be5cb28ad14a28131b805c75206412bf1f4c0ce322de02b7c892ff1861bb8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 1 02:41:52.020739 containerd[1506]: time="2025-11-01T02:41:52.020679291Z" level=info msg="CreateContainer within sandbox \"ea0be5cb28ad14a28131b805c75206412bf1f4c0ce322de02b7c892ff1861bb8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"043a46faac2e432bf5c1c63eeca7f5c7e89efb5c8f2b309ba15aa980f7224f32\""
Nov 1 02:41:52.022039 containerd[1506]: time="2025-11-01T02:41:52.021989329Z" level=info msg="StartContainer for \"043a46faac2e432bf5c1c63eeca7f5c7e89efb5c8f2b309ba15aa980f7224f32\""
Nov 1 02:41:52.098800 systemd[1]: Started cri-containerd-043a46faac2e432bf5c1c63eeca7f5c7e89efb5c8f2b309ba15aa980f7224f32.scope - libcontainer container 043a46faac2e432bf5c1c63eeca7f5c7e89efb5c8f2b309ba15aa980f7224f32.
Nov 1 02:41:52.121560 containerd[1506]: time="2025-11-01T02:41:52.120110280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-564f484f47-dnhd7,Uid:0e9edcbd-6bb9-40da-89e3-329c8ee490a3,Namespace:calico-system,Attempt:0,}"
Nov 1 02:41:52.204076 containerd[1506]: time="2025-11-01T02:41:52.203926436Z" level=info msg="StartContainer for \"043a46faac2e432bf5c1c63eeca7f5c7e89efb5c8f2b309ba15aa980f7224f32\" returns successfully"
Nov 1 02:41:52.229467 containerd[1506]: time="2025-11-01T02:41:52.227097163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8dc58755-bvnkc,Uid:a1c04a34-b552-49b3-a9dc-198853e53df9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5a20732c7d474673081804fdf64034f452840e16cd38b85ab29a661767e8f1f5\""
Nov 1 02:41:52.244469 containerd[1506]: time="2025-11-01T02:41:52.243767752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 1 02:41:52.324557 kubelet[2684]: I1101 02:41:52.324207 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9qqz8" podStartSLOduration=49.324181964 podStartE2EDuration="49.324181964s" podCreationTimestamp="2025-11-01 02:41:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 02:41:52.321686876 +0000 UTC m=+54.769457113" watchObservedRunningTime="2025-11-01 02:41:52.324181964 +0000 UTC m=+54.771952194"
Nov 1 02:41:52.485525 systemd-networkd[1431]: calia297376070a: Link UP
Nov 1 02:41:52.493151 systemd-networkd[1431]: calia297376070a: Gained carrier
Nov 1 02:41:52.574499 containerd[1506]: 2025-11-01 02:41:52.254 [INFO][4302] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Nov 1 02:41:52.574499 containerd[1506]: 2025-11-01 02:41:52.284 [INFO][4302] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--liqqm.gb1.brightbox.com-k8s-whisker--564f484f47--dnhd7-eth0 whisker-564f484f47- calico-system 0e9edcbd-6bb9-40da-89e3-329c8ee490a3 968 0 2025-11-01 02:41:51 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:564f484f47 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-liqqm.gb1.brightbox.com whisker-564f484f47-dnhd7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia297376070a [] [] }} ContainerID="c4187146bcb0c9b0adb15373658b6bff7aa7df21dce59b30a94139a75dc04d09" Namespace="calico-system" Pod="whisker-564f484f47-dnhd7" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-whisker--564f484f47--dnhd7-"
Nov 1 02:41:52.574499 containerd[1506]: 2025-11-01 02:41:52.285 [INFO][4302] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c4187146bcb0c9b0adb15373658b6bff7aa7df21dce59b30a94139a75dc04d09" Namespace="calico-system" Pod="whisker-564f484f47-dnhd7" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-whisker--564f484f47--dnhd7-eth0"
Nov 1 02:41:52.574499 containerd[1506]: 2025-11-01 02:41:52.396 [INFO][4339] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4187146bcb0c9b0adb15373658b6bff7aa7df21dce59b30a94139a75dc04d09" HandleID="k8s-pod-network.c4187146bcb0c9b0adb15373658b6bff7aa7df21dce59b30a94139a75dc04d09" Workload="srv--liqqm.gb1.brightbox.com-k8s-whisker--564f484f47--dnhd7-eth0"
Nov 1 02:41:52.574499 containerd[1506]: 2025-11-01 02:41:52.396 [INFO][4339] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c4187146bcb0c9b0adb15373658b6bff7aa7df21dce59b30a94139a75dc04d09" HandleID="k8s-pod-network.c4187146bcb0c9b0adb15373658b6bff7aa7df21dce59b30a94139a75dc04d09" Workload="srv--liqqm.gb1.brightbox.com-k8s-whisker--564f484f47--dnhd7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003338d0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-liqqm.gb1.brightbox.com", "pod":"whisker-564f484f47-dnhd7", "timestamp":"2025-11-01 02:41:52.396020061 +0000 UTC"}, Hostname:"srv-liqqm.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 1 02:41:52.574499 containerd[1506]: 2025-11-01 02:41:52.396 [INFO][4339] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 02:41:52.574499 containerd[1506]: 2025-11-01 02:41:52.396 [INFO][4339] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 02:41:52.574499 containerd[1506]: 2025-11-01 02:41:52.396 [INFO][4339] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-liqqm.gb1.brightbox.com'
Nov 1 02:41:52.574499 containerd[1506]: 2025-11-01 02:41:52.414 [INFO][4339] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c4187146bcb0c9b0adb15373658b6bff7aa7df21dce59b30a94139a75dc04d09" host="srv-liqqm.gb1.brightbox.com"
Nov 1 02:41:52.574499 containerd[1506]: 2025-11-01 02:41:52.422 [INFO][4339] ipam/ipam.go 394: Looking up existing affinities for host host="srv-liqqm.gb1.brightbox.com"
Nov 1 02:41:52.574499 containerd[1506]: 2025-11-01 02:41:52.433 [INFO][4339] ipam/ipam.go 511: Trying affinity for 192.168.84.0/26 host="srv-liqqm.gb1.brightbox.com"
Nov 1 02:41:52.574499 containerd[1506]: 2025-11-01 02:41:52.438 [INFO][4339] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.0/26 host="srv-liqqm.gb1.brightbox.com"
Nov 1 02:41:52.574499 containerd[1506]: 2025-11-01 02:41:52.442 [INFO][4339] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.84.0/26 host="srv-liqqm.gb1.brightbox.com"
Nov 1 02:41:52.574499 containerd[1506]: 2025-11-01 02:41:52.442 [INFO][4339] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.84.0/26 handle="k8s-pod-network.c4187146bcb0c9b0adb15373658b6bff7aa7df21dce59b30a94139a75dc04d09" host="srv-liqqm.gb1.brightbox.com"
Nov 1 02:41:52.574499 containerd[1506]: 2025-11-01 02:41:52.445 [INFO][4339] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c4187146bcb0c9b0adb15373658b6bff7aa7df21dce59b30a94139a75dc04d09
Nov 1 02:41:52.574499 containerd[1506]: 2025-11-01 02:41:52.459 [INFO][4339] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.84.0/26 handle="k8s-pod-network.c4187146bcb0c9b0adb15373658b6bff7aa7df21dce59b30a94139a75dc04d09" host="srv-liqqm.gb1.brightbox.com"
Nov 1 02:41:52.574499 containerd[1506]: 2025-11-01 02:41:52.472 [INFO][4339] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.84.3/26] block=192.168.84.0/26 handle="k8s-pod-network.c4187146bcb0c9b0adb15373658b6bff7aa7df21dce59b30a94139a75dc04d09" host="srv-liqqm.gb1.brightbox.com"
Nov 1 02:41:52.574499 containerd[1506]: 2025-11-01 02:41:52.472 [INFO][4339] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.84.3/26] handle="k8s-pod-network.c4187146bcb0c9b0adb15373658b6bff7aa7df21dce59b30a94139a75dc04d09" host="srv-liqqm.gb1.brightbox.com"
Nov 1 02:41:52.574499 containerd[1506]: 2025-11-01 02:41:52.472 [INFO][4339] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 02:41:52.574499 containerd[1506]: 2025-11-01 02:41:52.472 [INFO][4339] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.84.3/26] IPv6=[] ContainerID="c4187146bcb0c9b0adb15373658b6bff7aa7df21dce59b30a94139a75dc04d09" HandleID="k8s-pod-network.c4187146bcb0c9b0adb15373658b6bff7aa7df21dce59b30a94139a75dc04d09" Workload="srv--liqqm.gb1.brightbox.com-k8s-whisker--564f484f47--dnhd7-eth0" Nov 1 02:41:52.586324 containerd[1506]: 2025-11-01 02:41:52.476 [INFO][4302] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c4187146bcb0c9b0adb15373658b6bff7aa7df21dce59b30a94139a75dc04d09" Namespace="calico-system" Pod="whisker-564f484f47-dnhd7" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-whisker--564f484f47--dnhd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--liqqm.gb1.brightbox.com-k8s-whisker--564f484f47--dnhd7-eth0", GenerateName:"whisker-564f484f47-", Namespace:"calico-system", SelfLink:"", UID:"0e9edcbd-6bb9-40da-89e3-329c8ee490a3", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 2, 41, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"564f484f47", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-liqqm.gb1.brightbox.com", ContainerID:"", Pod:"whisker-564f484f47-dnhd7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.84.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"calia297376070a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 02:41:52.586324 containerd[1506]: 2025-11-01 02:41:52.477 [INFO][4302] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.3/32] ContainerID="c4187146bcb0c9b0adb15373658b6bff7aa7df21dce59b30a94139a75dc04d09" Namespace="calico-system" Pod="whisker-564f484f47-dnhd7" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-whisker--564f484f47--dnhd7-eth0" Nov 1 02:41:52.586324 containerd[1506]: 2025-11-01 02:41:52.477 [INFO][4302] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia297376070a ContainerID="c4187146bcb0c9b0adb15373658b6bff7aa7df21dce59b30a94139a75dc04d09" Namespace="calico-system" Pod="whisker-564f484f47-dnhd7" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-whisker--564f484f47--dnhd7-eth0" Nov 1 02:41:52.586324 containerd[1506]: 2025-11-01 02:41:52.497 [INFO][4302] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c4187146bcb0c9b0adb15373658b6bff7aa7df21dce59b30a94139a75dc04d09" Namespace="calico-system" Pod="whisker-564f484f47-dnhd7" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-whisker--564f484f47--dnhd7-eth0" Nov 1 02:41:52.586324 containerd[1506]: 2025-11-01 02:41:52.500 [INFO][4302] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c4187146bcb0c9b0adb15373658b6bff7aa7df21dce59b30a94139a75dc04d09" Namespace="calico-system" Pod="whisker-564f484f47-dnhd7" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-whisker--564f484f47--dnhd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--liqqm.gb1.brightbox.com-k8s-whisker--564f484f47--dnhd7-eth0", GenerateName:"whisker-564f484f47-", Namespace:"calico-system", SelfLink:"", 
UID:"0e9edcbd-6bb9-40da-89e3-329c8ee490a3", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 2, 41, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"564f484f47", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-liqqm.gb1.brightbox.com", ContainerID:"c4187146bcb0c9b0adb15373658b6bff7aa7df21dce59b30a94139a75dc04d09", Pod:"whisker-564f484f47-dnhd7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.84.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia297376070a", MAC:"e2:87:7b:ae:72:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 02:41:52.586324 containerd[1506]: 2025-11-01 02:41:52.544 [INFO][4302] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c4187146bcb0c9b0adb15373658b6bff7aa7df21dce59b30a94139a75dc04d09" Namespace="calico-system" Pod="whisker-564f484f47-dnhd7" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-whisker--564f484f47--dnhd7-eth0" Nov 1 02:41:52.605865 containerd[1506]: time="2025-11-01T02:41:52.605563426Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:41:52.621904 containerd[1506]: time="2025-11-01T02:41:52.611069857Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 02:41:52.621904 containerd[1506]: time="2025-11-01T02:41:52.620558082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 02:41:52.633417 kubelet[2684]: E1101 02:41:52.628381 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 02:41:52.636320 kubelet[2684]: E1101 02:41:52.635858 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 02:41:52.674580 kubelet[2684]: E1101 02:41:52.666133 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6f8dc58755-bvnkc_calico-apiserver(a1c04a34-b552-49b3-a9dc-198853e53df9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 02:41:52.676783 kubelet[2684]: E1101 02:41:52.676192 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8dc58755-bvnkc" podUID="a1c04a34-b552-49b3-a9dc-198853e53df9" Nov 1 02:41:52.705561 containerd[1506]: time="2025-11-01T02:41:52.702661506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 02:41:52.705561 containerd[1506]: time="2025-11-01T02:41:52.702833355Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 02:41:52.705561 containerd[1506]: time="2025-11-01T02:41:52.702860669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:41:52.707220 containerd[1506]: time="2025-11-01T02:41:52.703053830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:41:52.781727 systemd[1]: Started cri-containerd-c4187146bcb0c9b0adb15373658b6bff7aa7df21dce59b30a94139a75dc04d09.scope - libcontainer container c4187146bcb0c9b0adb15373658b6bff7aa7df21dce59b30a94139a75dc04d09. 
Nov 1 02:41:52.812825 systemd-networkd[1431]: cali1de657e6e56: Gained IPv6LL Nov 1 02:41:52.858495 containerd[1506]: time="2025-11-01T02:41:52.858423033Z" level=info msg="StopPodSandbox for \"faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38\"" Nov 1 02:41:53.062680 systemd-networkd[1431]: cali786ed697162: Gained IPv6LL Nov 1 02:41:53.069325 containerd[1506]: 2025-11-01 02:41:52.965 [INFO][4422] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38" Nov 1 02:41:53.069325 containerd[1506]: 2025-11-01 02:41:52.965 [INFO][4422] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38" iface="eth0" netns="/var/run/netns/cni-b125c598-a2b2-d170-fecc-3efa9af97223" Nov 1 02:41:53.069325 containerd[1506]: 2025-11-01 02:41:52.966 [INFO][4422] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38" iface="eth0" netns="/var/run/netns/cni-b125c598-a2b2-d170-fecc-3efa9af97223" Nov 1 02:41:53.069325 containerd[1506]: 2025-11-01 02:41:52.966 [INFO][4422] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38" iface="eth0" netns="/var/run/netns/cni-b125c598-a2b2-d170-fecc-3efa9af97223" Nov 1 02:41:53.069325 containerd[1506]: 2025-11-01 02:41:52.966 [INFO][4422] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38" Nov 1 02:41:53.069325 containerd[1506]: 2025-11-01 02:41:52.966 [INFO][4422] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38" Nov 1 02:41:53.069325 containerd[1506]: 2025-11-01 02:41:53.039 [INFO][4430] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38" HandleID="k8s-pod-network.faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--gnqxq-eth0" Nov 1 02:41:53.069325 containerd[1506]: 2025-11-01 02:41:53.040 [INFO][4430] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 02:41:53.069325 containerd[1506]: 2025-11-01 02:41:53.040 [INFO][4430] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 02:41:53.069325 containerd[1506]: 2025-11-01 02:41:53.056 [WARNING][4430] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38" HandleID="k8s-pod-network.faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--gnqxq-eth0" Nov 1 02:41:53.069325 containerd[1506]: 2025-11-01 02:41:53.056 [INFO][4430] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38" HandleID="k8s-pod-network.faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--gnqxq-eth0" Nov 1 02:41:53.069325 containerd[1506]: 2025-11-01 02:41:53.059 [INFO][4430] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 02:41:53.069325 containerd[1506]: 2025-11-01 02:41:53.066 [INFO][4422] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38" Nov 1 02:41:53.074060 containerd[1506]: time="2025-11-01T02:41:53.071078953Z" level=info msg="TearDown network for sandbox \"faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38\" successfully" Nov 1 02:41:53.074060 containerd[1506]: time="2025-11-01T02:41:53.071132027Z" level=info msg="StopPodSandbox for \"faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38\" returns successfully" Nov 1 02:41:53.077487 systemd[1]: run-netns-cni\x2db125c598\x2da2b2\x2dd170\x2dfecc\x2d3efa9af97223.mount: Deactivated successfully. 
Nov 1 02:41:53.083040 containerd[1506]: time="2025-11-01T02:41:53.081278750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8dc58755-gnqxq,Uid:b426dedc-f58e-4d16-a987-3056f24fa4d7,Namespace:calico-apiserver,Attempt:1,}" Nov 1 02:41:53.153067 containerd[1506]: time="2025-11-01T02:41:53.152971562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-564f484f47-dnhd7,Uid:0e9edcbd-6bb9-40da-89e3-329c8ee490a3,Namespace:calico-system,Attempt:0,} returns sandbox id \"c4187146bcb0c9b0adb15373658b6bff7aa7df21dce59b30a94139a75dc04d09\"" Nov 1 02:41:53.158046 containerd[1506]: time="2025-11-01T02:41:53.157964348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 02:41:53.312509 kubelet[2684]: E1101 02:41:53.312400 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8dc58755-bvnkc" podUID="a1c04a34-b552-49b3-a9dc-198853e53df9" Nov 1 02:41:53.435822 systemd-networkd[1431]: cali0373ae4e081: Link UP Nov 1 02:41:53.437488 systemd-networkd[1431]: cali0373ae4e081: Gained carrier Nov 1 02:41:53.470684 containerd[1506]: 2025-11-01 02:41:53.188 [INFO][4444] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 02:41:53.470684 containerd[1506]: 2025-11-01 02:41:53.215 [INFO][4444] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--gnqxq-eth0 calico-apiserver-6f8dc58755- calico-apiserver b426dedc-f58e-4d16-a987-3056f24fa4d7 
987 0 2025-11-01 02:41:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f8dc58755 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-liqqm.gb1.brightbox.com calico-apiserver-6f8dc58755-gnqxq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0373ae4e081 [] [] }} ContainerID="74a2f64b90b9aebfa74387009e854efa188de9ef47eb8d048c7e0b4b38ff7e1e" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dc58755-gnqxq" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--gnqxq-" Nov 1 02:41:53.470684 containerd[1506]: 2025-11-01 02:41:53.215 [INFO][4444] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="74a2f64b90b9aebfa74387009e854efa188de9ef47eb8d048c7e0b4b38ff7e1e" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dc58755-gnqxq" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--gnqxq-eth0" Nov 1 02:41:53.470684 containerd[1506]: 2025-11-01 02:41:53.313 [INFO][4455] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74a2f64b90b9aebfa74387009e854efa188de9ef47eb8d048c7e0b4b38ff7e1e" HandleID="k8s-pod-network.74a2f64b90b9aebfa74387009e854efa188de9ef47eb8d048c7e0b4b38ff7e1e" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--gnqxq-eth0" Nov 1 02:41:53.470684 containerd[1506]: 2025-11-01 02:41:53.314 [INFO][4455] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="74a2f64b90b9aebfa74387009e854efa188de9ef47eb8d048c7e0b4b38ff7e1e" HandleID="k8s-pod-network.74a2f64b90b9aebfa74387009e854efa188de9ef47eb8d048c7e0b4b38ff7e1e" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--gnqxq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000306160), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-liqqm.gb1.brightbox.com", "pod":"calico-apiserver-6f8dc58755-gnqxq", "timestamp":"2025-11-01 02:41:53.313358266 +0000 UTC"}, Hostname:"srv-liqqm.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 02:41:53.470684 containerd[1506]: 2025-11-01 02:41:53.314 [INFO][4455] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 02:41:53.470684 containerd[1506]: 2025-11-01 02:41:53.314 [INFO][4455] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 02:41:53.470684 containerd[1506]: 2025-11-01 02:41:53.314 [INFO][4455] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-liqqm.gb1.brightbox.com' Nov 1 02:41:53.470684 containerd[1506]: 2025-11-01 02:41:53.333 [INFO][4455] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.74a2f64b90b9aebfa74387009e854efa188de9ef47eb8d048c7e0b4b38ff7e1e" host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:53.470684 containerd[1506]: 2025-11-01 02:41:53.351 [INFO][4455] ipam/ipam.go 394: Looking up existing affinities for host host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:53.470684 containerd[1506]: 2025-11-01 02:41:53.371 [INFO][4455] ipam/ipam.go 511: Trying affinity for 192.168.84.0/26 host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:53.470684 containerd[1506]: 2025-11-01 02:41:53.378 [INFO][4455] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.0/26 host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:53.470684 containerd[1506]: 2025-11-01 02:41:53.385 [INFO][4455] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.84.0/26 host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:53.470684 containerd[1506]: 2025-11-01 02:41:53.385 [INFO][4455] ipam/ipam.go 1219: Attempting to assign 1 addresses from block 
block=192.168.84.0/26 handle="k8s-pod-network.74a2f64b90b9aebfa74387009e854efa188de9ef47eb8d048c7e0b4b38ff7e1e" host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:53.470684 containerd[1506]: 2025-11-01 02:41:53.389 [INFO][4455] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.74a2f64b90b9aebfa74387009e854efa188de9ef47eb8d048c7e0b4b38ff7e1e Nov 1 02:41:53.470684 containerd[1506]: 2025-11-01 02:41:53.409 [INFO][4455] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.84.0/26 handle="k8s-pod-network.74a2f64b90b9aebfa74387009e854efa188de9ef47eb8d048c7e0b4b38ff7e1e" host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:53.470684 containerd[1506]: 2025-11-01 02:41:53.420 [INFO][4455] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.84.4/26] block=192.168.84.0/26 handle="k8s-pod-network.74a2f64b90b9aebfa74387009e854efa188de9ef47eb8d048c7e0b4b38ff7e1e" host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:53.470684 containerd[1506]: 2025-11-01 02:41:53.420 [INFO][4455] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.84.4/26] handle="k8s-pod-network.74a2f64b90b9aebfa74387009e854efa188de9ef47eb8d048c7e0b4b38ff7e1e" host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:53.470684 containerd[1506]: 2025-11-01 02:41:53.420 [INFO][4455] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 02:41:53.470684 containerd[1506]: 2025-11-01 02:41:53.420 [INFO][4455] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.84.4/26] IPv6=[] ContainerID="74a2f64b90b9aebfa74387009e854efa188de9ef47eb8d048c7e0b4b38ff7e1e" HandleID="k8s-pod-network.74a2f64b90b9aebfa74387009e854efa188de9ef47eb8d048c7e0b4b38ff7e1e" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--gnqxq-eth0" Nov 1 02:41:53.473780 containerd[1506]: 2025-11-01 02:41:53.424 [INFO][4444] cni-plugin/k8s.go 418: Populated endpoint ContainerID="74a2f64b90b9aebfa74387009e854efa188de9ef47eb8d048c7e0b4b38ff7e1e" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dc58755-gnqxq" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--gnqxq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--gnqxq-eth0", GenerateName:"calico-apiserver-6f8dc58755-", Namespace:"calico-apiserver", SelfLink:"", UID:"b426dedc-f58e-4d16-a987-3056f24fa4d7", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 2, 41, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8dc58755", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-liqqm.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-6f8dc58755-gnqxq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.84.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0373ae4e081", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 02:41:53.473780 containerd[1506]: 2025-11-01 02:41:53.424 [INFO][4444] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.4/32] ContainerID="74a2f64b90b9aebfa74387009e854efa188de9ef47eb8d048c7e0b4b38ff7e1e" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dc58755-gnqxq" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--gnqxq-eth0" Nov 1 02:41:53.473780 containerd[1506]: 2025-11-01 02:41:53.424 [INFO][4444] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0373ae4e081 ContainerID="74a2f64b90b9aebfa74387009e854efa188de9ef47eb8d048c7e0b4b38ff7e1e" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dc58755-gnqxq" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--gnqxq-eth0" Nov 1 02:41:53.473780 containerd[1506]: 2025-11-01 02:41:53.443 [INFO][4444] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="74a2f64b90b9aebfa74387009e854efa188de9ef47eb8d048c7e0b4b38ff7e1e" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dc58755-gnqxq" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--gnqxq-eth0" Nov 1 02:41:53.473780 containerd[1506]: 2025-11-01 02:41:53.444 [INFO][4444] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="74a2f64b90b9aebfa74387009e854efa188de9ef47eb8d048c7e0b4b38ff7e1e" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dc58755-gnqxq" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--gnqxq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--gnqxq-eth0", GenerateName:"calico-apiserver-6f8dc58755-", Namespace:"calico-apiserver", SelfLink:"", UID:"b426dedc-f58e-4d16-a987-3056f24fa4d7", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 2, 41, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8dc58755", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-liqqm.gb1.brightbox.com", ContainerID:"74a2f64b90b9aebfa74387009e854efa188de9ef47eb8d048c7e0b4b38ff7e1e", Pod:"calico-apiserver-6f8dc58755-gnqxq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.84.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0373ae4e081", MAC:"46:6c:1e:1e:8f:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 02:41:53.473780 containerd[1506]: 2025-11-01 02:41:53.461 [INFO][4444] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="74a2f64b90b9aebfa74387009e854efa188de9ef47eb8d048c7e0b4b38ff7e1e" Namespace="calico-apiserver" Pod="calico-apiserver-6f8dc58755-gnqxq" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--gnqxq-eth0" Nov 1 02:41:53.493339 containerd[1506]: time="2025-11-01T02:41:53.492834117Z" level=info 
msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:41:53.495511 containerd[1506]: time="2025-11-01T02:41:53.495457090Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 02:41:53.495972 containerd[1506]: time="2025-11-01T02:41:53.495473172Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 02:41:53.496063 kubelet[2684]: E1101 02:41:53.495949 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 02:41:53.496063 kubelet[2684]: E1101 02:41:53.496043 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 02:41:53.496809 kubelet[2684]: E1101 02:41:53.496218 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-564f484f47-dnhd7_calico-system(0e9edcbd-6bb9-40da-89e3-329c8ee490a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 02:41:53.499494 containerd[1506]: 
time="2025-11-01T02:41:53.499455798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 02:41:53.556639 containerd[1506]: time="2025-11-01T02:41:53.555869610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 02:41:53.556639 containerd[1506]: time="2025-11-01T02:41:53.556332499Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 02:41:53.556639 containerd[1506]: time="2025-11-01T02:41:53.556400257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:41:53.559383 containerd[1506]: time="2025-11-01T02:41:53.557863329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:41:53.617665 systemd[1]: Started cri-containerd-74a2f64b90b9aebfa74387009e854efa188de9ef47eb8d048c7e0b4b38ff7e1e.scope - libcontainer container 74a2f64b90b9aebfa74387009e854efa188de9ef47eb8d048c7e0b4b38ff7e1e. 
Nov 1 02:41:53.754599 containerd[1506]: time="2025-11-01T02:41:53.752879982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f8dc58755-gnqxq,Uid:b426dedc-f58e-4d16-a987-3056f24fa4d7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"74a2f64b90b9aebfa74387009e854efa188de9ef47eb8d048c7e0b4b38ff7e1e\"" Nov 1 02:41:53.815675 containerd[1506]: time="2025-11-01T02:41:53.815616109Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:41:53.818210 containerd[1506]: time="2025-11-01T02:41:53.818125017Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 02:41:53.818381 containerd[1506]: time="2025-11-01T02:41:53.818282476Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 02:41:53.818600 kubelet[2684]: E1101 02:41:53.818542 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 02:41:53.819514 kubelet[2684]: E1101 02:41:53.818614 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 
02:41:53.819717 containerd[1506]: time="2025-11-01T02:41:53.819662638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 02:41:53.828507 kubelet[2684]: E1101 02:41:53.828458 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-564f484f47-dnhd7_calico-system(0e9edcbd-6bb9-40da-89e3-329c8ee490a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 02:41:53.828860 kubelet[2684]: E1101 02:41:53.828547 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-564f484f47-dnhd7" podUID="0e9edcbd-6bb9-40da-89e3-329c8ee490a3" Nov 1 02:41:53.860547 containerd[1506]: time="2025-11-01T02:41:53.860496431Z" level=info msg="StopPodSandbox for \"060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732\"" Nov 1 02:41:53.863991 containerd[1506]: time="2025-11-01T02:41:53.863717085Z" level=info msg="StopPodSandbox for \"2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7\"" Nov 1 02:41:53.865380 containerd[1506]: time="2025-11-01T02:41:53.865337816Z" level=info 
msg="StopPodSandbox for \"b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149\"" Nov 1 02:41:54.132494 containerd[1506]: time="2025-11-01T02:41:54.132205332Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:41:54.136106 containerd[1506]: time="2025-11-01T02:41:54.136041561Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 02:41:54.136235 containerd[1506]: time="2025-11-01T02:41:54.136182287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 02:41:54.136919 kubelet[2684]: E1101 02:41:54.136857 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 02:41:54.137099 kubelet[2684]: E1101 02:41:54.137055 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 02:41:54.138465 kubelet[2684]: E1101 02:41:54.137656 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6f8dc58755-gnqxq_calico-apiserver(b426dedc-f58e-4d16-a987-3056f24fa4d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 02:41:54.138465 kubelet[2684]: E1101 02:41:54.137728 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8dc58755-gnqxq" podUID="b426dedc-f58e-4d16-a987-3056f24fa4d7" Nov 1 02:41:54.190232 containerd[1506]: 2025-11-01 02:41:53.995 [INFO][4552] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" Nov 1 02:41:54.190232 containerd[1506]: 2025-11-01 02:41:53.996 [INFO][4552] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" iface="eth0" netns="/var/run/netns/cni-940325d2-4dd8-0add-3096-99ea40358673" Nov 1 02:41:54.190232 containerd[1506]: 2025-11-01 02:41:54.010 [INFO][4552] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" iface="eth0" netns="/var/run/netns/cni-940325d2-4dd8-0add-3096-99ea40358673" Nov 1 02:41:54.190232 containerd[1506]: 2025-11-01 02:41:54.013 [INFO][4552] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" iface="eth0" netns="/var/run/netns/cni-940325d2-4dd8-0add-3096-99ea40358673" Nov 1 02:41:54.190232 containerd[1506]: 2025-11-01 02:41:54.014 [INFO][4552] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" Nov 1 02:41:54.190232 containerd[1506]: 2025-11-01 02:41:54.017 [INFO][4552] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" Nov 1 02:41:54.190232 containerd[1506]: 2025-11-01 02:41:54.133 [INFO][4566] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" HandleID="k8s-pod-network.b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" Workload="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--kzkkp-eth0" Nov 1 02:41:54.190232 containerd[1506]: 2025-11-01 02:41:54.135 [INFO][4566] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 02:41:54.190232 containerd[1506]: 2025-11-01 02:41:54.135 [INFO][4566] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 02:41:54.190232 containerd[1506]: 2025-11-01 02:41:54.166 [WARNING][4566] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" HandleID="k8s-pod-network.b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" Workload="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--kzkkp-eth0" Nov 1 02:41:54.190232 containerd[1506]: 2025-11-01 02:41:54.167 [INFO][4566] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" HandleID="k8s-pod-network.b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" Workload="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--kzkkp-eth0" Nov 1 02:41:54.190232 containerd[1506]: 2025-11-01 02:41:54.177 [INFO][4566] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 02:41:54.190232 containerd[1506]: 2025-11-01 02:41:54.183 [INFO][4552] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" Nov 1 02:41:54.193661 containerd[1506]: time="2025-11-01T02:41:54.193615086Z" level=info msg="TearDown network for sandbox \"b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149\" successfully" Nov 1 02:41:54.193860 containerd[1506]: time="2025-11-01T02:41:54.193822636Z" level=info msg="StopPodSandbox for \"b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149\" returns successfully" Nov 1 02:41:54.202537 containerd[1506]: time="2025-11-01T02:41:54.202217074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kzkkp,Uid:2f47046c-5cfe-45c6-8991-37f59ad744e0,Namespace:kube-system,Attempt:1,}" Nov 1 02:41:54.203645 systemd[1]: run-netns-cni\x2d940325d2\x2d4dd8\x2d0add\x2d3096\x2d99ea40358673.mount: Deactivated successfully. 
Nov 1 02:41:54.239429 containerd[1506]: 2025-11-01 02:41:54.073 [INFO][4544] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" Nov 1 02:41:54.239429 containerd[1506]: 2025-11-01 02:41:54.081 [INFO][4544] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" iface="eth0" netns="/var/run/netns/cni-cd8f8e46-368a-0397-3b5e-84898eed1521" Nov 1 02:41:54.239429 containerd[1506]: 2025-11-01 02:41:54.081 [INFO][4544] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" iface="eth0" netns="/var/run/netns/cni-cd8f8e46-368a-0397-3b5e-84898eed1521" Nov 1 02:41:54.239429 containerd[1506]: 2025-11-01 02:41:54.082 [INFO][4544] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" iface="eth0" netns="/var/run/netns/cni-cd8f8e46-368a-0397-3b5e-84898eed1521" Nov 1 02:41:54.239429 containerd[1506]: 2025-11-01 02:41:54.082 [INFO][4544] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" Nov 1 02:41:54.239429 containerd[1506]: 2025-11-01 02:41:54.082 [INFO][4544] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" Nov 1 02:41:54.239429 containerd[1506]: 2025-11-01 02:41:54.199 [INFO][4573] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" HandleID="k8s-pod-network.060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--kube--controllers--754cd6d684--rxmrt-eth0" Nov 1 02:41:54.239429 containerd[1506]: 2025-11-01 02:41:54.200 
[INFO][4573] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 02:41:54.239429 containerd[1506]: 2025-11-01 02:41:54.200 [INFO][4573] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 02:41:54.239429 containerd[1506]: 2025-11-01 02:41:54.218 [WARNING][4573] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" HandleID="k8s-pod-network.060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--kube--controllers--754cd6d684--rxmrt-eth0" Nov 1 02:41:54.239429 containerd[1506]: 2025-11-01 02:41:54.219 [INFO][4573] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" HandleID="k8s-pod-network.060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--kube--controllers--754cd6d684--rxmrt-eth0" Nov 1 02:41:54.239429 containerd[1506]: 2025-11-01 02:41:54.226 [INFO][4573] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 02:41:54.239429 containerd[1506]: 2025-11-01 02:41:54.228 [INFO][4544] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" Nov 1 02:41:54.239429 containerd[1506]: time="2025-11-01T02:41:54.238850952Z" level=info msg="TearDown network for sandbox \"060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732\" successfully" Nov 1 02:41:54.239429 containerd[1506]: time="2025-11-01T02:41:54.238884282Z" level=info msg="StopPodSandbox for \"060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732\" returns successfully" Nov 1 02:41:54.245470 containerd[1506]: time="2025-11-01T02:41:54.244374457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-754cd6d684-rxmrt,Uid:efbd1db3-4d1b-4800-b03d-ce570a8bfb0d,Namespace:calico-system,Attempt:1,}" Nov 1 02:41:54.249298 systemd[1]: run-netns-cni\x2dcd8f8e46\x2d368a\x2d0397\x2d3b5e\x2d84898eed1521.mount: Deactivated successfully. Nov 1 02:41:54.312271 containerd[1506]: 2025-11-01 02:41:54.080 [INFO][4545] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7" Nov 1 02:41:54.312271 containerd[1506]: 2025-11-01 02:41:54.081 [INFO][4545] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7" iface="eth0" netns="/var/run/netns/cni-cb911a5d-6d34-42e8-912f-d8006a4e7bf3" Nov 1 02:41:54.312271 containerd[1506]: 2025-11-01 02:41:54.083 [INFO][4545] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7" iface="eth0" netns="/var/run/netns/cni-cb911a5d-6d34-42e8-912f-d8006a4e7bf3" Nov 1 02:41:54.312271 containerd[1506]: 2025-11-01 02:41:54.089 [INFO][4545] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7" iface="eth0" netns="/var/run/netns/cni-cb911a5d-6d34-42e8-912f-d8006a4e7bf3" Nov 1 02:41:54.312271 containerd[1506]: 2025-11-01 02:41:54.089 [INFO][4545] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7" Nov 1 02:41:54.312271 containerd[1506]: 2025-11-01 02:41:54.090 [INFO][4545] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7" Nov 1 02:41:54.312271 containerd[1506]: 2025-11-01 02:41:54.277 [INFO][4578] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7" HandleID="k8s-pod-network.2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7" Workload="srv--liqqm.gb1.brightbox.com-k8s-csi--node--driver--gvm6v-eth0" Nov 1 02:41:54.312271 containerd[1506]: 2025-11-01 02:41:54.277 [INFO][4578] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 02:41:54.312271 containerd[1506]: 2025-11-01 02:41:54.278 [INFO][4578] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 02:41:54.312271 containerd[1506]: 2025-11-01 02:41:54.295 [WARNING][4578] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7" HandleID="k8s-pod-network.2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7" Workload="srv--liqqm.gb1.brightbox.com-k8s-csi--node--driver--gvm6v-eth0" Nov 1 02:41:54.312271 containerd[1506]: 2025-11-01 02:41:54.296 [INFO][4578] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7" HandleID="k8s-pod-network.2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7" Workload="srv--liqqm.gb1.brightbox.com-k8s-csi--node--driver--gvm6v-eth0" Nov 1 02:41:54.312271 containerd[1506]: 2025-11-01 02:41:54.300 [INFO][4578] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 02:41:54.312271 containerd[1506]: 2025-11-01 02:41:54.304 [INFO][4545] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7" Nov 1 02:41:54.313697 containerd[1506]: time="2025-11-01T02:41:54.312704872Z" level=info msg="TearDown network for sandbox \"2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7\" successfully" Nov 1 02:41:54.313697 containerd[1506]: time="2025-11-01T02:41:54.312777209Z" level=info msg="StopPodSandbox for \"2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7\" returns successfully" Nov 1 02:41:54.320182 containerd[1506]: time="2025-11-01T02:41:54.317807707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gvm6v,Uid:2db811c5-1134-445e-9e39-ac0e7ee1b427,Namespace:calico-system,Attempt:1,}" Nov 1 02:41:54.324893 kubelet[2684]: E1101 02:41:54.324633 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8dc58755-gnqxq" podUID="b426dedc-f58e-4d16-a987-3056f24fa4d7" Nov 1 02:41:54.331658 kubelet[2684]: E1101 02:41:54.331506 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-564f484f47-dnhd7" podUID="0e9edcbd-6bb9-40da-89e3-329c8ee490a3" Nov 1 02:41:54.406778 systemd-networkd[1431]: calia297376070a: Gained IPv6LL Nov 1 02:41:54.526488 kernel: bpftool[4667]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 1 02:41:54.597902 systemd[1]: run-netns-cni\x2dcb911a5d\x2d6d34\x2d42e8\x2d912f\x2dd8006a4e7bf3.mount: Deactivated successfully. 
Nov 1 02:41:54.688432 systemd-networkd[1431]: calica33833be06: Link UP Nov 1 02:41:54.696136 systemd-networkd[1431]: calica33833be06: Gained carrier Nov 1 02:41:54.729522 containerd[1506]: 2025-11-01 02:41:54.388 [INFO][4589] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--kzkkp-eth0 coredns-66bc5c9577- kube-system 2f47046c-5cfe-45c6-8991-37f59ad744e0 1013 0 2025-11-01 02:41:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-liqqm.gb1.brightbox.com coredns-66bc5c9577-kzkkp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calica33833be06 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="1c5b797d84f110b28d96f58db3baab96628c33696e25a994df98ec6659a99a20" Namespace="kube-system" Pod="coredns-66bc5c9577-kzkkp" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--kzkkp-" Nov 1 02:41:54.729522 containerd[1506]: 2025-11-01 02:41:54.388 [INFO][4589] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1c5b797d84f110b28d96f58db3baab96628c33696e25a994df98ec6659a99a20" Namespace="kube-system" Pod="coredns-66bc5c9577-kzkkp" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--kzkkp-eth0" Nov 1 02:41:54.729522 containerd[1506]: 2025-11-01 02:41:54.557 [INFO][4629] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c5b797d84f110b28d96f58db3baab96628c33696e25a994df98ec6659a99a20" HandleID="k8s-pod-network.1c5b797d84f110b28d96f58db3baab96628c33696e25a994df98ec6659a99a20" Workload="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--kzkkp-eth0" Nov 1 02:41:54.729522 containerd[1506]: 2025-11-01 02:41:54.559 [INFO][4629] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1c5b797d84f110b28d96f58db3baab96628c33696e25a994df98ec6659a99a20" HandleID="k8s-pod-network.1c5b797d84f110b28d96f58db3baab96628c33696e25a994df98ec6659a99a20" Workload="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--kzkkp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004eaa0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-liqqm.gb1.brightbox.com", "pod":"coredns-66bc5c9577-kzkkp", "timestamp":"2025-11-01 02:41:54.557166606 +0000 UTC"}, Hostname:"srv-liqqm.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 02:41:54.729522 containerd[1506]: 2025-11-01 02:41:54.559 [INFO][4629] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 02:41:54.729522 containerd[1506]: 2025-11-01 02:41:54.559 [INFO][4629] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 02:41:54.729522 containerd[1506]: 2025-11-01 02:41:54.559 [INFO][4629] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-liqqm.gb1.brightbox.com' Nov 1 02:41:54.729522 containerd[1506]: 2025-11-01 02:41:54.577 [INFO][4629] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1c5b797d84f110b28d96f58db3baab96628c33696e25a994df98ec6659a99a20" host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:54.729522 containerd[1506]: 2025-11-01 02:41:54.604 [INFO][4629] ipam/ipam.go 394: Looking up existing affinities for host host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:54.729522 containerd[1506]: 2025-11-01 02:41:54.625 [INFO][4629] ipam/ipam.go 511: Trying affinity for 192.168.84.0/26 host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:54.729522 containerd[1506]: 2025-11-01 02:41:54.630 [INFO][4629] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.0/26 host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:54.729522 containerd[1506]: 2025-11-01 02:41:54.635 [INFO][4629] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.84.0/26 host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:54.729522 containerd[1506]: 2025-11-01 02:41:54.635 [INFO][4629] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.84.0/26 handle="k8s-pod-network.1c5b797d84f110b28d96f58db3baab96628c33696e25a994df98ec6659a99a20" host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:54.729522 containerd[1506]: 2025-11-01 02:41:54.641 [INFO][4629] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1c5b797d84f110b28d96f58db3baab96628c33696e25a994df98ec6659a99a20 Nov 1 02:41:54.729522 containerd[1506]: 2025-11-01 02:41:54.659 [INFO][4629] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.84.0/26 handle="k8s-pod-network.1c5b797d84f110b28d96f58db3baab96628c33696e25a994df98ec6659a99a20" host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:54.729522 containerd[1506]: 2025-11-01 02:41:54.673 [INFO][4629] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.84.5/26] block=192.168.84.0/26 handle="k8s-pod-network.1c5b797d84f110b28d96f58db3baab96628c33696e25a994df98ec6659a99a20" host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:54.729522 containerd[1506]: 2025-11-01 02:41:54.673 [INFO][4629] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.84.5/26] handle="k8s-pod-network.1c5b797d84f110b28d96f58db3baab96628c33696e25a994df98ec6659a99a20" host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:54.729522 containerd[1506]: 2025-11-01 02:41:54.673 [INFO][4629] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 02:41:54.729522 containerd[1506]: 2025-11-01 02:41:54.674 [INFO][4629] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.84.5/26] IPv6=[] ContainerID="1c5b797d84f110b28d96f58db3baab96628c33696e25a994df98ec6659a99a20" HandleID="k8s-pod-network.1c5b797d84f110b28d96f58db3baab96628c33696e25a994df98ec6659a99a20" Workload="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--kzkkp-eth0" Nov 1 02:41:54.736098 containerd[1506]: 2025-11-01 02:41:54.680 [INFO][4589] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1c5b797d84f110b28d96f58db3baab96628c33696e25a994df98ec6659a99a20" Namespace="kube-system" Pod="coredns-66bc5c9577-kzkkp" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--kzkkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--kzkkp-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"2f47046c-5cfe-45c6-8991-37f59ad744e0", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 2, 41, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-liqqm.gb1.brightbox.com", ContainerID:"", Pod:"coredns-66bc5c9577-kzkkp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calica33833be06", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 02:41:54.736098 containerd[1506]: 2025-11-01 02:41:54.681 [INFO][4589] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.5/32] ContainerID="1c5b797d84f110b28d96f58db3baab96628c33696e25a994df98ec6659a99a20" Namespace="kube-system" Pod="coredns-66bc5c9577-kzkkp" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--kzkkp-eth0" Nov 1 02:41:54.736098 containerd[1506]: 2025-11-01 02:41:54.681 [INFO][4589] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calica33833be06 
ContainerID="1c5b797d84f110b28d96f58db3baab96628c33696e25a994df98ec6659a99a20" Namespace="kube-system" Pod="coredns-66bc5c9577-kzkkp" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--kzkkp-eth0" Nov 1 02:41:54.736098 containerd[1506]: 2025-11-01 02:41:54.699 [INFO][4589] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1c5b797d84f110b28d96f58db3baab96628c33696e25a994df98ec6659a99a20" Namespace="kube-system" Pod="coredns-66bc5c9577-kzkkp" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--kzkkp-eth0" Nov 1 02:41:54.736098 containerd[1506]: 2025-11-01 02:41:54.702 [INFO][4589] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1c5b797d84f110b28d96f58db3baab96628c33696e25a994df98ec6659a99a20" Namespace="kube-system" Pod="coredns-66bc5c9577-kzkkp" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--kzkkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--kzkkp-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"2f47046c-5cfe-45c6-8991-37f59ad744e0", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 2, 41, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-liqqm.gb1.brightbox.com", 
ContainerID:"1c5b797d84f110b28d96f58db3baab96628c33696e25a994df98ec6659a99a20", Pod:"coredns-66bc5c9577-kzkkp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calica33833be06", MAC:"52:06:71:c0:8c:53", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 02:41:54.739990 containerd[1506]: 2025-11-01 02:41:54.720 [INFO][4589] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1c5b797d84f110b28d96f58db3baab96628c33696e25a994df98ec6659a99a20" Namespace="kube-system" Pod="coredns-66bc5c9577-kzkkp" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--kzkkp-eth0" Nov 1 02:41:54.790141 containerd[1506]: time="2025-11-01T02:41:54.789847658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 02:41:54.790141 containerd[1506]: time="2025-11-01T02:41:54.789936621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 02:41:54.790141 containerd[1506]: time="2025-11-01T02:41:54.789954611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:41:54.791506 containerd[1506]: time="2025-11-01T02:41:54.790078834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:41:54.831539 systemd[1]: Started cri-containerd-1c5b797d84f110b28d96f58db3baab96628c33696e25a994df98ec6659a99a20.scope - libcontainer container 1c5b797d84f110b28d96f58db3baab96628c33696e25a994df98ec6659a99a20. Nov 1 02:41:54.836973 systemd-networkd[1431]: cali791b4a20c49: Link UP Nov 1 02:41:54.838227 systemd-networkd[1431]: cali791b4a20c49: Gained carrier Nov 1 02:41:54.865178 containerd[1506]: 2025-11-01 02:41:54.425 [INFO][4599] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--liqqm.gb1.brightbox.com-k8s-calico--kube--controllers--754cd6d684--rxmrt-eth0 calico-kube-controllers-754cd6d684- calico-system efbd1db3-4d1b-4800-b03d-ce570a8bfb0d 1014 0 2025-11-01 02:41:24 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:754cd6d684 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-liqqm.gb1.brightbox.com calico-kube-controllers-754cd6d684-rxmrt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali791b4a20c49 [] [] }} ContainerID="df58e45e4600d315f7c195f1e511239581a08cf618e53d77db03f7fdd2dd6d8a" Namespace="calico-system" Pod="calico-kube-controllers-754cd6d684-rxmrt" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-calico--kube--controllers--754cd6d684--rxmrt-" Nov 1 02:41:54.865178 containerd[1506]: 2025-11-01 02:41:54.425 
[INFO][4599] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="df58e45e4600d315f7c195f1e511239581a08cf618e53d77db03f7fdd2dd6d8a" Namespace="calico-system" Pod="calico-kube-controllers-754cd6d684-rxmrt" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-calico--kube--controllers--754cd6d684--rxmrt-eth0" Nov 1 02:41:54.865178 containerd[1506]: 2025-11-01 02:41:54.586 [INFO][4641] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df58e45e4600d315f7c195f1e511239581a08cf618e53d77db03f7fdd2dd6d8a" HandleID="k8s-pod-network.df58e45e4600d315f7c195f1e511239581a08cf618e53d77db03f7fdd2dd6d8a" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--kube--controllers--754cd6d684--rxmrt-eth0" Nov 1 02:41:54.865178 containerd[1506]: 2025-11-01 02:41:54.587 [INFO][4641] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="df58e45e4600d315f7c195f1e511239581a08cf618e53d77db03f7fdd2dd6d8a" HandleID="k8s-pod-network.df58e45e4600d315f7c195f1e511239581a08cf618e53d77db03f7fdd2dd6d8a" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--kube--controllers--754cd6d684--rxmrt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f100), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-liqqm.gb1.brightbox.com", "pod":"calico-kube-controllers-754cd6d684-rxmrt", "timestamp":"2025-11-01 02:41:54.586596094 +0000 UTC"}, Hostname:"srv-liqqm.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 02:41:54.865178 containerd[1506]: 2025-11-01 02:41:54.587 [INFO][4641] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 02:41:54.865178 containerd[1506]: 2025-11-01 02:41:54.675 [INFO][4641] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 02:41:54.865178 containerd[1506]: 2025-11-01 02:41:54.675 [INFO][4641] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-liqqm.gb1.brightbox.com' Nov 1 02:41:54.865178 containerd[1506]: 2025-11-01 02:41:54.709 [INFO][4641] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.df58e45e4600d315f7c195f1e511239581a08cf618e53d77db03f7fdd2dd6d8a" host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:54.865178 containerd[1506]: 2025-11-01 02:41:54.732 [INFO][4641] ipam/ipam.go 394: Looking up existing affinities for host host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:54.865178 containerd[1506]: 2025-11-01 02:41:54.759 [INFO][4641] ipam/ipam.go 511: Trying affinity for 192.168.84.0/26 host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:54.865178 containerd[1506]: 2025-11-01 02:41:54.765 [INFO][4641] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.0/26 host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:54.865178 containerd[1506]: 2025-11-01 02:41:54.772 [INFO][4641] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.84.0/26 host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:54.865178 containerd[1506]: 2025-11-01 02:41:54.772 [INFO][4641] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.84.0/26 handle="k8s-pod-network.df58e45e4600d315f7c195f1e511239581a08cf618e53d77db03f7fdd2dd6d8a" host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:54.865178 containerd[1506]: 2025-11-01 02:41:54.775 [INFO][4641] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.df58e45e4600d315f7c195f1e511239581a08cf618e53d77db03f7fdd2dd6d8a Nov 1 02:41:54.865178 containerd[1506]: 2025-11-01 02:41:54.808 [INFO][4641] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.84.0/26 handle="k8s-pod-network.df58e45e4600d315f7c195f1e511239581a08cf618e53d77db03f7fdd2dd6d8a" host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:54.865178 containerd[1506]: 2025-11-01 02:41:54.824 [INFO][4641] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.84.6/26] block=192.168.84.0/26 handle="k8s-pod-network.df58e45e4600d315f7c195f1e511239581a08cf618e53d77db03f7fdd2dd6d8a" host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:54.865178 containerd[1506]: 2025-11-01 02:41:54.824 [INFO][4641] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.84.6/26] handle="k8s-pod-network.df58e45e4600d315f7c195f1e511239581a08cf618e53d77db03f7fdd2dd6d8a" host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:54.865178 containerd[1506]: 2025-11-01 02:41:54.824 [INFO][4641] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 02:41:54.865178 containerd[1506]: 2025-11-01 02:41:54.824 [INFO][4641] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.84.6/26] IPv6=[] ContainerID="df58e45e4600d315f7c195f1e511239581a08cf618e53d77db03f7fdd2dd6d8a" HandleID="k8s-pod-network.df58e45e4600d315f7c195f1e511239581a08cf618e53d77db03f7fdd2dd6d8a" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--kube--controllers--754cd6d684--rxmrt-eth0" Nov 1 02:41:54.869891 containerd[1506]: 2025-11-01 02:41:54.829 [INFO][4599] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df58e45e4600d315f7c195f1e511239581a08cf618e53d77db03f7fdd2dd6d8a" Namespace="calico-system" Pod="calico-kube-controllers-754cd6d684-rxmrt" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-calico--kube--controllers--754cd6d684--rxmrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--liqqm.gb1.brightbox.com-k8s-calico--kube--controllers--754cd6d684--rxmrt-eth0", GenerateName:"calico-kube-controllers-754cd6d684-", Namespace:"calico-system", SelfLink:"", UID:"efbd1db3-4d1b-4800-b03d-ce570a8bfb0d", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 2, 41, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"754cd6d684", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-liqqm.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-754cd6d684-rxmrt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.84.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali791b4a20c49", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 02:41:54.869891 containerd[1506]: 2025-11-01 02:41:54.830 [INFO][4599] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.6/32] ContainerID="df58e45e4600d315f7c195f1e511239581a08cf618e53d77db03f7fdd2dd6d8a" Namespace="calico-system" Pod="calico-kube-controllers-754cd6d684-rxmrt" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-calico--kube--controllers--754cd6d684--rxmrt-eth0" Nov 1 02:41:54.869891 containerd[1506]: 2025-11-01 02:41:54.830 [INFO][4599] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali791b4a20c49 ContainerID="df58e45e4600d315f7c195f1e511239581a08cf618e53d77db03f7fdd2dd6d8a" Namespace="calico-system" Pod="calico-kube-controllers-754cd6d684-rxmrt" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-calico--kube--controllers--754cd6d684--rxmrt-eth0" Nov 1 02:41:54.869891 containerd[1506]: 2025-11-01 02:41:54.838 [INFO][4599] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="df58e45e4600d315f7c195f1e511239581a08cf618e53d77db03f7fdd2dd6d8a" Namespace="calico-system" Pod="calico-kube-controllers-754cd6d684-rxmrt" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-calico--kube--controllers--754cd6d684--rxmrt-eth0" Nov 1 02:41:54.869891 containerd[1506]: 2025-11-01 02:41:54.839 [INFO][4599] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="df58e45e4600d315f7c195f1e511239581a08cf618e53d77db03f7fdd2dd6d8a" Namespace="calico-system" Pod="calico-kube-controllers-754cd6d684-rxmrt" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-calico--kube--controllers--754cd6d684--rxmrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--liqqm.gb1.brightbox.com-k8s-calico--kube--controllers--754cd6d684--rxmrt-eth0", GenerateName:"calico-kube-controllers-754cd6d684-", Namespace:"calico-system", SelfLink:"", UID:"efbd1db3-4d1b-4800-b03d-ce570a8bfb0d", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 2, 41, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"754cd6d684", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-liqqm.gb1.brightbox.com", ContainerID:"df58e45e4600d315f7c195f1e511239581a08cf618e53d77db03f7fdd2dd6d8a", Pod:"calico-kube-controllers-754cd6d684-rxmrt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.84.6/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali791b4a20c49", MAC:"ce:73:30:37:ce:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 02:41:54.869891 containerd[1506]: 2025-11-01 02:41:54.861 [INFO][4599] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="df58e45e4600d315f7c195f1e511239581a08cf618e53d77db03f7fdd2dd6d8a" Namespace="calico-system" Pod="calico-kube-controllers-754cd6d684-rxmrt" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-calico--kube--controllers--754cd6d684--rxmrt-eth0" Nov 1 02:41:54.922691 containerd[1506]: time="2025-11-01T02:41:54.922113305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 02:41:54.922691 containerd[1506]: time="2025-11-01T02:41:54.922208577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 02:41:54.922691 containerd[1506]: time="2025-11-01T02:41:54.922233544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:41:54.923306 containerd[1506]: time="2025-11-01T02:41:54.922375555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:41:54.998707 systemd[1]: Started cri-containerd-df58e45e4600d315f7c195f1e511239581a08cf618e53d77db03f7fdd2dd6d8a.scope - libcontainer container df58e45e4600d315f7c195f1e511239581a08cf618e53d77db03f7fdd2dd6d8a. 
Nov 1 02:41:55.013643 systemd-networkd[1431]: calibffa4db4070: Link UP Nov 1 02:41:55.016602 systemd-networkd[1431]: calibffa4db4070: Gained carrier Nov 1 02:41:55.077447 containerd[1506]: 2025-11-01 02:41:54.526 [INFO][4615] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--liqqm.gb1.brightbox.com-k8s-csi--node--driver--gvm6v-eth0 csi-node-driver- calico-system 2db811c5-1134-445e-9e39-ac0e7ee1b427 1015 0 2025-11-01 02:41:24 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-liqqm.gb1.brightbox.com csi-node-driver-gvm6v eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibffa4db4070 [] [] }} ContainerID="9ed979c9da811954b62984587ee58dc14bd71050fc24c0b7179a378b03b5ba96" Namespace="calico-system" Pod="csi-node-driver-gvm6v" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-csi--node--driver--gvm6v-" Nov 1 02:41:55.077447 containerd[1506]: 2025-11-01 02:41:54.527 [INFO][4615] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9ed979c9da811954b62984587ee58dc14bd71050fc24c0b7179a378b03b5ba96" Namespace="calico-system" Pod="csi-node-driver-gvm6v" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-csi--node--driver--gvm6v-eth0" Nov 1 02:41:55.077447 containerd[1506]: 2025-11-01 02:41:54.657 [INFO][4670] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ed979c9da811954b62984587ee58dc14bd71050fc24c0b7179a378b03b5ba96" HandleID="k8s-pod-network.9ed979c9da811954b62984587ee58dc14bd71050fc24c0b7179a378b03b5ba96" Workload="srv--liqqm.gb1.brightbox.com-k8s-csi--node--driver--gvm6v-eth0" Nov 1 02:41:55.077447 containerd[1506]: 2025-11-01 02:41:54.658 [INFO][4670] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9ed979c9da811954b62984587ee58dc14bd71050fc24c0b7179a378b03b5ba96" HandleID="k8s-pod-network.9ed979c9da811954b62984587ee58dc14bd71050fc24c0b7179a378b03b5ba96" Workload="srv--liqqm.gb1.brightbox.com-k8s-csi--node--driver--gvm6v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f340), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-liqqm.gb1.brightbox.com", "pod":"csi-node-driver-gvm6v", "timestamp":"2025-11-01 02:41:54.65751761 +0000 UTC"}, Hostname:"srv-liqqm.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 02:41:55.077447 containerd[1506]: 2025-11-01 02:41:54.658 [INFO][4670] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 02:41:55.077447 containerd[1506]: 2025-11-01 02:41:54.827 [INFO][4670] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 02:41:55.077447 containerd[1506]: 2025-11-01 02:41:54.827 [INFO][4670] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-liqqm.gb1.brightbox.com' Nov 1 02:41:55.077447 containerd[1506]: 2025-11-01 02:41:54.901 [INFO][4670] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9ed979c9da811954b62984587ee58dc14bd71050fc24c0b7179a378b03b5ba96" host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:55.077447 containerd[1506]: 2025-11-01 02:41:54.912 [INFO][4670] ipam/ipam.go 394: Looking up existing affinities for host host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:55.077447 containerd[1506]: 2025-11-01 02:41:54.950 [INFO][4670] ipam/ipam.go 511: Trying affinity for 192.168.84.0/26 host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:55.077447 containerd[1506]: 2025-11-01 02:41:54.959 [INFO][4670] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.0/26 host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:55.077447 containerd[1506]: 2025-11-01 02:41:54.965 [INFO][4670] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.84.0/26 host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:55.077447 containerd[1506]: 2025-11-01 02:41:54.965 [INFO][4670] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.84.0/26 handle="k8s-pod-network.9ed979c9da811954b62984587ee58dc14bd71050fc24c0b7179a378b03b5ba96" host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:55.077447 containerd[1506]: 2025-11-01 02:41:54.968 [INFO][4670] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9ed979c9da811954b62984587ee58dc14bd71050fc24c0b7179a378b03b5ba96 Nov 1 02:41:55.077447 containerd[1506]: 2025-11-01 02:41:54.978 [INFO][4670] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.84.0/26 handle="k8s-pod-network.9ed979c9da811954b62984587ee58dc14bd71050fc24c0b7179a378b03b5ba96" host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:55.077447 containerd[1506]: 2025-11-01 02:41:54.994 [INFO][4670] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.84.7/26] block=192.168.84.0/26 handle="k8s-pod-network.9ed979c9da811954b62984587ee58dc14bd71050fc24c0b7179a378b03b5ba96" host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:55.077447 containerd[1506]: 2025-11-01 02:41:54.994 [INFO][4670] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.84.7/26] handle="k8s-pod-network.9ed979c9da811954b62984587ee58dc14bd71050fc24c0b7179a378b03b5ba96" host="srv-liqqm.gb1.brightbox.com" Nov 1 02:41:55.077447 containerd[1506]: 2025-11-01 02:41:54.994 [INFO][4670] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 02:41:55.077447 containerd[1506]: 2025-11-01 02:41:54.994 [INFO][4670] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.84.7/26] IPv6=[] ContainerID="9ed979c9da811954b62984587ee58dc14bd71050fc24c0b7179a378b03b5ba96" HandleID="k8s-pod-network.9ed979c9da811954b62984587ee58dc14bd71050fc24c0b7179a378b03b5ba96" Workload="srv--liqqm.gb1.brightbox.com-k8s-csi--node--driver--gvm6v-eth0" Nov 1 02:41:55.082506 containerd[1506]: 2025-11-01 02:41:55.001 [INFO][4615] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9ed979c9da811954b62984587ee58dc14bd71050fc24c0b7179a378b03b5ba96" Namespace="calico-system" Pod="csi-node-driver-gvm6v" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-csi--node--driver--gvm6v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--liqqm.gb1.brightbox.com-k8s-csi--node--driver--gvm6v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2db811c5-1134-445e-9e39-ac0e7ee1b427", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 2, 41, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-liqqm.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-gvm6v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.84.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibffa4db4070", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 02:41:55.082506 containerd[1506]: 2025-11-01 02:41:55.002 [INFO][4615] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.7/32] ContainerID="9ed979c9da811954b62984587ee58dc14bd71050fc24c0b7179a378b03b5ba96" Namespace="calico-system" Pod="csi-node-driver-gvm6v" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-csi--node--driver--gvm6v-eth0" Nov 1 02:41:55.082506 containerd[1506]: 2025-11-01 02:41:55.002 [INFO][4615] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibffa4db4070 ContainerID="9ed979c9da811954b62984587ee58dc14bd71050fc24c0b7179a378b03b5ba96" Namespace="calico-system" Pod="csi-node-driver-gvm6v" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-csi--node--driver--gvm6v-eth0" Nov 1 02:41:55.082506 containerd[1506]: 2025-11-01 02:41:55.018 [INFO][4615] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ed979c9da811954b62984587ee58dc14bd71050fc24c0b7179a378b03b5ba96" Namespace="calico-system" Pod="csi-node-driver-gvm6v" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-csi--node--driver--gvm6v-eth0" Nov 1 02:41:55.082506 
containerd[1506]: 2025-11-01 02:41:55.027 [INFO][4615] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9ed979c9da811954b62984587ee58dc14bd71050fc24c0b7179a378b03b5ba96" Namespace="calico-system" Pod="csi-node-driver-gvm6v" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-csi--node--driver--gvm6v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--liqqm.gb1.brightbox.com-k8s-csi--node--driver--gvm6v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2db811c5-1134-445e-9e39-ac0e7ee1b427", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 2, 41, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-liqqm.gb1.brightbox.com", ContainerID:"9ed979c9da811954b62984587ee58dc14bd71050fc24c0b7179a378b03b5ba96", Pod:"csi-node-driver-gvm6v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.84.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibffa4db4070", MAC:"76:5f:8c:77:a5:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 02:41:55.082506 containerd[1506]: 
2025-11-01 02:41:55.068 [INFO][4615] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9ed979c9da811954b62984587ee58dc14bd71050fc24c0b7179a378b03b5ba96" Namespace="calico-system" Pod="csi-node-driver-gvm6v" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-csi--node--driver--gvm6v-eth0" Nov 1 02:41:55.082506 containerd[1506]: time="2025-11-01T02:41:55.082475224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kzkkp,Uid:2f47046c-5cfe-45c6-8991-37f59ad744e0,Namespace:kube-system,Attempt:1,} returns sandbox id \"1c5b797d84f110b28d96f58db3baab96628c33696e25a994df98ec6659a99a20\"" Nov 1 02:41:55.100065 containerd[1506]: time="2025-11-01T02:41:55.099715671Z" level=info msg="CreateContainer within sandbox \"1c5b797d84f110b28d96f58db3baab96628c33696e25a994df98ec6659a99a20\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 02:41:55.121838 containerd[1506]: time="2025-11-01T02:41:55.121384864Z" level=info msg="CreateContainer within sandbox \"1c5b797d84f110b28d96f58db3baab96628c33696e25a994df98ec6659a99a20\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"29a1613ff9cfe52cf08845fb87d4e140f7bb4339c6b6cfca18810e0fbea5a7e9\"" Nov 1 02:41:55.126548 containerd[1506]: time="2025-11-01T02:41:55.125053547Z" level=info msg="StartContainer for \"29a1613ff9cfe52cf08845fb87d4e140f7bb4339c6b6cfca18810e0fbea5a7e9\"" Nov 1 02:41:55.175561 containerd[1506]: time="2025-11-01T02:41:55.173942227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 02:41:55.175561 containerd[1506]: time="2025-11-01T02:41:55.174059151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 02:41:55.189879 containerd[1506]: time="2025-11-01T02:41:55.174183155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:41:55.189879 containerd[1506]: time="2025-11-01T02:41:55.189776731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:41:55.248628 systemd[1]: Started cri-containerd-9ed979c9da811954b62984587ee58dc14bd71050fc24c0b7179a378b03b5ba96.scope - libcontainer container 9ed979c9da811954b62984587ee58dc14bd71050fc24c0b7179a378b03b5ba96. Nov 1 02:41:55.260727 systemd[1]: Started cri-containerd-29a1613ff9cfe52cf08845fb87d4e140f7bb4339c6b6cfca18810e0fbea5a7e9.scope - libcontainer container 29a1613ff9cfe52cf08845fb87d4e140f7bb4339c6b6cfca18810e0fbea5a7e9. Nov 1 02:41:55.289952 containerd[1506]: time="2025-11-01T02:41:55.288560767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-754cd6d684-rxmrt,Uid:efbd1db3-4d1b-4800-b03d-ce570a8bfb0d,Namespace:calico-system,Attempt:1,} returns sandbox id \"df58e45e4600d315f7c195f1e511239581a08cf618e53d77db03f7fdd2dd6d8a\"" Nov 1 02:41:55.297654 containerd[1506]: time="2025-11-01T02:41:55.297600923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 02:41:55.302685 systemd-networkd[1431]: cali0373ae4e081: Gained IPv6LL Nov 1 02:41:55.384875 containerd[1506]: time="2025-11-01T02:41:55.384815696Z" level=info msg="StartContainer for \"29a1613ff9cfe52cf08845fb87d4e140f7bb4339c6b6cfca18810e0fbea5a7e9\" returns successfully" Nov 1 02:41:55.401236 kubelet[2684]: E1101 02:41:55.401164 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8dc58755-gnqxq" podUID="b426dedc-f58e-4d16-a987-3056f24fa4d7" Nov 1 02:41:55.424164 containerd[1506]: time="2025-11-01T02:41:55.423933774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gvm6v,Uid:2db811c5-1134-445e-9e39-ac0e7ee1b427,Namespace:calico-system,Attempt:1,} returns sandbox id \"9ed979c9da811954b62984587ee58dc14bd71050fc24c0b7179a378b03b5ba96\"" Nov 1 02:41:55.608259 systemd-networkd[1431]: vxlan.calico: Link UP Nov 1 02:41:55.608274 systemd-networkd[1431]: vxlan.calico: Gained carrier Nov 1 02:41:55.648226 containerd[1506]: time="2025-11-01T02:41:55.648044436Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:41:55.649830 containerd[1506]: time="2025-11-01T02:41:55.649577751Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 02:41:55.650282 containerd[1506]: time="2025-11-01T02:41:55.649655695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 02:41:55.650406 kubelet[2684]: E1101 02:41:55.650277 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 02:41:55.650406 kubelet[2684]: E1101 02:41:55.650343 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 02:41:55.651215 kubelet[2684]: E1101 02:41:55.650569 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-754cd6d684-rxmrt_calico-system(efbd1db3-4d1b-4800-b03d-ce570a8bfb0d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 02:41:55.651215 kubelet[2684]: E1101 02:41:55.650630 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-754cd6d684-rxmrt" podUID="efbd1db3-4d1b-4800-b03d-ce570a8bfb0d" Nov 1 02:41:55.651409 containerd[1506]: time="2025-11-01T02:41:55.650783673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 02:41:55.751620 systemd-networkd[1431]: calica33833be06: Gained IPv6LL Nov 1 02:41:55.983097 containerd[1506]: time="2025-11-01T02:41:55.982695994Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:41:55.985332 containerd[1506]: time="2025-11-01T02:41:55.985194525Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 02:41:55.985332 containerd[1506]: time="2025-11-01T02:41:55.985268537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 02:41:55.986082 kubelet[2684]: E1101 02:41:55.985576 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 02:41:55.986082 kubelet[2684]: E1101 02:41:55.985740 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 02:41:55.986082 kubelet[2684]: E1101 02:41:55.985864 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-gvm6v_calico-system(2db811c5-1134-445e-9e39-ac0e7ee1b427): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 02:41:55.989069 containerd[1506]: time="2025-11-01T02:41:55.989029372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 02:41:56.300189 containerd[1506]: time="2025-11-01T02:41:56.298312338Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:41:56.303284 containerd[1506]: 
time="2025-11-01T02:41:56.300568897Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 02:41:56.303284 containerd[1506]: time="2025-11-01T02:41:56.300687811Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 02:41:56.303424 kubelet[2684]: E1101 02:41:56.301971 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 02:41:56.303424 kubelet[2684]: E1101 02:41:56.302071 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 02:41:56.303424 kubelet[2684]: E1101 02:41:56.302201 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-gvm6v_calico-system(2db811c5-1134-445e-9e39-ac0e7ee1b427): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 02:41:56.303735 kubelet[2684]: E1101 02:41:56.302323 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gvm6v" podUID="2db811c5-1134-445e-9e39-ac0e7ee1b427" Nov 1 02:41:56.417944 kubelet[2684]: E1101 02:41:56.417592 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-754cd6d684-rxmrt" podUID="efbd1db3-4d1b-4800-b03d-ce570a8bfb0d" Nov 1 02:41:56.419099 kubelet[2684]: E1101 02:41:56.418770 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gvm6v" podUID="2db811c5-1134-445e-9e39-ac0e7ee1b427" Nov 1 02:41:56.454892 systemd-networkd[1431]: cali791b4a20c49: Gained IPv6LL Nov 1 02:41:56.464396 kubelet[2684]: I1101 02:41:56.463587 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-kzkkp" podStartSLOduration=53.463550987 podStartE2EDuration="53.463550987s" podCreationTimestamp="2025-11-01 02:41:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 02:41:56.433490831 +0000 UTC m=+58.881261080" watchObservedRunningTime="2025-11-01 02:41:56.463550987 +0000 UTC m=+58.911321216" Nov 1 02:41:56.902922 systemd-networkd[1431]: calibffa4db4070: Gained IPv6LL Nov 1 02:41:57.030730 systemd-networkd[1431]: vxlan.calico: Gained IPv6LL Nov 1 02:41:57.420969 kubelet[2684]: E1101 02:41:57.420908 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gvm6v" podUID="2db811c5-1134-445e-9e39-ac0e7ee1b427" Nov 1 02:41:57.857716 containerd[1506]: time="2025-11-01T02:41:57.857664800Z" level=info msg="StopPodSandbox for \"b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149\"" Nov 1 02:41:57.998483 containerd[1506]: 2025-11-01 02:41:57.921 [WARNING][4969] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--kzkkp-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"2f47046c-5cfe-45c6-8991-37f59ad744e0", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 2, 41, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"srv-liqqm.gb1.brightbox.com", ContainerID:"1c5b797d84f110b28d96f58db3baab96628c33696e25a994df98ec6659a99a20", Pod:"coredns-66bc5c9577-kzkkp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calica33833be06", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 02:41:57.998483 containerd[1506]: 2025-11-01 02:41:57.922 [INFO][4969] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" Nov 1 02:41:57.998483 containerd[1506]: 2025-11-01 02:41:57.922 [INFO][4969] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" iface="eth0" netns="" Nov 1 02:41:57.998483 containerd[1506]: 2025-11-01 02:41:57.922 [INFO][4969] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" Nov 1 02:41:57.998483 containerd[1506]: 2025-11-01 02:41:57.923 [INFO][4969] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" Nov 1 02:41:57.998483 containerd[1506]: 2025-11-01 02:41:57.976 [INFO][4978] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" HandleID="k8s-pod-network.b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" Workload="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--kzkkp-eth0" Nov 1 02:41:57.998483 containerd[1506]: 2025-11-01 02:41:57.976 [INFO][4978] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 02:41:57.998483 containerd[1506]: 2025-11-01 02:41:57.976 [INFO][4978] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 02:41:57.998483 containerd[1506]: 2025-11-01 02:41:57.989 [WARNING][4978] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" HandleID="k8s-pod-network.b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" Workload="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--kzkkp-eth0" Nov 1 02:41:57.998483 containerd[1506]: 2025-11-01 02:41:57.989 [INFO][4978] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" HandleID="k8s-pod-network.b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" Workload="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--kzkkp-eth0" Nov 1 02:41:57.998483 containerd[1506]: 2025-11-01 02:41:57.992 [INFO][4978] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 02:41:57.998483 containerd[1506]: 2025-11-01 02:41:57.994 [INFO][4969] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" Nov 1 02:41:57.998483 containerd[1506]: time="2025-11-01T02:41:57.997062506Z" level=info msg="TearDown network for sandbox \"b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149\" successfully" Nov 1 02:41:57.998483 containerd[1506]: time="2025-11-01T02:41:57.997096414Z" level=info msg="StopPodSandbox for \"b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149\" returns successfully" Nov 1 02:41:58.000130 containerd[1506]: time="2025-11-01T02:41:57.999379613Z" level=info msg="RemovePodSandbox for \"b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149\"" Nov 1 02:41:58.006520 containerd[1506]: time="2025-11-01T02:41:58.006239978Z" level=info msg="Forcibly stopping sandbox \"b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149\"" Nov 1 02:41:58.166148 containerd[1506]: 2025-11-01 02:41:58.106 [WARNING][4993] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--kzkkp-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"2f47046c-5cfe-45c6-8991-37f59ad744e0", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 2, 41, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-liqqm.gb1.brightbox.com", ContainerID:"1c5b797d84f110b28d96f58db3baab96628c33696e25a994df98ec6659a99a20", Pod:"coredns-66bc5c9577-kzkkp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calica33833be06", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 02:41:58.166148 containerd[1506]: 2025-11-01 02:41:58.107 [INFO][4993] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" Nov 1 02:41:58.166148 containerd[1506]: 2025-11-01 02:41:58.107 [INFO][4993] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" iface="eth0" netns="" Nov 1 02:41:58.166148 containerd[1506]: 2025-11-01 02:41:58.107 [INFO][4993] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" Nov 1 02:41:58.166148 containerd[1506]: 2025-11-01 02:41:58.107 [INFO][4993] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" Nov 1 02:41:58.166148 containerd[1506]: 2025-11-01 02:41:58.147 [INFO][5000] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" HandleID="k8s-pod-network.b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" Workload="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--kzkkp-eth0" Nov 1 02:41:58.166148 containerd[1506]: 2025-11-01 02:41:58.148 [INFO][5000] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 02:41:58.166148 containerd[1506]: 2025-11-01 02:41:58.148 [INFO][5000] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 02:41:58.166148 containerd[1506]: 2025-11-01 02:41:58.159 [WARNING][5000] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" HandleID="k8s-pod-network.b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" Workload="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--kzkkp-eth0" Nov 1 02:41:58.166148 containerd[1506]: 2025-11-01 02:41:58.159 [INFO][5000] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" HandleID="k8s-pod-network.b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" Workload="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--kzkkp-eth0" Nov 1 02:41:58.166148 containerd[1506]: 2025-11-01 02:41:58.161 [INFO][5000] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 02:41:58.166148 containerd[1506]: 2025-11-01 02:41:58.163 [INFO][4993] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149" Nov 1 02:41:58.167222 containerd[1506]: time="2025-11-01T02:41:58.166232106Z" level=info msg="TearDown network for sandbox \"b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149\" successfully" Nov 1 02:41:58.184509 containerd[1506]: time="2025-11-01T02:41:58.184375204Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 02:41:58.184676 containerd[1506]: time="2025-11-01T02:41:58.184534047Z" level=info msg="RemovePodSandbox \"b6604897af74fe2df91ba61e482ed4157277ef8ade1e2b9ee8b604df1321d149\" returns successfully" Nov 1 02:41:58.185681 containerd[1506]: time="2025-11-01T02:41:58.185637192Z" level=info msg="StopPodSandbox for \"b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015\"" Nov 1 02:41:58.300030 containerd[1506]: 2025-11-01 02:41:58.246 [WARNING][5014] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--9qqz8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7a9a07dc-fa9f-46a5-a187-3cb9d24e0c6b", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 2, 41, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-liqqm.gb1.brightbox.com", ContainerID:"ea0be5cb28ad14a28131b805c75206412bf1f4c0ce322de02b7c892ff1861bb8", Pod:"coredns-66bc5c9577-9qqz8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali786ed697162", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 02:41:58.300030 containerd[1506]: 2025-11-01 02:41:58.246 [INFO][5014] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" Nov 1 02:41:58.300030 containerd[1506]: 2025-11-01 02:41:58.246 [INFO][5014] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" iface="eth0" netns="" Nov 1 02:41:58.300030 containerd[1506]: 2025-11-01 02:41:58.246 [INFO][5014] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" Nov 1 02:41:58.300030 containerd[1506]: 2025-11-01 02:41:58.246 [INFO][5014] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" Nov 1 02:41:58.300030 containerd[1506]: 2025-11-01 02:41:58.282 [INFO][5021] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" HandleID="k8s-pod-network.b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" Workload="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--9qqz8-eth0" Nov 1 02:41:58.300030 containerd[1506]: 2025-11-01 02:41:58.282 [INFO][5021] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 02:41:58.300030 containerd[1506]: 2025-11-01 02:41:58.282 [INFO][5021] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 02:41:58.300030 containerd[1506]: 2025-11-01 02:41:58.293 [WARNING][5021] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" HandleID="k8s-pod-network.b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" Workload="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--9qqz8-eth0" Nov 1 02:41:58.300030 containerd[1506]: 2025-11-01 02:41:58.293 [INFO][5021] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" HandleID="k8s-pod-network.b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" Workload="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--9qqz8-eth0" Nov 1 02:41:58.300030 containerd[1506]: 2025-11-01 02:41:58.295 [INFO][5021] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 02:41:58.300030 containerd[1506]: 2025-11-01 02:41:58.297 [INFO][5014] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" Nov 1 02:41:58.300030 containerd[1506]: time="2025-11-01T02:41:58.299785099Z" level=info msg="TearDown network for sandbox \"b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015\" successfully" Nov 1 02:41:58.300030 containerd[1506]: time="2025-11-01T02:41:58.299844987Z" level=info msg="StopPodSandbox for \"b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015\" returns successfully" Nov 1 02:41:58.301930 containerd[1506]: time="2025-11-01T02:41:58.301374318Z" level=info msg="RemovePodSandbox for \"b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015\"" Nov 1 02:41:58.301930 containerd[1506]: time="2025-11-01T02:41:58.301436212Z" level=info msg="Forcibly stopping sandbox \"b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015\"" Nov 1 02:41:58.424600 containerd[1506]: 2025-11-01 02:41:58.356 [WARNING][5035] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--9qqz8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7a9a07dc-fa9f-46a5-a187-3cb9d24e0c6b", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 2, 41, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-liqqm.gb1.brightbox.com", ContainerID:"ea0be5cb28ad14a28131b805c75206412bf1f4c0ce322de02b7c892ff1861bb8", Pod:"coredns-66bc5c9577-9qqz8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.84.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali786ed697162", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 02:41:58.424600 containerd[1506]: 2025-11-01 02:41:58.357 [INFO][5035] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" Nov 1 02:41:58.424600 containerd[1506]: 2025-11-01 02:41:58.357 [INFO][5035] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" iface="eth0" netns="" Nov 1 02:41:58.424600 containerd[1506]: 2025-11-01 02:41:58.357 [INFO][5035] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" Nov 1 02:41:58.424600 containerd[1506]: 2025-11-01 02:41:58.357 [INFO][5035] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" Nov 1 02:41:58.424600 containerd[1506]: 2025-11-01 02:41:58.398 [INFO][5042] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" HandleID="k8s-pod-network.b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" Workload="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--9qqz8-eth0" Nov 1 02:41:58.424600 containerd[1506]: 2025-11-01 02:41:58.398 [INFO][5042] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 02:41:58.424600 containerd[1506]: 2025-11-01 02:41:58.398 [INFO][5042] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 02:41:58.424600 containerd[1506]: 2025-11-01 02:41:58.413 [WARNING][5042] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" HandleID="k8s-pod-network.b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" Workload="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--9qqz8-eth0" Nov 1 02:41:58.424600 containerd[1506]: 2025-11-01 02:41:58.413 [INFO][5042] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" HandleID="k8s-pod-network.b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" Workload="srv--liqqm.gb1.brightbox.com-k8s-coredns--66bc5c9577--9qqz8-eth0" Nov 1 02:41:58.424600 containerd[1506]: 2025-11-01 02:41:58.415 [INFO][5042] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 02:41:58.424600 containerd[1506]: 2025-11-01 02:41:58.421 [INFO][5035] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015" Nov 1 02:41:58.424600 containerd[1506]: time="2025-11-01T02:41:58.424204645Z" level=info msg="TearDown network for sandbox \"b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015\" successfully" Nov 1 02:41:58.432106 containerd[1506]: time="2025-11-01T02:41:58.432029592Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 02:41:58.432195 containerd[1506]: time="2025-11-01T02:41:58.432122244Z" level=info msg="RemovePodSandbox \"b097dfdef69c7f32ae9c2de5349bc776e28b093235b4010622124e0164ba2015\" returns successfully"
Nov 1 02:41:58.432563 containerd[1506]: time="2025-11-01T02:41:58.432532143Z" level=info msg="StopPodSandbox for \"b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67\""
Nov 1 02:41:58.528419 containerd[1506]: 2025-11-01 02:41:58.480 [WARNING][5056] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-whisker--6bd9f556bc--xfhqg-eth0"
Nov 1 02:41:58.528419 containerd[1506]: 2025-11-01 02:41:58.480 [INFO][5056] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67"
Nov 1 02:41:58.528419 containerd[1506]: 2025-11-01 02:41:58.480 [INFO][5056] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67" iface="eth0" netns=""
Nov 1 02:41:58.528419 containerd[1506]: 2025-11-01 02:41:58.480 [INFO][5056] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67"
Nov 1 02:41:58.528419 containerd[1506]: 2025-11-01 02:41:58.480 [INFO][5056] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67"
Nov 1 02:41:58.528419 containerd[1506]: 2025-11-01 02:41:58.511 [INFO][5063] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67" HandleID="k8s-pod-network.b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67" Workload="srv--liqqm.gb1.brightbox.com-k8s-whisker--6bd9f556bc--xfhqg-eth0"
Nov 1 02:41:58.528419 containerd[1506]: 2025-11-01 02:41:58.511 [INFO][5063] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 02:41:58.528419 containerd[1506]: 2025-11-01 02:41:58.511 [INFO][5063] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 02:41:58.528419 containerd[1506]: 2025-11-01 02:41:58.522 [WARNING][5063] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67" HandleID="k8s-pod-network.b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67" Workload="srv--liqqm.gb1.brightbox.com-k8s-whisker--6bd9f556bc--xfhqg-eth0"
Nov 1 02:41:58.528419 containerd[1506]: 2025-11-01 02:41:58.522 [INFO][5063] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67" HandleID="k8s-pod-network.b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67" Workload="srv--liqqm.gb1.brightbox.com-k8s-whisker--6bd9f556bc--xfhqg-eth0"
Nov 1 02:41:58.528419 containerd[1506]: 2025-11-01 02:41:58.524 [INFO][5063] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 02:41:58.528419 containerd[1506]: 2025-11-01 02:41:58.526 [INFO][5056] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67"
Nov 1 02:41:58.529144 containerd[1506]: time="2025-11-01T02:41:58.528501821Z" level=info msg="TearDown network for sandbox \"b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67\" successfully"
Nov 1 02:41:58.529144 containerd[1506]: time="2025-11-01T02:41:58.528538726Z" level=info msg="StopPodSandbox for \"b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67\" returns successfully"
Nov 1 02:41:58.529367 containerd[1506]: time="2025-11-01T02:41:58.529272360Z" level=info msg="RemovePodSandbox for \"b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67\""
Nov 1 02:41:58.529367 containerd[1506]: time="2025-11-01T02:41:58.529314707Z" level=info msg="Forcibly stopping sandbox \"b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67\""
Nov 1 02:41:58.637828 containerd[1506]: 2025-11-01 02:41:58.587 [WARNING][5078] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-whisker--6bd9f556bc--xfhqg-eth0"
Nov 1 02:41:58.637828 containerd[1506]: 2025-11-01 02:41:58.587 [INFO][5078] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67"
Nov 1 02:41:58.637828 containerd[1506]: 2025-11-01 02:41:58.587 [INFO][5078] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67" iface="eth0" netns=""
Nov 1 02:41:58.637828 containerd[1506]: 2025-11-01 02:41:58.587 [INFO][5078] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67"
Nov 1 02:41:58.637828 containerd[1506]: 2025-11-01 02:41:58.587 [INFO][5078] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67"
Nov 1 02:41:58.637828 containerd[1506]: 2025-11-01 02:41:58.620 [INFO][5085] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67" HandleID="k8s-pod-network.b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67" Workload="srv--liqqm.gb1.brightbox.com-k8s-whisker--6bd9f556bc--xfhqg-eth0"
Nov 1 02:41:58.637828 containerd[1506]: 2025-11-01 02:41:58.621 [INFO][5085] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 02:41:58.637828 containerd[1506]: 2025-11-01 02:41:58.621 [INFO][5085] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 02:41:58.637828 containerd[1506]: 2025-11-01 02:41:58.631 [WARNING][5085] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67" HandleID="k8s-pod-network.b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67" Workload="srv--liqqm.gb1.brightbox.com-k8s-whisker--6bd9f556bc--xfhqg-eth0"
Nov 1 02:41:58.637828 containerd[1506]: 2025-11-01 02:41:58.631 [INFO][5085] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67" HandleID="k8s-pod-network.b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67" Workload="srv--liqqm.gb1.brightbox.com-k8s-whisker--6bd9f556bc--xfhqg-eth0"
Nov 1 02:41:58.637828 containerd[1506]: 2025-11-01 02:41:58.633 [INFO][5085] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 02:41:58.637828 containerd[1506]: 2025-11-01 02:41:58.635 [INFO][5078] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67"
Nov 1 02:41:58.639160 containerd[1506]: time="2025-11-01T02:41:58.638600368Z" level=info msg="TearDown network for sandbox \"b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67\" successfully"
Nov 1 02:41:58.644907 containerd[1506]: time="2025-11-01T02:41:58.644840467Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 1 02:41:58.645000 containerd[1506]: time="2025-11-01T02:41:58.644940872Z" level=info msg="RemovePodSandbox \"b7434933558dc9603c291768eacd9f3fc46396d08441fc97bacf121c16bd9c67\" returns successfully"
Nov 1 02:41:58.645950 containerd[1506]: time="2025-11-01T02:41:58.645725512Z" level=info msg="StopPodSandbox for \"faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38\""
Nov 1 02:41:58.790692 containerd[1506]: 2025-11-01 02:41:58.740 [WARNING][5099] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--gnqxq-eth0", GenerateName:"calico-apiserver-6f8dc58755-", Namespace:"calico-apiserver", SelfLink:"", UID:"b426dedc-f58e-4d16-a987-3056f24fa4d7", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 2, 41, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8dc58755", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-liqqm.gb1.brightbox.com", ContainerID:"74a2f64b90b9aebfa74387009e854efa188de9ef47eb8d048c7e0b4b38ff7e1e", Pod:"calico-apiserver-6f8dc58755-gnqxq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.84.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0373ae4e081", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 1 02:41:58.790692 containerd[1506]: 2025-11-01 02:41:58.741 [INFO][5099] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38"
Nov 1 02:41:58.790692 containerd[1506]: 2025-11-01 02:41:58.741 [INFO][5099] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38" iface="eth0" netns=""
Nov 1 02:41:58.790692 containerd[1506]: 2025-11-01 02:41:58.741 [INFO][5099] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38"
Nov 1 02:41:58.790692 containerd[1506]: 2025-11-01 02:41:58.741 [INFO][5099] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38"
Nov 1 02:41:58.790692 containerd[1506]: 2025-11-01 02:41:58.773 [INFO][5107] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38" HandleID="k8s-pod-network.faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--gnqxq-eth0"
Nov 1 02:41:58.790692 containerd[1506]: 2025-11-01 02:41:58.774 [INFO][5107] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 02:41:58.790692 containerd[1506]: 2025-11-01 02:41:58.774 [INFO][5107] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 02:41:58.790692 containerd[1506]: 2025-11-01 02:41:58.783 [WARNING][5107] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38" HandleID="k8s-pod-network.faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--gnqxq-eth0"
Nov 1 02:41:58.790692 containerd[1506]: 2025-11-01 02:41:58.783 [INFO][5107] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38" HandleID="k8s-pod-network.faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--gnqxq-eth0"
Nov 1 02:41:58.790692 containerd[1506]: 2025-11-01 02:41:58.786 [INFO][5107] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 02:41:58.790692 containerd[1506]: 2025-11-01 02:41:58.788 [INFO][5099] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38"
Nov 1 02:41:58.794916 containerd[1506]: time="2025-11-01T02:41:58.790645211Z" level=info msg="TearDown network for sandbox \"faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38\" successfully"
Nov 1 02:41:58.795064 containerd[1506]: time="2025-11-01T02:41:58.794917021Z" level=info msg="StopPodSandbox for \"faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38\" returns successfully"
Nov 1 02:41:58.795645 containerd[1506]: time="2025-11-01T02:41:58.795559417Z" level=info msg="RemovePodSandbox for \"faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38\""
Nov 1 02:41:58.795645 containerd[1506]: time="2025-11-01T02:41:58.795601999Z" level=info msg="Forcibly stopping sandbox \"faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38\""
Nov 1 02:41:58.894732 containerd[1506]: 2025-11-01 02:41:58.848 [WARNING][5121] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--gnqxq-eth0", GenerateName:"calico-apiserver-6f8dc58755-", Namespace:"calico-apiserver", SelfLink:"", UID:"b426dedc-f58e-4d16-a987-3056f24fa4d7", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 2, 41, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8dc58755", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-liqqm.gb1.brightbox.com", ContainerID:"74a2f64b90b9aebfa74387009e854efa188de9ef47eb8d048c7e0b4b38ff7e1e", Pod:"calico-apiserver-6f8dc58755-gnqxq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.84.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0373ae4e081", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 1 02:41:58.894732 containerd[1506]: 2025-11-01 02:41:58.848 [INFO][5121] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38"
Nov 1 02:41:58.894732 containerd[1506]: 2025-11-01 02:41:58.848 [INFO][5121] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38" iface="eth0" netns=""
Nov 1 02:41:58.894732 containerd[1506]: 2025-11-01 02:41:58.848 [INFO][5121] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38"
Nov 1 02:41:58.894732 containerd[1506]: 2025-11-01 02:41:58.848 [INFO][5121] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38"
Nov 1 02:41:58.894732 containerd[1506]: 2025-11-01 02:41:58.878 [INFO][5128] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38" HandleID="k8s-pod-network.faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--gnqxq-eth0"
Nov 1 02:41:58.894732 containerd[1506]: 2025-11-01 02:41:58.878 [INFO][5128] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 02:41:58.894732 containerd[1506]: 2025-11-01 02:41:58.878 [INFO][5128] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 02:41:58.894732 containerd[1506]: 2025-11-01 02:41:58.888 [WARNING][5128] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38" HandleID="k8s-pod-network.faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--gnqxq-eth0"
Nov 1 02:41:58.894732 containerd[1506]: 2025-11-01 02:41:58.888 [INFO][5128] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38" HandleID="k8s-pod-network.faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--gnqxq-eth0"
Nov 1 02:41:58.894732 containerd[1506]: 2025-11-01 02:41:58.890 [INFO][5128] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 02:41:58.894732 containerd[1506]: 2025-11-01 02:41:58.892 [INFO][5121] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38"
Nov 1 02:41:58.897188 containerd[1506]: time="2025-11-01T02:41:58.894780750Z" level=info msg="TearDown network for sandbox \"faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38\" successfully"
Nov 1 02:41:58.898580 containerd[1506]: time="2025-11-01T02:41:58.898535933Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 1 02:41:58.898929 containerd[1506]: time="2025-11-01T02:41:58.898611396Z" level=info msg="RemovePodSandbox \"faced5e57627bed3afde42263f654210c6dfacbbca14ed10280f9236eebe5f38\" returns successfully"
Nov 1 02:41:58.899740 containerd[1506]: time="2025-11-01T02:41:58.899680817Z" level=info msg="StopPodSandbox for \"2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7\""
Nov 1 02:41:59.002789 containerd[1506]: 2025-11-01 02:41:58.955 [WARNING][5142] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--liqqm.gb1.brightbox.com-k8s-csi--node--driver--gvm6v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2db811c5-1134-445e-9e39-ac0e7ee1b427", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 2, 41, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-liqqm.gb1.brightbox.com", ContainerID:"9ed979c9da811954b62984587ee58dc14bd71050fc24c0b7179a378b03b5ba96", Pod:"csi-node-driver-gvm6v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.84.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibffa4db4070", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 1 02:41:59.002789 containerd[1506]: 2025-11-01 02:41:58.955 [INFO][5142] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7"
Nov 1 02:41:59.002789 containerd[1506]: 2025-11-01 02:41:58.955 [INFO][5142] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7" iface="eth0" netns=""
Nov 1 02:41:59.002789 containerd[1506]: 2025-11-01 02:41:58.955 [INFO][5142] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7"
Nov 1 02:41:59.002789 containerd[1506]: 2025-11-01 02:41:58.955 [INFO][5142] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7"
Nov 1 02:41:59.002789 containerd[1506]: 2025-11-01 02:41:58.986 [INFO][5149] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7" HandleID="k8s-pod-network.2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7" Workload="srv--liqqm.gb1.brightbox.com-k8s-csi--node--driver--gvm6v-eth0"
Nov 1 02:41:59.002789 containerd[1506]: 2025-11-01 02:41:58.986 [INFO][5149] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 02:41:59.002789 containerd[1506]: 2025-11-01 02:41:58.986 [INFO][5149] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 02:41:59.002789 containerd[1506]: 2025-11-01 02:41:58.996 [WARNING][5149] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7" HandleID="k8s-pod-network.2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7" Workload="srv--liqqm.gb1.brightbox.com-k8s-csi--node--driver--gvm6v-eth0"
Nov 1 02:41:59.002789 containerd[1506]: 2025-11-01 02:41:58.996 [INFO][5149] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7" HandleID="k8s-pod-network.2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7" Workload="srv--liqqm.gb1.brightbox.com-k8s-csi--node--driver--gvm6v-eth0"
Nov 1 02:41:59.002789 containerd[1506]: 2025-11-01 02:41:58.998 [INFO][5149] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 02:41:59.002789 containerd[1506]: 2025-11-01 02:41:59.000 [INFO][5142] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7"
Nov 1 02:41:59.002789 containerd[1506]: time="2025-11-01T02:41:59.002594680Z" level=info msg="TearDown network for sandbox \"2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7\" successfully"
Nov 1 02:41:59.002789 containerd[1506]: time="2025-11-01T02:41:59.002643945Z" level=info msg="StopPodSandbox for \"2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7\" returns successfully"
Nov 1 02:41:59.003707 containerd[1506]: time="2025-11-01T02:41:59.003317878Z" level=info msg="RemovePodSandbox for \"2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7\""
Nov 1 02:41:59.003707 containerd[1506]: time="2025-11-01T02:41:59.003376852Z" level=info msg="Forcibly stopping sandbox \"2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7\""
Nov 1 02:41:59.160683 containerd[1506]: 2025-11-01 02:41:59.101 [WARNING][5163] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--liqqm.gb1.brightbox.com-k8s-csi--node--driver--gvm6v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2db811c5-1134-445e-9e39-ac0e7ee1b427", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 2, 41, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-liqqm.gb1.brightbox.com", ContainerID:"9ed979c9da811954b62984587ee58dc14bd71050fc24c0b7179a378b03b5ba96", Pod:"csi-node-driver-gvm6v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.84.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibffa4db4070", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 1 02:41:59.160683 containerd[1506]: 2025-11-01 02:41:59.101 [INFO][5163] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7"
Nov 1 02:41:59.160683 containerd[1506]: 2025-11-01 02:41:59.103 [INFO][5163] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7" iface="eth0" netns=""
Nov 1 02:41:59.160683 containerd[1506]: 2025-11-01 02:41:59.103 [INFO][5163] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7"
Nov 1 02:41:59.160683 containerd[1506]: 2025-11-01 02:41:59.103 [INFO][5163] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7"
Nov 1 02:41:59.160683 containerd[1506]: 2025-11-01 02:41:59.144 [INFO][5171] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7" HandleID="k8s-pod-network.2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7" Workload="srv--liqqm.gb1.brightbox.com-k8s-csi--node--driver--gvm6v-eth0"
Nov 1 02:41:59.160683 containerd[1506]: 2025-11-01 02:41:59.145 [INFO][5171] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 02:41:59.160683 containerd[1506]: 2025-11-01 02:41:59.145 [INFO][5171] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 02:41:59.160683 containerd[1506]: 2025-11-01 02:41:59.154 [WARNING][5171] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7" HandleID="k8s-pod-network.2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7" Workload="srv--liqqm.gb1.brightbox.com-k8s-csi--node--driver--gvm6v-eth0"
Nov 1 02:41:59.160683 containerd[1506]: 2025-11-01 02:41:59.154 [INFO][5171] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7" HandleID="k8s-pod-network.2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7" Workload="srv--liqqm.gb1.brightbox.com-k8s-csi--node--driver--gvm6v-eth0"
Nov 1 02:41:59.160683 containerd[1506]: 2025-11-01 02:41:59.156 [INFO][5171] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 02:41:59.160683 containerd[1506]: 2025-11-01 02:41:59.158 [INFO][5163] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7"
Nov 1 02:41:59.162862 containerd[1506]: time="2025-11-01T02:41:59.161642511Z" level=info msg="TearDown network for sandbox \"2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7\" successfully"
Nov 1 02:41:59.166175 containerd[1506]: time="2025-11-01T02:41:59.165968500Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 1 02:41:59.166175 containerd[1506]: time="2025-11-01T02:41:59.166051078Z" level=info msg="RemovePodSandbox \"2f10ac3faae27d777ecc3ac6720ea46e454eeabfc861b32f6188bc23a33dc2d7\" returns successfully"
Nov 1 02:41:59.167030 containerd[1506]: time="2025-11-01T02:41:59.166984584Z" level=info msg="StopPodSandbox for \"1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23\""
Nov 1 02:41:59.270645 containerd[1506]: 2025-11-01 02:41:59.216 [WARNING][5186] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--bvnkc-eth0", GenerateName:"calico-apiserver-6f8dc58755-", Namespace:"calico-apiserver", SelfLink:"", UID:"a1c04a34-b552-49b3-a9dc-198853e53df9", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 2, 41, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8dc58755", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-liqqm.gb1.brightbox.com", ContainerID:"5a20732c7d474673081804fdf64034f452840e16cd38b85ab29a661767e8f1f5", Pod:"calico-apiserver-6f8dc58755-bvnkc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.84.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1de657e6e56", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 1 02:41:59.270645 containerd[1506]: 2025-11-01 02:41:59.216 [INFO][5186] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23"
Nov 1 02:41:59.270645 containerd[1506]: 2025-11-01 02:41:59.216 [INFO][5186] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" iface="eth0" netns=""
Nov 1 02:41:59.270645 containerd[1506]: 2025-11-01 02:41:59.216 [INFO][5186] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23"
Nov 1 02:41:59.270645 containerd[1506]: 2025-11-01 02:41:59.216 [INFO][5186] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23"
Nov 1 02:41:59.270645 containerd[1506]: 2025-11-01 02:41:59.251 [INFO][5194] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" HandleID="k8s-pod-network.1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--bvnkc-eth0"
Nov 1 02:41:59.270645 containerd[1506]: 2025-11-01 02:41:59.252 [INFO][5194] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 02:41:59.270645 containerd[1506]: 2025-11-01 02:41:59.252 [INFO][5194] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 02:41:59.270645 containerd[1506]: 2025-11-01 02:41:59.263 [WARNING][5194] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" HandleID="k8s-pod-network.1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--bvnkc-eth0"
Nov 1 02:41:59.270645 containerd[1506]: 2025-11-01 02:41:59.263 [INFO][5194] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" HandleID="k8s-pod-network.1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--bvnkc-eth0"
Nov 1 02:41:59.270645 containerd[1506]: 2025-11-01 02:41:59.265 [INFO][5194] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 02:41:59.270645 containerd[1506]: 2025-11-01 02:41:59.267 [INFO][5186] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23"
Nov 1 02:41:59.270645 containerd[1506]: time="2025-11-01T02:41:59.270030821Z" level=info msg="TearDown network for sandbox \"1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23\" successfully"
Nov 1 02:41:59.270645 containerd[1506]: time="2025-11-01T02:41:59.270075715Z" level=info msg="StopPodSandbox for \"1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23\" returns successfully"
Nov 1 02:41:59.274519 containerd[1506]: time="2025-11-01T02:41:59.272222443Z" level=info msg="RemovePodSandbox for \"1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23\""
Nov 1 02:41:59.274519 containerd[1506]: time="2025-11-01T02:41:59.272270983Z" level=info msg="Forcibly stopping sandbox \"1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23\""
Nov 1 02:41:59.381048 containerd[1506]: 2025-11-01 02:41:59.333 [WARNING][5208] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--bvnkc-eth0", GenerateName:"calico-apiserver-6f8dc58755-", Namespace:"calico-apiserver", SelfLink:"", UID:"a1c04a34-b552-49b3-a9dc-198853e53df9", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 2, 41, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f8dc58755", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-liqqm.gb1.brightbox.com", ContainerID:"5a20732c7d474673081804fdf64034f452840e16cd38b85ab29a661767e8f1f5", Pod:"calico-apiserver-6f8dc58755-bvnkc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.84.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1de657e6e56", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 1 02:41:59.381048 containerd[1506]: 2025-11-01 02:41:59.333 [INFO][5208] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23"
Nov 1 02:41:59.381048 containerd[1506]: 2025-11-01 02:41:59.333 [INFO][5208] cni-plugin/dataplane_linux.go 555:
CleanUpNamespace called with no netns name, ignoring. ContainerID="1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" iface="eth0" netns="" Nov 1 02:41:59.381048 containerd[1506]: 2025-11-01 02:41:59.334 [INFO][5208] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" Nov 1 02:41:59.381048 containerd[1506]: 2025-11-01 02:41:59.334 [INFO][5208] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" Nov 1 02:41:59.381048 containerd[1506]: 2025-11-01 02:41:59.365 [INFO][5215] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" HandleID="k8s-pod-network.1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--bvnkc-eth0" Nov 1 02:41:59.381048 containerd[1506]: 2025-11-01 02:41:59.366 [INFO][5215] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 02:41:59.381048 containerd[1506]: 2025-11-01 02:41:59.366 [INFO][5215] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 02:41:59.381048 containerd[1506]: 2025-11-01 02:41:59.375 [WARNING][5215] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" HandleID="k8s-pod-network.1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--bvnkc-eth0" Nov 1 02:41:59.381048 containerd[1506]: 2025-11-01 02:41:59.375 [INFO][5215] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" HandleID="k8s-pod-network.1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--apiserver--6f8dc58755--bvnkc-eth0" Nov 1 02:41:59.381048 containerd[1506]: 2025-11-01 02:41:59.377 [INFO][5215] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 02:41:59.381048 containerd[1506]: 2025-11-01 02:41:59.379 [INFO][5208] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23" Nov 1 02:41:59.382170 containerd[1506]: time="2025-11-01T02:41:59.381101134Z" level=info msg="TearDown network for sandbox \"1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23\" successfully" Nov 1 02:41:59.386099 containerd[1506]: time="2025-11-01T02:41:59.386055780Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 02:41:59.386324 containerd[1506]: time="2025-11-01T02:41:59.386118478Z" level=info msg="RemovePodSandbox \"1bfcd9a83fe351cb6e8fa0cc25f70ddc0da081615ecd676f1463c493312b0b23\" returns successfully" Nov 1 02:41:59.387688 containerd[1506]: time="2025-11-01T02:41:59.387270193Z" level=info msg="StopPodSandbox for \"060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732\"" Nov 1 02:41:59.506295 containerd[1506]: 2025-11-01 02:41:59.451 [WARNING][5229] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--liqqm.gb1.brightbox.com-k8s-calico--kube--controllers--754cd6d684--rxmrt-eth0", GenerateName:"calico-kube-controllers-754cd6d684-", Namespace:"calico-system", SelfLink:"", UID:"efbd1db3-4d1b-4800-b03d-ce570a8bfb0d", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 2, 41, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"754cd6d684", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-liqqm.gb1.brightbox.com", ContainerID:"df58e45e4600d315f7c195f1e511239581a08cf618e53d77db03f7fdd2dd6d8a", Pod:"calico-kube-controllers-754cd6d684-rxmrt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.84.6/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali791b4a20c49", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 02:41:59.506295 containerd[1506]: 2025-11-01 02:41:59.452 [INFO][5229] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" Nov 1 02:41:59.506295 containerd[1506]: 2025-11-01 02:41:59.452 [INFO][5229] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" iface="eth0" netns="" Nov 1 02:41:59.506295 containerd[1506]: 2025-11-01 02:41:59.452 [INFO][5229] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" Nov 1 02:41:59.506295 containerd[1506]: 2025-11-01 02:41:59.452 [INFO][5229] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" Nov 1 02:41:59.506295 containerd[1506]: 2025-11-01 02:41:59.484 [INFO][5236] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" HandleID="k8s-pod-network.060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--kube--controllers--754cd6d684--rxmrt-eth0" Nov 1 02:41:59.506295 containerd[1506]: 2025-11-01 02:41:59.484 [INFO][5236] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 02:41:59.506295 containerd[1506]: 2025-11-01 02:41:59.485 [INFO][5236] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 02:41:59.506295 containerd[1506]: 2025-11-01 02:41:59.498 [WARNING][5236] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" HandleID="k8s-pod-network.060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--kube--controllers--754cd6d684--rxmrt-eth0" Nov 1 02:41:59.506295 containerd[1506]: 2025-11-01 02:41:59.498 [INFO][5236] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" HandleID="k8s-pod-network.060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--kube--controllers--754cd6d684--rxmrt-eth0" Nov 1 02:41:59.506295 containerd[1506]: 2025-11-01 02:41:59.500 [INFO][5236] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 02:41:59.506295 containerd[1506]: 2025-11-01 02:41:59.502 [INFO][5229] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" Nov 1 02:41:59.506295 containerd[1506]: time="2025-11-01T02:41:59.505723647Z" level=info msg="TearDown network for sandbox \"060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732\" successfully" Nov 1 02:41:59.506295 containerd[1506]: time="2025-11-01T02:41:59.505761333Z" level=info msg="StopPodSandbox for \"060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732\" returns successfully" Nov 1 02:41:59.507227 containerd[1506]: time="2025-11-01T02:41:59.506414608Z" level=info msg="RemovePodSandbox for \"060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732\"" Nov 1 02:41:59.507227 containerd[1506]: time="2025-11-01T02:41:59.506479768Z" level=info msg="Forcibly stopping sandbox \"060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732\"" Nov 1 02:41:59.617505 containerd[1506]: 2025-11-01 02:41:59.561 [WARNING][5251] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--liqqm.gb1.brightbox.com-k8s-calico--kube--controllers--754cd6d684--rxmrt-eth0", GenerateName:"calico-kube-controllers-754cd6d684-", Namespace:"calico-system", SelfLink:"", UID:"efbd1db3-4d1b-4800-b03d-ce570a8bfb0d", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 2, 41, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"754cd6d684", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-liqqm.gb1.brightbox.com", ContainerID:"df58e45e4600d315f7c195f1e511239581a08cf618e53d77db03f7fdd2dd6d8a", Pod:"calico-kube-controllers-754cd6d684-rxmrt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.84.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali791b4a20c49", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 02:41:59.617505 containerd[1506]: 2025-11-01 02:41:59.562 [INFO][5251] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" Nov 1 02:41:59.617505 containerd[1506]: 2025-11-01 02:41:59.562 [INFO][5251] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" iface="eth0" netns="" Nov 1 02:41:59.617505 containerd[1506]: 2025-11-01 02:41:59.562 [INFO][5251] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" Nov 1 02:41:59.617505 containerd[1506]: 2025-11-01 02:41:59.562 [INFO][5251] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" Nov 1 02:41:59.617505 containerd[1506]: 2025-11-01 02:41:59.598 [INFO][5258] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" HandleID="k8s-pod-network.060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--kube--controllers--754cd6d684--rxmrt-eth0" Nov 1 02:41:59.617505 containerd[1506]: 2025-11-01 02:41:59.599 [INFO][5258] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 02:41:59.617505 containerd[1506]: 2025-11-01 02:41:59.599 [INFO][5258] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 02:41:59.617505 containerd[1506]: 2025-11-01 02:41:59.609 [WARNING][5258] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" HandleID="k8s-pod-network.060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--kube--controllers--754cd6d684--rxmrt-eth0" Nov 1 02:41:59.617505 containerd[1506]: 2025-11-01 02:41:59.609 [INFO][5258] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" HandleID="k8s-pod-network.060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" Workload="srv--liqqm.gb1.brightbox.com-k8s-calico--kube--controllers--754cd6d684--rxmrt-eth0" Nov 1 02:41:59.617505 containerd[1506]: 2025-11-01 02:41:59.611 [INFO][5258] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 02:41:59.617505 containerd[1506]: 2025-11-01 02:41:59.613 [INFO][5251] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732" Nov 1 02:41:59.617505 containerd[1506]: time="2025-11-01T02:41:59.615820167Z" level=info msg="TearDown network for sandbox \"060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732\" successfully" Nov 1 02:41:59.620222 containerd[1506]: time="2025-11-01T02:41:59.620065064Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 02:41:59.620222 containerd[1506]: time="2025-11-01T02:41:59.620132239Z" level=info msg="RemovePodSandbox \"060e09dcf02197dd7033544a46ae72b7c1dc5806f418e7c96b51ac6acbd48732\" returns successfully" Nov 1 02:42:02.857540 containerd[1506]: time="2025-11-01T02:42:02.857464942Z" level=info msg="StopPodSandbox for \"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c\"" Nov 1 02:42:03.027664 containerd[1506]: 2025-11-01 02:42:02.958 [INFO][5281] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" Nov 1 02:42:03.027664 containerd[1506]: 2025-11-01 02:42:02.958 [INFO][5281] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" iface="eth0" netns="/var/run/netns/cni-8343adfb-2cd8-7496-bd9e-d108210ca2fd" Nov 1 02:42:03.027664 containerd[1506]: 2025-11-01 02:42:02.962 [INFO][5281] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" iface="eth0" netns="/var/run/netns/cni-8343adfb-2cd8-7496-bd9e-d108210ca2fd" Nov 1 02:42:03.027664 containerd[1506]: 2025-11-01 02:42:02.962 [INFO][5281] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" iface="eth0" netns="/var/run/netns/cni-8343adfb-2cd8-7496-bd9e-d108210ca2fd" Nov 1 02:42:03.027664 containerd[1506]: 2025-11-01 02:42:02.962 [INFO][5281] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" Nov 1 02:42:03.027664 containerd[1506]: 2025-11-01 02:42:02.962 [INFO][5281] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" Nov 1 02:42:03.027664 containerd[1506]: 2025-11-01 02:42:03.007 [INFO][5288] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" HandleID="k8s-pod-network.e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" Workload="srv--liqqm.gb1.brightbox.com-k8s-goldmane--7c778bb748--t726m-eth0" Nov 1 02:42:03.027664 containerd[1506]: 2025-11-01 02:42:03.008 [INFO][5288] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 02:42:03.027664 containerd[1506]: 2025-11-01 02:42:03.008 [INFO][5288] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 02:42:03.027664 containerd[1506]: 2025-11-01 02:42:03.020 [WARNING][5288] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" HandleID="k8s-pod-network.e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" Workload="srv--liqqm.gb1.brightbox.com-k8s-goldmane--7c778bb748--t726m-eth0" Nov 1 02:42:03.027664 containerd[1506]: 2025-11-01 02:42:03.020 [INFO][5288] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" HandleID="k8s-pod-network.e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" Workload="srv--liqqm.gb1.brightbox.com-k8s-goldmane--7c778bb748--t726m-eth0" Nov 1 02:42:03.027664 containerd[1506]: 2025-11-01 02:42:03.022 [INFO][5288] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 02:42:03.027664 containerd[1506]: 2025-11-01 02:42:03.025 [INFO][5281] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" Nov 1 02:42:03.031283 containerd[1506]: time="2025-11-01T02:42:03.029605216Z" level=info msg="TearDown network for sandbox \"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c\" successfully" Nov 1 02:42:03.031283 containerd[1506]: time="2025-11-01T02:42:03.029660756Z" level=info msg="StopPodSandbox for \"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c\" returns successfully" Nov 1 02:42:03.035805 containerd[1506]: time="2025-11-01T02:42:03.035769426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-t726m,Uid:e1215547-a56a-4c57-957b-4ea0376bfb33,Namespace:calico-system,Attempt:1,}" Nov 1 02:42:03.037212 systemd[1]: run-netns-cni\x2d8343adfb\x2d2cd8\x2d7496\x2dbd9e\x2dd108210ca2fd.mount: Deactivated successfully. 
Nov 1 02:42:03.230662 systemd-networkd[1431]: cali75ec25b8629: Link UP Nov 1 02:42:03.231081 systemd-networkd[1431]: cali75ec25b8629: Gained carrier Nov 1 02:42:03.257509 containerd[1506]: 2025-11-01 02:42:03.108 [INFO][5299] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--liqqm.gb1.brightbox.com-k8s-goldmane--7c778bb748--t726m-eth0 goldmane-7c778bb748- calico-system e1215547-a56a-4c57-957b-4ea0376bfb33 1100 0 2025-11-01 02:41:22 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-liqqm.gb1.brightbox.com goldmane-7c778bb748-t726m eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali75ec25b8629 [] [] }} ContainerID="f12e79332a01391e07707609ca5ea9dccb586d886c809a4fd8457431a670f545" Namespace="calico-system" Pod="goldmane-7c778bb748-t726m" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-goldmane--7c778bb748--t726m-" Nov 1 02:42:03.257509 containerd[1506]: 2025-11-01 02:42:03.108 [INFO][5299] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f12e79332a01391e07707609ca5ea9dccb586d886c809a4fd8457431a670f545" Namespace="calico-system" Pod="goldmane-7c778bb748-t726m" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-goldmane--7c778bb748--t726m-eth0" Nov 1 02:42:03.257509 containerd[1506]: 2025-11-01 02:42:03.163 [INFO][5306] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f12e79332a01391e07707609ca5ea9dccb586d886c809a4fd8457431a670f545" HandleID="k8s-pod-network.f12e79332a01391e07707609ca5ea9dccb586d886c809a4fd8457431a670f545" Workload="srv--liqqm.gb1.brightbox.com-k8s-goldmane--7c778bb748--t726m-eth0" Nov 1 02:42:03.257509 containerd[1506]: 2025-11-01 02:42:03.163 [INFO][5306] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="f12e79332a01391e07707609ca5ea9dccb586d886c809a4fd8457431a670f545" HandleID="k8s-pod-network.f12e79332a01391e07707609ca5ea9dccb586d886c809a4fd8457431a670f545" Workload="srv--liqqm.gb1.brightbox.com-k8s-goldmane--7c778bb748--t726m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4a50), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-liqqm.gb1.brightbox.com", "pod":"goldmane-7c778bb748-t726m", "timestamp":"2025-11-01 02:42:03.16346051 +0000 UTC"}, Hostname:"srv-liqqm.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 02:42:03.257509 containerd[1506]: 2025-11-01 02:42:03.163 [INFO][5306] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 02:42:03.257509 containerd[1506]: 2025-11-01 02:42:03.163 [INFO][5306] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 02:42:03.257509 containerd[1506]: 2025-11-01 02:42:03.164 [INFO][5306] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-liqqm.gb1.brightbox.com' Nov 1 02:42:03.257509 containerd[1506]: 2025-11-01 02:42:03.176 [INFO][5306] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f12e79332a01391e07707609ca5ea9dccb586d886c809a4fd8457431a670f545" host="srv-liqqm.gb1.brightbox.com" Nov 1 02:42:03.257509 containerd[1506]: 2025-11-01 02:42:03.184 [INFO][5306] ipam/ipam.go 394: Looking up existing affinities for host host="srv-liqqm.gb1.brightbox.com" Nov 1 02:42:03.257509 containerd[1506]: 2025-11-01 02:42:03.192 [INFO][5306] ipam/ipam.go 511: Trying affinity for 192.168.84.0/26 host="srv-liqqm.gb1.brightbox.com" Nov 1 02:42:03.257509 containerd[1506]: 2025-11-01 02:42:03.195 [INFO][5306] ipam/ipam.go 158: Attempting to load block cidr=192.168.84.0/26 host="srv-liqqm.gb1.brightbox.com" Nov 1 02:42:03.257509 containerd[1506]: 2025-11-01 02:42:03.198 [INFO][5306] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.84.0/26 host="srv-liqqm.gb1.brightbox.com" Nov 1 02:42:03.257509 containerd[1506]: 2025-11-01 02:42:03.198 [INFO][5306] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.84.0/26 handle="k8s-pod-network.f12e79332a01391e07707609ca5ea9dccb586d886c809a4fd8457431a670f545" host="srv-liqqm.gb1.brightbox.com" Nov 1 02:42:03.257509 containerd[1506]: 2025-11-01 02:42:03.200 [INFO][5306] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f12e79332a01391e07707609ca5ea9dccb586d886c809a4fd8457431a670f545 Nov 1 02:42:03.257509 containerd[1506]: 2025-11-01 02:42:03.206 [INFO][5306] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.84.0/26 handle="k8s-pod-network.f12e79332a01391e07707609ca5ea9dccb586d886c809a4fd8457431a670f545" host="srv-liqqm.gb1.brightbox.com" Nov 1 02:42:03.257509 containerd[1506]: 2025-11-01 02:42:03.217 [INFO][5306] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.84.8/26] block=192.168.84.0/26 handle="k8s-pod-network.f12e79332a01391e07707609ca5ea9dccb586d886c809a4fd8457431a670f545" host="srv-liqqm.gb1.brightbox.com" Nov 1 02:42:03.257509 containerd[1506]: 2025-11-01 02:42:03.217 [INFO][5306] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.84.8/26] handle="k8s-pod-network.f12e79332a01391e07707609ca5ea9dccb586d886c809a4fd8457431a670f545" host="srv-liqqm.gb1.brightbox.com" Nov 1 02:42:03.257509 containerd[1506]: 2025-11-01 02:42:03.217 [INFO][5306] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 02:42:03.257509 containerd[1506]: 2025-11-01 02:42:03.217 [INFO][5306] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.84.8/26] IPv6=[] ContainerID="f12e79332a01391e07707609ca5ea9dccb586d886c809a4fd8457431a670f545" HandleID="k8s-pod-network.f12e79332a01391e07707609ca5ea9dccb586d886c809a4fd8457431a670f545" Workload="srv--liqqm.gb1.brightbox.com-k8s-goldmane--7c778bb748--t726m-eth0" Nov 1 02:42:03.259202 containerd[1506]: 2025-11-01 02:42:03.221 [INFO][5299] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f12e79332a01391e07707609ca5ea9dccb586d886c809a4fd8457431a670f545" Namespace="calico-system" Pod="goldmane-7c778bb748-t726m" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-goldmane--7c778bb748--t726m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--liqqm.gb1.brightbox.com-k8s-goldmane--7c778bb748--t726m-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"e1215547-a56a-4c57-957b-4ea0376bfb33", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 2, 41, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-liqqm.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-7c778bb748-t726m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.84.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali75ec25b8629", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 02:42:03.259202 containerd[1506]: 2025-11-01 02:42:03.221 [INFO][5299] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.84.8/32] ContainerID="f12e79332a01391e07707609ca5ea9dccb586d886c809a4fd8457431a670f545" Namespace="calico-system" Pod="goldmane-7c778bb748-t726m" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-goldmane--7c778bb748--t726m-eth0" Nov 1 02:42:03.259202 containerd[1506]: 2025-11-01 02:42:03.222 [INFO][5299] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali75ec25b8629 ContainerID="f12e79332a01391e07707609ca5ea9dccb586d886c809a4fd8457431a670f545" Namespace="calico-system" Pod="goldmane-7c778bb748-t726m" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-goldmane--7c778bb748--t726m-eth0" Nov 1 02:42:03.259202 containerd[1506]: 2025-11-01 02:42:03.230 [INFO][5299] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f12e79332a01391e07707609ca5ea9dccb586d886c809a4fd8457431a670f545" Namespace="calico-system" Pod="goldmane-7c778bb748-t726m" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-goldmane--7c778bb748--t726m-eth0" Nov 1 02:42:03.259202 containerd[1506]: 2025-11-01 02:42:03.231 [INFO][5299] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f12e79332a01391e07707609ca5ea9dccb586d886c809a4fd8457431a670f545" Namespace="calico-system" Pod="goldmane-7c778bb748-t726m" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-goldmane--7c778bb748--t726m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--liqqm.gb1.brightbox.com-k8s-goldmane--7c778bb748--t726m-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"e1215547-a56a-4c57-957b-4ea0376bfb33", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 2, 41, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-liqqm.gb1.brightbox.com", ContainerID:"f12e79332a01391e07707609ca5ea9dccb586d886c809a4fd8457431a670f545", Pod:"goldmane-7c778bb748-t726m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.84.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali75ec25b8629", MAC:"36:e2:bf:67:d4:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 02:42:03.259202 containerd[1506]: 2025-11-01 02:42:03.250 [INFO][5299] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="f12e79332a01391e07707609ca5ea9dccb586d886c809a4fd8457431a670f545" Namespace="calico-system" Pod="goldmane-7c778bb748-t726m" WorkloadEndpoint="srv--liqqm.gb1.brightbox.com-k8s-goldmane--7c778bb748--t726m-eth0" Nov 1 02:42:03.335858 containerd[1506]: time="2025-11-01T02:42:03.334421708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 02:42:03.335858 containerd[1506]: time="2025-11-01T02:42:03.335673013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 02:42:03.335858 containerd[1506]: time="2025-11-01T02:42:03.335692656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:42:03.338013 containerd[1506]: time="2025-11-01T02:42:03.337619163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:42:03.422536 systemd[1]: Started cri-containerd-f12e79332a01391e07707609ca5ea9dccb586d886c809a4fd8457431a670f545.scope - libcontainer container f12e79332a01391e07707609ca5ea9dccb586d886c809a4fd8457431a670f545. 
Nov 1 02:42:03.499287 containerd[1506]: time="2025-11-01T02:42:03.499078457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-t726m,Uid:e1215547-a56a-4c57-957b-4ea0376bfb33,Namespace:calico-system,Attempt:1,} returns sandbox id \"f12e79332a01391e07707609ca5ea9dccb586d886c809a4fd8457431a670f545\"" Nov 1 02:42:03.504151 containerd[1506]: time="2025-11-01T02:42:03.503944129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 02:42:03.811293 containerd[1506]: time="2025-11-01T02:42:03.811066609Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:42:03.813845 containerd[1506]: time="2025-11-01T02:42:03.813772225Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 02:42:03.814720 containerd[1506]: time="2025-11-01T02:42:03.813937660Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 02:42:03.814889 kubelet[2684]: E1101 02:42:03.814238 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 02:42:03.814889 kubelet[2684]: E1101 02:42:03.814337 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 02:42:03.814889 kubelet[2684]: E1101 02:42:03.814547 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-t726m_calico-system(e1215547-a56a-4c57-957b-4ea0376bfb33): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 02:42:03.814889 kubelet[2684]: E1101 02:42:03.814624 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t726m" podUID="e1215547-a56a-4c57-957b-4ea0376bfb33" Nov 1 02:42:04.032103 systemd[1]: run-containerd-runc-k8s.io-f12e79332a01391e07707609ca5ea9dccb586d886c809a4fd8457431a670f545-runc.Z6lEFR.mount: Deactivated successfully. 
Nov 1 02:42:04.469916 kubelet[2684]: E1101 02:42:04.469591 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t726m" podUID="e1215547-a56a-4c57-957b-4ea0376bfb33" Nov 1 02:42:05.158744 systemd-networkd[1431]: cali75ec25b8629: Gained IPv6LL Nov 1 02:42:05.472248 kubelet[2684]: E1101 02:42:05.472025 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t726m" podUID="e1215547-a56a-4c57-957b-4ea0376bfb33" Nov 1 02:42:05.861766 containerd[1506]: time="2025-11-01T02:42:05.859664436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 02:42:06.172890 containerd[1506]: time="2025-11-01T02:42:06.172244032Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:42:06.173792 containerd[1506]: time="2025-11-01T02:42:06.173679503Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 02:42:06.174019 containerd[1506]: time="2025-11-01T02:42:06.173750537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 02:42:06.174224 kubelet[2684]: E1101 02:42:06.174146 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 02:42:06.174224 kubelet[2684]: E1101 02:42:06.174218 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 02:42:06.174372 kubelet[2684]: E1101 02:42:06.174326 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6f8dc58755-bvnkc_calico-apiserver(a1c04a34-b552-49b3-a9dc-198853e53df9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 02:42:06.174473 kubelet[2684]: E1101 02:42:06.174392 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8dc58755-bvnkc" podUID="a1c04a34-b552-49b3-a9dc-198853e53df9" Nov 1 02:42:06.860558 containerd[1506]: time="2025-11-01T02:42:06.860494496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 02:42:07.194792 containerd[1506]: time="2025-11-01T02:42:07.194299467Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:42:07.195916 containerd[1506]: time="2025-11-01T02:42:07.195821240Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 02:42:07.196077 containerd[1506]: time="2025-11-01T02:42:07.195836301Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 02:42:07.197398 kubelet[2684]: E1101 02:42:07.196498 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 02:42:07.197398 kubelet[2684]: E1101 02:42:07.196592 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 02:42:07.197398 kubelet[2684]: E1101 02:42:07.196718 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod 
whisker-564f484f47-dnhd7_calico-system(0e9edcbd-6bb9-40da-89e3-329c8ee490a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 02:42:07.199418 containerd[1506]: time="2025-11-01T02:42:07.199107841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 02:42:07.534560 containerd[1506]: time="2025-11-01T02:42:07.534354552Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:42:07.536606 containerd[1506]: time="2025-11-01T02:42:07.536478324Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 02:42:07.536606 containerd[1506]: time="2025-11-01T02:42:07.536532707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 02:42:07.536851 kubelet[2684]: E1101 02:42:07.536794 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 02:42:07.536951 kubelet[2684]: E1101 02:42:07.536864 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 02:42:07.537029 kubelet[2684]: E1101 02:42:07.536999 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-564f484f47-dnhd7_calico-system(0e9edcbd-6bb9-40da-89e3-329c8ee490a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 02:42:07.537120 kubelet[2684]: E1101 02:42:07.537075 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-564f484f47-dnhd7" podUID="0e9edcbd-6bb9-40da-89e3-329c8ee490a3" Nov 1 02:42:10.858542 containerd[1506]: time="2025-11-01T02:42:10.858375292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 02:42:11.178638 containerd[1506]: time="2025-11-01T02:42:11.178237883Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:42:11.179751 containerd[1506]: time="2025-11-01T02:42:11.179549323Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 02:42:11.179751 containerd[1506]: time="2025-11-01T02:42:11.179552696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 02:42:11.179957 kubelet[2684]: E1101 02:42:11.179891 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 02:42:11.180587 kubelet[2684]: E1101 02:42:11.179973 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 02:42:11.180587 kubelet[2684]: E1101 02:42:11.180216 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-754cd6d684-rxmrt_calico-system(efbd1db3-4d1b-4800-b03d-ce570a8bfb0d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 02:42:11.180587 kubelet[2684]: E1101 
02:42:11.180287 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-754cd6d684-rxmrt" podUID="efbd1db3-4d1b-4800-b03d-ce570a8bfb0d" Nov 1 02:42:11.181127 containerd[1506]: time="2025-11-01T02:42:11.181088487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 02:42:11.489880 containerd[1506]: time="2025-11-01T02:42:11.489684779Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:42:11.498934 containerd[1506]: time="2025-11-01T02:42:11.498761020Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 02:42:11.498934 containerd[1506]: time="2025-11-01T02:42:11.498863173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 02:42:11.499229 kubelet[2684]: E1101 02:42:11.499106 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 02:42:11.499229 kubelet[2684]: E1101 02:42:11.499174 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 02:42:11.499358 kubelet[2684]: E1101 02:42:11.499274 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6f8dc58755-gnqxq_calico-apiserver(b426dedc-f58e-4d16-a987-3056f24fa4d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 02:42:11.499420 kubelet[2684]: E1101 02:42:11.499331 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8dc58755-gnqxq" podUID="b426dedc-f58e-4d16-a987-3056f24fa4d7" Nov 1 02:42:11.861509 containerd[1506]: time="2025-11-01T02:42:11.861021727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 02:42:12.171645 containerd[1506]: time="2025-11-01T02:42:12.171377245Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:42:12.179124 containerd[1506]: time="2025-11-01T02:42:12.179052303Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 02:42:12.179290 containerd[1506]: time="2025-11-01T02:42:12.179088584Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 02:42:12.179411 kubelet[2684]: E1101 02:42:12.179347 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 02:42:12.179514 kubelet[2684]: E1101 02:42:12.179414 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 02:42:12.179592 kubelet[2684]: E1101 02:42:12.179547 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-gvm6v_calico-system(2db811c5-1134-445e-9e39-ac0e7ee1b427): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 02:42:12.181379 containerd[1506]: time="2025-11-01T02:42:12.181317398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 02:42:12.491581 containerd[1506]: time="2025-11-01T02:42:12.491309014Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:42:12.493404 containerd[1506]: time="2025-11-01T02:42:12.492752447Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 02:42:12.493404 containerd[1506]: time="2025-11-01T02:42:12.492796720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 02:42:12.493585 kubelet[2684]: E1101 02:42:12.493011 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 02:42:12.493585 kubelet[2684]: E1101 02:42:12.493063 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 02:42:12.493585 kubelet[2684]: E1101 02:42:12.493175 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-gvm6v_calico-system(2db811c5-1134-445e-9e39-ac0e7ee1b427): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 02:42:12.503722 kubelet[2684]: E1101 02:42:12.493272 2684 
pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gvm6v" podUID="2db811c5-1134-445e-9e39-ac0e7ee1b427" Nov 1 02:42:18.858099 containerd[1506]: time="2025-11-01T02:42:18.857731242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 02:42:19.184000 containerd[1506]: time="2025-11-01T02:42:19.183811226Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:42:19.186150 containerd[1506]: time="2025-11-01T02:42:19.185997958Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 02:42:19.186150 containerd[1506]: time="2025-11-01T02:42:19.186062504Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 02:42:19.186410 kubelet[2684]: E1101 02:42:19.186319 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 02:42:19.188819 kubelet[2684]: E1101 02:42:19.186421 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 02:42:19.188819 kubelet[2684]: E1101 02:42:19.186600 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-t726m_calico-system(e1215547-a56a-4c57-957b-4ea0376bfb33): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 02:42:19.188819 kubelet[2684]: E1101 02:42:19.186672 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t726m" podUID="e1215547-a56a-4c57-957b-4ea0376bfb33" Nov 1 02:42:20.858646 kubelet[2684]: E1101 02:42:20.858520 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8dc58755-bvnkc" podUID="a1c04a34-b552-49b3-a9dc-198853e53df9" Nov 1 02:42:22.858146 kubelet[2684]: E1101 02:42:22.857919 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8dc58755-gnqxq" podUID="b426dedc-f58e-4d16-a987-3056f24fa4d7" Nov 1 02:42:22.858146 kubelet[2684]: E1101 02:42:22.857991 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-754cd6d684-rxmrt" podUID="efbd1db3-4d1b-4800-b03d-ce570a8bfb0d" Nov 1 02:42:22.859668 kubelet[2684]: E1101 02:42:22.859574 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-564f484f47-dnhd7" podUID="0e9edcbd-6bb9-40da-89e3-329c8ee490a3" Nov 1 02:42:23.862513 kubelet[2684]: E1101 02:42:23.861215 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gvm6v" podUID="2db811c5-1134-445e-9e39-ac0e7ee1b427" Nov 1 02:42:29.968344 systemd[1]: Started sshd@9-10.230.26.18:22-147.75.109.163:41674.service - OpenSSH per-connection server daemon (147.75.109.163:41674). 
Nov 1 02:42:30.939492 sshd[5414]: Accepted publickey for core from 147.75.109.163 port 41674 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A Nov 1 02:42:30.944263 sshd[5414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 02:42:30.959560 systemd-logind[1490]: New session 12 of user core. Nov 1 02:42:30.966734 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 1 02:42:32.436886 sshd[5414]: pam_unix(sshd:session): session closed for user core Nov 1 02:42:32.447974 systemd-logind[1490]: Session 12 logged out. Waiting for processes to exit. Nov 1 02:42:32.450189 systemd[1]: sshd@9-10.230.26.18:22-147.75.109.163:41674.service: Deactivated successfully. Nov 1 02:42:32.457414 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 02:42:32.464420 systemd-logind[1490]: Removed session 12. Nov 1 02:42:33.860509 kubelet[2684]: E1101 02:42:33.859722 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t726m" podUID="e1215547-a56a-4c57-957b-4ea0376bfb33" Nov 1 02:42:34.861509 containerd[1506]: time="2025-11-01T02:42:34.860693435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 02:42:35.189076 containerd[1506]: time="2025-11-01T02:42:35.188657460Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:42:35.191096 containerd[1506]: time="2025-11-01T02:42:35.190577471Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 02:42:35.191096 containerd[1506]: time="2025-11-01T02:42:35.190894861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 02:42:35.191395 kubelet[2684]: E1101 02:42:35.191308 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 02:42:35.193992 kubelet[2684]: E1101 02:42:35.191419 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 02:42:35.193992 kubelet[2684]: E1101 02:42:35.192300 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-754cd6d684-rxmrt_calico-system(efbd1db3-4d1b-4800-b03d-ce570a8bfb0d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 02:42:35.193992 kubelet[2684]: E1101 02:42:35.192376 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-754cd6d684-rxmrt" podUID="efbd1db3-4d1b-4800-b03d-ce570a8bfb0d" Nov 1 02:42:35.195128 containerd[1506]: time="2025-11-01T02:42:35.191944038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 02:42:35.507780 containerd[1506]: time="2025-11-01T02:42:35.507594923Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:42:35.509597 containerd[1506]: time="2025-11-01T02:42:35.509545757Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 02:42:35.509728 containerd[1506]: time="2025-11-01T02:42:35.509664752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 02:42:35.510050 kubelet[2684]: E1101 02:42:35.509983 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 02:42:35.510147 kubelet[2684]: E1101 02:42:35.510065 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 02:42:35.511990 kubelet[2684]: E1101 02:42:35.511949 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6f8dc58755-bvnkc_calico-apiserver(a1c04a34-b552-49b3-a9dc-198853e53df9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 02:42:35.512365 kubelet[2684]: E1101 02:42:35.512016 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8dc58755-bvnkc" podUID="a1c04a34-b552-49b3-a9dc-198853e53df9" Nov 1 02:42:35.866623 containerd[1506]: time="2025-11-01T02:42:35.866486056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 02:42:36.194059 containerd[1506]: time="2025-11-01T02:42:36.193793332Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:42:36.196797 containerd[1506]: time="2025-11-01T02:42:36.196703993Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 02:42:36.196985 containerd[1506]: 
time="2025-11-01T02:42:36.196899415Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 02:42:36.197295 kubelet[2684]: E1101 02:42:36.197219 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 02:42:36.198083 kubelet[2684]: E1101 02:42:36.197309 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 02:42:36.198083 kubelet[2684]: E1101 02:42:36.197485 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-564f484f47-dnhd7_calico-system(0e9edcbd-6bb9-40da-89e3-329c8ee490a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 02:42:36.201685 containerd[1506]: time="2025-11-01T02:42:36.201338153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 02:42:36.514210 containerd[1506]: time="2025-11-01T02:42:36.513599261Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:42:36.519239 containerd[1506]: time="2025-11-01T02:42:36.518959191Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 02:42:36.519239 containerd[1506]: time="2025-11-01T02:42:36.519130289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 02:42:36.520867 kubelet[2684]: E1101 02:42:36.520480 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 02:42:36.521544 kubelet[2684]: E1101 02:42:36.520826 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 02:42:36.522007 kubelet[2684]: E1101 02:42:36.521694 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-564f484f47-dnhd7_calico-system(0e9edcbd-6bb9-40da-89e3-329c8ee490a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 02:42:36.522455 kubelet[2684]: E1101 02:42:36.522338 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc 
= failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-564f484f47-dnhd7" podUID="0e9edcbd-6bb9-40da-89e3-329c8ee490a3" Nov 1 02:42:36.863330 containerd[1506]: time="2025-11-01T02:42:36.863016601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 02:42:37.182966 containerd[1506]: time="2025-11-01T02:42:37.182789282Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:42:37.184728 containerd[1506]: time="2025-11-01T02:42:37.184671306Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 02:42:37.185618 containerd[1506]: time="2025-11-01T02:42:37.184699356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 02:42:37.185726 kubelet[2684]: E1101 02:42:37.185104 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 02:42:37.185726 kubelet[2684]: E1101 
02:42:37.185221 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 02:42:37.185726 kubelet[2684]: E1101 02:42:37.185365 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6f8dc58755-gnqxq_calico-apiserver(b426dedc-f58e-4d16-a987-3056f24fa4d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 02:42:37.185726 kubelet[2684]: E1101 02:42:37.185429 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8dc58755-gnqxq" podUID="b426dedc-f58e-4d16-a987-3056f24fa4d7" Nov 1 02:42:37.607280 systemd[1]: Started sshd@10-10.230.26.18:22-147.75.109.163:46342.service - OpenSSH per-connection server daemon (147.75.109.163:46342). 
Nov 1 02:42:37.860357 containerd[1506]: time="2025-11-01T02:42:37.860209181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 02:42:38.182192 containerd[1506]: time="2025-11-01T02:42:38.181456490Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:42:38.183702 containerd[1506]: time="2025-11-01T02:42:38.183478779Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 02:42:38.183702 containerd[1506]: time="2025-11-01T02:42:38.183634612Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 02:42:38.185103 kubelet[2684]: E1101 02:42:38.184518 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 02:42:38.185103 kubelet[2684]: E1101 02:42:38.184595 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 02:42:38.185103 kubelet[2684]: E1101 02:42:38.184720 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-gvm6v_calico-system(2db811c5-1134-445e-9e39-ac0e7ee1b427): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 02:42:38.186963 containerd[1506]: time="2025-11-01T02:42:38.186479498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 02:42:38.507498 containerd[1506]: time="2025-11-01T02:42:38.506212763Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:42:38.508563 containerd[1506]: time="2025-11-01T02:42:38.508285467Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 02:42:38.508563 containerd[1506]: time="2025-11-01T02:42:38.508424380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 02:42:38.508997 kubelet[2684]: E1101 02:42:38.508731 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 02:42:38.508997 kubelet[2684]: E1101 02:42:38.508803 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 02:42:38.508997 kubelet[2684]: E1101 02:42:38.508956 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-gvm6v_calico-system(2db811c5-1134-445e-9e39-ac0e7ee1b427): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 02:42:38.509296 kubelet[2684]: E1101 02:42:38.509042 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gvm6v" podUID="2db811c5-1134-445e-9e39-ac0e7ee1b427" Nov 1 02:42:38.569253 sshd[5440]: Accepted publickey for core from 147.75.109.163 port 46342 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A Nov 1 02:42:38.573327 sshd[5440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 02:42:38.587945 systemd-logind[1490]: New session 13 of user core. Nov 1 02:42:38.598916 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 1 02:42:39.412788 sshd[5440]: pam_unix(sshd:session): session closed for user core Nov 1 02:42:39.418349 systemd[1]: sshd@10-10.230.26.18:22-147.75.109.163:46342.service: Deactivated successfully. Nov 1 02:42:39.425390 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 02:42:39.427417 systemd-logind[1490]: Session 13 logged out. Waiting for processes to exit. Nov 1 02:42:39.429936 systemd-logind[1490]: Removed session 13. Nov 1 02:42:44.559909 systemd[1]: Started sshd@11-10.230.26.18:22-147.75.109.163:36894.service - OpenSSH per-connection server daemon (147.75.109.163:36894). Nov 1 02:42:45.468849 sshd[5456]: Accepted publickey for core from 147.75.109.163 port 36894 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A Nov 1 02:42:45.474791 sshd[5456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 02:42:45.490569 systemd-logind[1490]: New session 14 of user core. Nov 1 02:42:45.495681 systemd[1]: Started session-14.scope - Session 14 of User core. 
Nov 1 02:42:45.866134 containerd[1506]: time="2025-11-01T02:42:45.865940406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 02:42:46.216245 containerd[1506]: time="2025-11-01T02:42:46.215956241Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:42:46.217645 containerd[1506]: time="2025-11-01T02:42:46.217575702Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 02:42:46.217797 containerd[1506]: time="2025-11-01T02:42:46.217614656Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 02:42:46.218165 kubelet[2684]: E1101 02:42:46.218025 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 02:42:46.218165 kubelet[2684]: E1101 02:42:46.218127 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 02:42:46.220252 kubelet[2684]: E1101 02:42:46.220004 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-t726m_calico-system(e1215547-a56a-4c57-957b-4ea0376bfb33): ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 02:42:46.220252 kubelet[2684]: E1101 02:42:46.220102 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t726m" podUID="e1215547-a56a-4c57-957b-4ea0376bfb33" Nov 1 02:42:46.232626 sshd[5456]: pam_unix(sshd:session): session closed for user core Nov 1 02:42:46.244225 systemd[1]: sshd@11-10.230.26.18:22-147.75.109.163:36894.service: Deactivated successfully. Nov 1 02:42:46.251181 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 02:42:46.253862 systemd-logind[1490]: Session 14 logged out. Waiting for processes to exit. Nov 1 02:42:46.256114 systemd-logind[1490]: Removed session 14. Nov 1 02:42:46.396787 systemd[1]: Started sshd@12-10.230.26.18:22-147.75.109.163:36898.service - OpenSSH per-connection server daemon (147.75.109.163:36898). Nov 1 02:42:47.313778 sshd[5470]: Accepted publickey for core from 147.75.109.163 port 36898 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A Nov 1 02:42:47.317850 sshd[5470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 02:42:47.328072 systemd-logind[1490]: New session 15 of user core. Nov 1 02:42:47.336228 systemd[1]: Started session-15.scope - Session 15 of User core. 
Nov 1 02:42:47.865891 kubelet[2684]: E1101 02:42:47.865779 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-754cd6d684-rxmrt" podUID="efbd1db3-4d1b-4800-b03d-ce570a8bfb0d" Nov 1 02:42:47.870501 kubelet[2684]: E1101 02:42:47.869835 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-564f484f47-dnhd7" podUID="0e9edcbd-6bb9-40da-89e3-329c8ee490a3" Nov 1 02:42:48.215184 sshd[5470]: pam_unix(sshd:session): session closed for user core Nov 1 02:42:48.231805 systemd-logind[1490]: Session 15 logged out. Waiting for processes to exit. 
Nov 1 02:42:48.232717 systemd[1]: sshd@12-10.230.26.18:22-147.75.109.163:36898.service: Deactivated successfully. Nov 1 02:42:48.237645 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 02:42:48.242397 systemd-logind[1490]: Removed session 15. Nov 1 02:42:48.392149 systemd[1]: Started sshd@13-10.230.26.18:22-147.75.109.163:36904.service - OpenSSH per-connection server daemon (147.75.109.163:36904). Nov 1 02:42:49.345949 sshd[5480]: Accepted publickey for core from 147.75.109.163 port 36904 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A Nov 1 02:42:49.350238 sshd[5480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 02:42:49.364543 systemd-logind[1490]: New session 16 of user core. Nov 1 02:42:49.370754 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 1 02:42:50.125958 sshd[5480]: pam_unix(sshd:session): session closed for user core Nov 1 02:42:50.131978 systemd-logind[1490]: Session 16 logged out. Waiting for processes to exit. Nov 1 02:42:50.133743 systemd[1]: sshd@13-10.230.26.18:22-147.75.109.163:36904.service: Deactivated successfully. Nov 1 02:42:50.139411 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 02:42:50.144109 systemd-logind[1490]: Removed session 16. 
Nov 1 02:42:50.861429 kubelet[2684]: E1101 02:42:50.860577 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8dc58755-bvnkc" podUID="a1c04a34-b552-49b3-a9dc-198853e53df9"
Nov 1 02:42:50.861429 kubelet[2684]: E1101 02:42:50.861217 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8dc58755-gnqxq" podUID="b426dedc-f58e-4d16-a987-3056f24fa4d7"
Nov 1 02:42:51.864499 kubelet[2684]: E1101 02:42:51.863476 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gvm6v" podUID="2db811c5-1134-445e-9e39-ac0e7ee1b427"
Nov 1 02:42:55.292902 systemd[1]: Started sshd@14-10.230.26.18:22-147.75.109.163:38728.service - OpenSSH per-connection server daemon (147.75.109.163:38728).
Nov 1 02:42:56.230435 sshd[5513]: Accepted publickey for core from 147.75.109.163 port 38728 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 02:42:56.233344 sshd[5513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 02:42:56.244709 systemd-logind[1490]: New session 17 of user core.
Nov 1 02:42:56.252766 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 1 02:42:56.863202 kubelet[2684]: E1101 02:42:56.863034 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t726m" podUID="e1215547-a56a-4c57-957b-4ea0376bfb33"
Nov 1 02:42:57.051565 sshd[5513]: pam_unix(sshd:session): session closed for user core
Nov 1 02:42:57.059559 systemd-logind[1490]: Session 17 logged out. Waiting for processes to exit.
Nov 1 02:42:57.060827 systemd[1]: sshd@14-10.230.26.18:22-147.75.109.163:38728.service: Deactivated successfully.
Nov 1 02:42:57.066893 systemd[1]: session-17.scope: Deactivated successfully.
Nov 1 02:42:57.070394 systemd-logind[1490]: Removed session 17.
Nov 1 02:42:58.860518 kubelet[2684]: E1101 02:42:58.860406 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-564f484f47-dnhd7" podUID="0e9edcbd-6bb9-40da-89e3-329c8ee490a3"
Nov 1 02:42:59.627920 containerd[1506]: time="2025-11-01T02:42:59.627786286Z" level=info msg="StopPodSandbox for \"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c\""
Nov 1 02:42:59.932747 containerd[1506]: 2025-11-01 02:42:59.831 [WARNING][5540] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--liqqm.gb1.brightbox.com-k8s-goldmane--7c778bb748--t726m-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"e1215547-a56a-4c57-957b-4ea0376bfb33", ResourceVersion:"1427", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 2, 41, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-liqqm.gb1.brightbox.com", ContainerID:"f12e79332a01391e07707609ca5ea9dccb586d886c809a4fd8457431a670f545", Pod:"goldmane-7c778bb748-t726m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.84.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali75ec25b8629", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 1 02:42:59.932747 containerd[1506]: 2025-11-01 02:42:59.832 [INFO][5540] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c"
Nov 1 02:42:59.932747 containerd[1506]: 2025-11-01 02:42:59.832 [INFO][5540] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" iface="eth0" netns=""
Nov 1 02:42:59.932747 containerd[1506]: 2025-11-01 02:42:59.832 [INFO][5540] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c"
Nov 1 02:42:59.932747 containerd[1506]: 2025-11-01 02:42:59.832 [INFO][5540] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c"
Nov 1 02:42:59.932747 containerd[1506]: 2025-11-01 02:42:59.906 [INFO][5547] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" HandleID="k8s-pod-network.e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" Workload="srv--liqqm.gb1.brightbox.com-k8s-goldmane--7c778bb748--t726m-eth0"
Nov 1 02:42:59.932747 containerd[1506]: 2025-11-01 02:42:59.906 [INFO][5547] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 02:42:59.932747 containerd[1506]: 2025-11-01 02:42:59.907 [INFO][5547] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 02:42:59.932747 containerd[1506]: 2025-11-01 02:42:59.918 [WARNING][5547] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" HandleID="k8s-pod-network.e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" Workload="srv--liqqm.gb1.brightbox.com-k8s-goldmane--7c778bb748--t726m-eth0"
Nov 1 02:42:59.932747 containerd[1506]: 2025-11-01 02:42:59.918 [INFO][5547] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" HandleID="k8s-pod-network.e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" Workload="srv--liqqm.gb1.brightbox.com-k8s-goldmane--7c778bb748--t726m-eth0"
Nov 1 02:42:59.932747 containerd[1506]: 2025-11-01 02:42:59.923 [INFO][5547] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 02:42:59.932747 containerd[1506]: 2025-11-01 02:42:59.928 [INFO][5540] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c"
Nov 1 02:42:59.936681 containerd[1506]: time="2025-11-01T02:42:59.932771765Z" level=info msg="TearDown network for sandbox \"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c\" successfully"
Nov 1 02:42:59.936681 containerd[1506]: time="2025-11-01T02:42:59.932811919Z" level=info msg="StopPodSandbox for \"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c\" returns successfully"
Nov 1 02:42:59.936681 containerd[1506]: time="2025-11-01T02:42:59.934066109Z" level=info msg="RemovePodSandbox for \"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c\""
Nov 1 02:42:59.936681 containerd[1506]: time="2025-11-01T02:42:59.934135040Z" level=info msg="Forcibly stopping sandbox \"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c\""
Nov 1 02:43:00.109516 containerd[1506]: 2025-11-01 02:43:00.042 [WARNING][5561] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--liqqm.gb1.brightbox.com-k8s-goldmane--7c778bb748--t726m-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"e1215547-a56a-4c57-957b-4ea0376bfb33", ResourceVersion:"1427", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 2, 41, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-liqqm.gb1.brightbox.com", ContainerID:"f12e79332a01391e07707609ca5ea9dccb586d886c809a4fd8457431a670f545", Pod:"goldmane-7c778bb748-t726m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.84.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali75ec25b8629", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 1 02:43:00.109516 containerd[1506]: 2025-11-01 02:43:00.043 [INFO][5561] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c"
Nov 1 02:43:00.109516 containerd[1506]: 2025-11-01 02:43:00.043 [INFO][5561] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" iface="eth0" netns=""
Nov 1 02:43:00.109516 containerd[1506]: 2025-11-01 02:43:00.043 [INFO][5561] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c"
Nov 1 02:43:00.109516 containerd[1506]: 2025-11-01 02:43:00.043 [INFO][5561] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c"
Nov 1 02:43:00.109516 containerd[1506]: 2025-11-01 02:43:00.091 [INFO][5568] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" HandleID="k8s-pod-network.e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" Workload="srv--liqqm.gb1.brightbox.com-k8s-goldmane--7c778bb748--t726m-eth0"
Nov 1 02:43:00.109516 containerd[1506]: 2025-11-01 02:43:00.091 [INFO][5568] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 02:43:00.109516 containerd[1506]: 2025-11-01 02:43:00.091 [INFO][5568] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 02:43:00.109516 containerd[1506]: 2025-11-01 02:43:00.101 [WARNING][5568] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" HandleID="k8s-pod-network.e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" Workload="srv--liqqm.gb1.brightbox.com-k8s-goldmane--7c778bb748--t726m-eth0"
Nov 1 02:43:00.109516 containerd[1506]: 2025-11-01 02:43:00.101 [INFO][5568] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" HandleID="k8s-pod-network.e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c" Workload="srv--liqqm.gb1.brightbox.com-k8s-goldmane--7c778bb748--t726m-eth0"
Nov 1 02:43:00.109516 containerd[1506]: 2025-11-01 02:43:00.103 [INFO][5568] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 02:43:00.109516 containerd[1506]: 2025-11-01 02:43:00.105 [INFO][5561] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c"
Nov 1 02:43:00.109516 containerd[1506]: time="2025-11-01T02:43:00.108988017Z" level=info msg="TearDown network for sandbox \"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c\" successfully"
Nov 1 02:43:00.121780 containerd[1506]: time="2025-11-01T02:43:00.121711100Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 1 02:43:00.121921 containerd[1506]: time="2025-11-01T02:43:00.121810552Z" level=info msg="RemovePodSandbox \"e7264480ab19830251769f9e89d5bd385d97f7d0b6027c55fc9820fe6637fb3c\" returns successfully"
Nov 1 02:43:02.232254 systemd[1]: Started sshd@15-10.230.26.18:22-147.75.109.163:36416.service - OpenSSH per-connection server daemon (147.75.109.163:36416).
Nov 1 02:43:02.857834 kubelet[2684]: E1101 02:43:02.857744 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8dc58755-bvnkc" podUID="a1c04a34-b552-49b3-a9dc-198853e53df9"
Nov 1 02:43:02.862212 kubelet[2684]: E1101 02:43:02.862081 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-754cd6d684-rxmrt" podUID="efbd1db3-4d1b-4800-b03d-ce570a8bfb0d"
Nov 1 02:43:03.213257 sshd[5575]: Accepted publickey for core from 147.75.109.163 port 36416 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 02:43:03.218137 sshd[5575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 02:43:03.235547 systemd-logind[1490]: New session 18 of user core.
Nov 1 02:43:03.239944 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 1 02:43:03.861843 kubelet[2684]: E1101 02:43:03.861721 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8dc58755-gnqxq" podUID="b426dedc-f58e-4d16-a987-3056f24fa4d7"
Nov 1 02:43:03.868950 kubelet[2684]: E1101 02:43:03.868868 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gvm6v" podUID="2db811c5-1134-445e-9e39-ac0e7ee1b427"
Nov 1 02:43:04.111101 sshd[5575]: pam_unix(sshd:session): session closed for user core
Nov 1 02:43:04.118565 systemd[1]: sshd@15-10.230.26.18:22-147.75.109.163:36416.service: Deactivated successfully.
Nov 1 02:43:04.125549 systemd[1]: session-18.scope: Deactivated successfully.
Nov 1 02:43:04.127869 systemd-logind[1490]: Session 18 logged out. Waiting for processes to exit.
Nov 1 02:43:04.129388 systemd-logind[1490]: Removed session 18.
Nov 1 02:43:09.286594 systemd[1]: Started sshd@16-10.230.26.18:22-147.75.109.163:36430.service - OpenSSH per-connection server daemon (147.75.109.163:36430).
Nov 1 02:43:10.216281 sshd[5591]: Accepted publickey for core from 147.75.109.163 port 36430 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 02:43:10.218906 sshd[5591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 02:43:10.234713 systemd-logind[1490]: New session 19 of user core.
Nov 1 02:43:10.240513 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 1 02:43:11.012709 sshd[5591]: pam_unix(sshd:session): session closed for user core
Nov 1 02:43:11.022407 systemd[1]: sshd@16-10.230.26.18:22-147.75.109.163:36430.service: Deactivated successfully.
Nov 1 02:43:11.027000 systemd[1]: session-19.scope: Deactivated successfully.
Nov 1 02:43:11.029035 systemd-logind[1490]: Session 19 logged out. Waiting for processes to exit.
Nov 1 02:43:11.031932 systemd-logind[1490]: Removed session 19.
Nov 1 02:43:11.176788 systemd[1]: Started sshd@17-10.230.26.18:22-147.75.109.163:42988.service - OpenSSH per-connection server daemon (147.75.109.163:42988).
Nov 1 02:43:11.862061 kubelet[2684]: E1101 02:43:11.860805 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t726m" podUID="e1215547-a56a-4c57-957b-4ea0376bfb33"
Nov 1 02:43:12.099877 sshd[5604]: Accepted publickey for core from 147.75.109.163 port 42988 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 02:43:12.103739 sshd[5604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 02:43:12.114041 systemd-logind[1490]: New session 20 of user core.
Nov 1 02:43:12.125736 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 1 02:43:13.182243 sshd[5604]: pam_unix(sshd:session): session closed for user core
Nov 1 02:43:13.193082 systemd[1]: sshd@17-10.230.26.18:22-147.75.109.163:42988.service: Deactivated successfully.
Nov 1 02:43:13.198658 systemd[1]: session-20.scope: Deactivated successfully.
Nov 1 02:43:13.203125 systemd-logind[1490]: Session 20 logged out. Waiting for processes to exit.
Nov 1 02:43:13.206530 systemd-logind[1490]: Removed session 20.
Nov 1 02:43:13.340500 systemd[1]: Started sshd@18-10.230.26.18:22-147.75.109.163:43000.service - OpenSSH per-connection server daemon (147.75.109.163:43000).
Nov 1 02:43:13.881101 kubelet[2684]: E1101 02:43:13.881003 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-564f484f47-dnhd7" podUID="0e9edcbd-6bb9-40da-89e3-329c8ee490a3"
Nov 1 02:43:14.278249 sshd[5616]: Accepted publickey for core from 147.75.109.163 port 43000 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 02:43:14.284612 sshd[5616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 02:43:14.298768 systemd-logind[1490]: New session 21 of user core.
Nov 1 02:43:14.304764 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 1 02:43:14.868299 kubelet[2684]: E1101 02:43:14.868000 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gvm6v" podUID="2db811c5-1134-445e-9e39-ac0e7ee1b427"
Nov 1 02:43:16.044677 sshd[5616]: pam_unix(sshd:session): session closed for user core
Nov 1 02:43:16.052723 systemd-logind[1490]: Session 21 logged out. Waiting for processes to exit.
Nov 1 02:43:16.055978 systemd[1]: sshd@18-10.230.26.18:22-147.75.109.163:43000.service: Deactivated successfully.
Nov 1 02:43:16.063348 systemd[1]: session-21.scope: Deactivated successfully.
Nov 1 02:43:16.066850 systemd-logind[1490]: Removed session 21.
Nov 1 02:43:16.210605 systemd[1]: Started sshd@19-10.230.26.18:22-147.75.109.163:43016.service - OpenSSH per-connection server daemon (147.75.109.163:43016).
Nov 1 02:43:17.140524 sshd[5632]: Accepted publickey for core from 147.75.109.163 port 43016 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 02:43:17.143563 sshd[5632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 02:43:17.151177 systemd-logind[1490]: New session 22 of user core.
Nov 1 02:43:17.159386 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 1 02:43:17.888299 containerd[1506]: time="2025-11-01T02:43:17.888176341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 1 02:43:18.257944 containerd[1506]: time="2025-11-01T02:43:18.257188710Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 02:43:18.260382 sshd[5632]: pam_unix(sshd:session): session closed for user core
Nov 1 02:43:18.262707 containerd[1506]: time="2025-11-01T02:43:18.262614153Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 1 02:43:18.263315 containerd[1506]: time="2025-11-01T02:43:18.262645644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 1 02:43:18.264094 kubelet[2684]: E1101 02:43:18.263056 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 02:43:18.266182 kubelet[2684]: E1101 02:43:18.264664 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 02:43:18.267700 containerd[1506]: time="2025-11-01T02:43:18.266324791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 1 02:43:18.273292 systemd-logind[1490]: Session 22 logged out. Waiting for processes to exit.
Nov 1 02:43:18.275305 systemd[1]: sshd@19-10.230.26.18:22-147.75.109.163:43016.service: Deactivated successfully.
Nov 1 02:43:18.281935 kubelet[2684]: E1101 02:43:18.278195 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6f8dc58755-bvnkc_calico-apiserver(a1c04a34-b552-49b3-a9dc-198853e53df9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 1 02:43:18.281935 kubelet[2684]: E1101 02:43:18.278287 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8dc58755-bvnkc" podUID="a1c04a34-b552-49b3-a9dc-198853e53df9"
Nov 1 02:43:18.284922 systemd[1]: session-22.scope: Deactivated successfully.
Nov 1 02:43:18.289179 systemd-logind[1490]: Removed session 22.
Nov 1 02:43:18.426708 systemd[1]: Started sshd@20-10.230.26.18:22-147.75.109.163:43030.service - OpenSSH per-connection server daemon (147.75.109.163:43030).
Nov 1 02:43:18.594205 containerd[1506]: time="2025-11-01T02:43:18.594135370Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 02:43:18.595600 containerd[1506]: time="2025-11-01T02:43:18.595525569Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 1 02:43:18.595703 containerd[1506]: time="2025-11-01T02:43:18.595655929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 1 02:43:18.596437 kubelet[2684]: E1101 02:43:18.596317 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 1 02:43:18.596551 kubelet[2684]: E1101 02:43:18.596499 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 1 02:43:18.597190 kubelet[2684]: E1101 02:43:18.597103 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-754cd6d684-rxmrt_calico-system(efbd1db3-4d1b-4800-b03d-ce570a8bfb0d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 1 02:43:18.597324 kubelet[2684]: E1101 02:43:18.597220 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-754cd6d684-rxmrt" podUID="efbd1db3-4d1b-4800-b03d-ce570a8bfb0d"
Nov 1 02:43:18.598931 containerd[1506]: time="2025-11-01T02:43:18.598876824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 1 02:43:18.918289 containerd[1506]: time="2025-11-01T02:43:18.917754544Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 02:43:18.920907 containerd[1506]: time="2025-11-01T02:43:18.918959404Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 1 02:43:18.920907 containerd[1506]: time="2025-11-01T02:43:18.919039187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 1 02:43:18.921033 kubelet[2684]: E1101 02:43:18.919299 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 02:43:18.921033 kubelet[2684]: E1101 02:43:18.919380 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 02:43:18.921033 kubelet[2684]: E1101 02:43:18.919705 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6f8dc58755-gnqxq_calico-apiserver(b426dedc-f58e-4d16-a987-3056f24fa4d7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 1 02:43:18.921033 kubelet[2684]: E1101 02:43:18.919806 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8dc58755-gnqxq" podUID="b426dedc-f58e-4d16-a987-3056f24fa4d7"
Nov 1 02:43:19.365573 sshd[5651]: Accepted publickey for core from 147.75.109.163 port 43030 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 02:43:19.367587 sshd[5651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 02:43:19.383693 systemd-logind[1490]: New session 23 of user core.
Nov 1 02:43:19.391024 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 1 02:43:20.142707 sshd[5651]: pam_unix(sshd:session): session closed for user core
Nov 1 02:43:20.148624 systemd-logind[1490]: Session 23 logged out. Waiting for processes to exit.
Nov 1 02:43:20.150836 systemd[1]: sshd@20-10.230.26.18:22-147.75.109.163:43030.service: Deactivated successfully.
Nov 1 02:43:20.155608 systemd[1]: session-23.scope: Deactivated successfully.
Nov 1 02:43:20.160227 systemd-logind[1490]: Removed session 23.
Nov 1 02:43:22.860591 kubelet[2684]: E1101 02:43:22.858822 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-t726m" podUID="e1215547-a56a-4c57-957b-4ea0376bfb33"
Nov 1 02:43:25.308590 systemd[1]: Started sshd@21-10.230.26.18:22-147.75.109.163:55362.service - OpenSSH per-connection server daemon (147.75.109.163:55362).
Nov 1 02:43:26.290675 sshd[5686]: Accepted publickey for core from 147.75.109.163 port 55362 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 02:43:26.295368 sshd[5686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 02:43:26.307437 systemd-logind[1490]: New session 24 of user core.
Nov 1 02:43:26.315750 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 1 02:43:27.262298 sshd[5686]: pam_unix(sshd:session): session closed for user core
Nov 1 02:43:27.272578 systemd-logind[1490]: Session 24 logged out. Waiting for processes to exit.
Nov 1 02:43:27.273990 systemd[1]: sshd@21-10.230.26.18:22-147.75.109.163:55362.service: Deactivated successfully.
Nov 1 02:43:27.282296 systemd[1]: session-24.scope: Deactivated successfully.
Nov 1 02:43:27.287985 systemd-logind[1490]: Removed session 24.
Nov 1 02:43:28.865286 containerd[1506]: time="2025-11-01T02:43:28.863991157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Nov 1 02:43:28.866722 kubelet[2684]: E1101 02:43:28.866638 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-754cd6d684-rxmrt" podUID="efbd1db3-4d1b-4800-b03d-ce570a8bfb0d"
Nov 1 02:43:29.177100 containerd[1506]: time="2025-11-01T02:43:29.176824237Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 02:43:29.178513 containerd[1506]: time="2025-11-01T02:43:29.178456292Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Nov 1 02:43:29.178849 containerd[1506]: time="2025-11-01T02:43:29.178496985Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Nov 1 02:43:29.179109 kubelet[2684]: E1101 02:43:29.179006 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 1 02:43:29.179280 kubelet[2684]: E1101 02:43:29.179129 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 1 02:43:29.179349 kubelet[2684]: E1101 02:43:29.179276 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-564f484f47-dnhd7_calico-system(0e9edcbd-6bb9-40da-89e3-329c8ee490a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Nov 1 02:43:29.183408 containerd[1506]: time="2025-11-01T02:43:29.183354909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Nov 1 02:43:29.509378 containerd[1506]: time="2025-11-01T02:43:29.509145954Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 02:43:29.511696 containerd[1506]: time="2025-11-01T02:43:29.511636452Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Nov 1 02:43:29.511881 containerd[1506]: time="2025-11-01T02:43:29.511817361Z" level=info msg="stop pulling image
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 02:43:29.512185 kubelet[2684]: E1101 02:43:29.512089 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 02:43:29.512589 kubelet[2684]: E1101 02:43:29.512189 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 02:43:29.512589 kubelet[2684]: E1101 02:43:29.512356 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-564f484f47-dnhd7_calico-system(0e9edcbd-6bb9-40da-89e3-329c8ee490a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 02:43:29.512589 kubelet[2684]: E1101 02:43:29.512477 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-564f484f47-dnhd7" podUID="0e9edcbd-6bb9-40da-89e3-329c8ee490a3" Nov 1 02:43:29.864568 containerd[1506]: time="2025-11-01T02:43:29.864378436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 02:43:29.873613 kubelet[2684]: E1101 02:43:29.873372 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8dc58755-bvnkc" podUID="a1c04a34-b552-49b3-a9dc-198853e53df9" Nov 1 02:43:30.189828 containerd[1506]: time="2025-11-01T02:43:30.189611415Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:43:30.192390 containerd[1506]: time="2025-11-01T02:43:30.191611713Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 02:43:30.192390 containerd[1506]: time="2025-11-01T02:43:30.191869305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 02:43:30.192673 kubelet[2684]: E1101 02:43:30.192321 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 02:43:30.193510 kubelet[2684]: E1101 02:43:30.192529 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 02:43:30.193770 kubelet[2684]: E1101 02:43:30.193685 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-gvm6v_calico-system(2db811c5-1134-445e-9e39-ac0e7ee1b427): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 02:43:30.195973 containerd[1506]: time="2025-11-01T02:43:30.195494966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 02:43:30.516180 containerd[1506]: time="2025-11-01T02:43:30.515990233Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 02:43:30.518468 containerd[1506]: time="2025-11-01T02:43:30.517771211Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 02:43:30.518468 containerd[1506]: time="2025-11-01T02:43:30.517874716Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 02:43:30.519484 kubelet[2684]: E1101 02:43:30.518907 2684 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 02:43:30.520488 kubelet[2684]: E1101 02:43:30.519001 2684 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 02:43:30.520627 kubelet[2684]: E1101 02:43:30.520584 2684 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-gvm6v_calico-system(2db811c5-1134-445e-9e39-ac0e7ee1b427): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 02:43:30.520805 kubelet[2684]: E1101 02:43:30.520680 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gvm6v" podUID="2db811c5-1134-445e-9e39-ac0e7ee1b427" Nov 1 02:43:32.429891 systemd[1]: Started sshd@22-10.230.26.18:22-147.75.109.163:37442.service - OpenSSH per-connection server daemon (147.75.109.163:37442). Nov 1 02:43:33.353267 sshd[5720]: Accepted publickey for core from 147.75.109.163 port 37442 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A Nov 1 02:43:33.354394 sshd[5720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 02:43:33.366692 systemd-logind[1490]: New session 25 of user core. Nov 1 02:43:33.373794 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 1 02:43:33.883740 kubelet[2684]: E1101 02:43:33.883649 2684 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f8dc58755-gnqxq" podUID="b426dedc-f58e-4d16-a987-3056f24fa4d7" Nov 1 02:43:34.145156 sshd[5720]: pam_unix(sshd:session): session closed for user core Nov 1 02:43:34.155725 systemd[1]: sshd@22-10.230.26.18:22-147.75.109.163:37442.service: Deactivated successfully. Nov 1 02:43:34.161955 systemd[1]: session-25.scope: Deactivated successfully. 
Nov 1 02:43:34.164162 systemd-logind[1490]: Session 25 logged out. Waiting for processes to exit. Nov 1 02:43:34.166388 systemd-logind[1490]: Removed session 25. Nov 1 02:43:37.860940 containerd[1506]: time="2025-11-01T02:43:37.860747405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""