Apr 16 00:48:30.003445 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Apr 15 22:45:03 -00 2026
Apr 16 00:48:30.003502 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=27643dbc59f658eac8bb37add3a8b4ed010a3c31134319f01549aa493a1f070c
Apr 16 00:48:30.003515 kernel: BIOS-provided physical RAM map:
Apr 16 00:48:30.003529 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 16 00:48:30.003554 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 16 00:48:30.003563 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 16 00:48:30.005574 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Apr 16 00:48:30.005600 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Apr 16 00:48:30.005610 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 16 00:48:30.005620 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 16 00:48:30.005630 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 16 00:48:30.005640 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 16 00:48:30.005658 kernel: NX (Execute Disable) protection: active
Apr 16 00:48:30.005668 kernel: APIC: Static calls initialized
Apr 16 00:48:30.005680 kernel: SMBIOS 2.8 present.
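The BIOS-e820 lines above are the firmware-provided physical memory map. As an illustrative sketch (not part of the log — the `usable_bytes` helper and its regex are assumptions), the total usable RAM can be summed straight from such dmesg output:

```python
import re

# Matches kernel lines like:
#   BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def usable_bytes(dmesg_lines):
    """Sum the sizes of all regions the firmware marked 'usable'."""
    total = 0
    for line in dmesg_lines:
        m = E820_RE.search(line)
        if m and m.group(3) == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            total += end - start + 1  # e820 ranges are inclusive
    return total
```

For the two usable ranges above this yields 2,146,941,952 bytes, about 2 GiB, consistent with the ~2 GB of guest memory this VM reports later in the log.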
Apr 16 00:48:30.005692 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Apr 16 00:48:30.005703 kernel: Hypervisor detected: KVM
Apr 16 00:48:30.005721 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 16 00:48:30.005732 kernel: kvm-clock: using sched offset of 4402710384 cycles
Apr 16 00:48:30.005744 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 16 00:48:30.005755 kernel: tsc: Detected 2799.998 MHz processor
Apr 16 00:48:30.005776 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 16 00:48:30.005787 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 16 00:48:30.005798 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Apr 16 00:48:30.005809 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 16 00:48:30.005820 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 16 00:48:30.005845 kernel: Using GB pages for direct mapping
Apr 16 00:48:30.005856 kernel: ACPI: Early table checksum verification disabled
Apr 16 00:48:30.005867 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Apr 16 00:48:30.005878 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:48:30.005889 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:48:30.005900 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:48:30.005923 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Apr 16 00:48:30.005933 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:48:30.005943 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:48:30.005958 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:48:30.005969 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 00:48:30.005991 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Apr 16 00:48:30.006001 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Apr 16 00:48:30.006011 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Apr 16 00:48:30.006026 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Apr 16 00:48:30.006049 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Apr 16 00:48:30.006064 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Apr 16 00:48:30.006075 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Apr 16 00:48:30.006086 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 16 00:48:30.006109 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 16 00:48:30.006121 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Apr 16 00:48:30.006144 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Apr 16 00:48:30.006156 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Apr 16 00:48:30.006167 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Apr 16 00:48:30.006184 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Apr 16 00:48:30.006195 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Apr 16 00:48:30.006206 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Apr 16 00:48:30.006217 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Apr 16 00:48:30.006228 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Apr 16 00:48:30.006240 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Apr 16 00:48:30.006251 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Apr 16 00:48:30.006262 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Apr 16 00:48:30.006273 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Apr 16 00:48:30.006290 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Apr 16 00:48:30.006301 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Apr 16 00:48:30.006313 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Apr 16 00:48:30.006324 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Apr 16 00:48:30.006335 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Apr 16 00:48:30.006347 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Apr 16 00:48:30.006359 kernel: Zone ranges:
Apr 16 00:48:30.006371 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 16 00:48:30.006382 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Apr 16 00:48:30.006393 kernel: Normal empty
Apr 16 00:48:30.006410 kernel: Movable zone start for each node
Apr 16 00:48:30.006421 kernel: Early memory node ranges
Apr 16 00:48:30.006433 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 16 00:48:30.006444 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Apr 16 00:48:30.006455 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Apr 16 00:48:30.006467 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 16 00:48:30.006478 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 16 00:48:30.006490 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Apr 16 00:48:30.006501 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 16 00:48:30.006517 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 16 00:48:30.006529 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 16 00:48:30.006552 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 16 00:48:30.006564 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 16 00:48:30.006576 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 16 00:48:30.006587 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 16 00:48:30.006620 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 16 00:48:30.006631 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 16 00:48:30.006642 kernel: TSC deadline timer available
Apr 16 00:48:30.006659 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Apr 16 00:48:30.006670 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 16 00:48:30.006693 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 16 00:48:30.006704 kernel: Booting paravirtualized kernel on KVM
Apr 16 00:48:30.006714 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 16 00:48:30.006725 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Apr 16 00:48:30.006748 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u262144
Apr 16 00:48:30.006759 kernel: pcpu-alloc: s196328 r8192 d28952 u262144 alloc=1*2097152
Apr 16 00:48:30.006770 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Apr 16 00:48:30.006793 kernel: kvm-guest: PV spinlocks enabled
Apr 16 00:48:30.006817 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 16 00:48:30.006830 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=27643dbc59f658eac8bb37add3a8b4ed010a3c31134319f01549aa493a1f070c
Apr 16 00:48:30.006842 kernel: random: crng init done
Apr 16 00:48:30.006858 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 16 00:48:30.006870 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 16 00:48:30.006882 kernel: Fallback order for Node 0: 0
Apr 16 00:48:30.006893 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Apr 16 00:48:30.006911 kernel: Policy zone: DMA32
Apr 16 00:48:30.006923 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 16 00:48:30.006934 kernel: software IO TLB: area num 16.
Apr 16 00:48:30.006946 kernel: Memory: 1901600K/2096616K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 194756K reserved, 0K cma-reserved)
Apr 16 00:48:30.006958 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Apr 16 00:48:30.006969 kernel: Kernel/User page tables isolation: enabled
Apr 16 00:48:30.006981 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 16 00:48:30.006992 kernel: ftrace: allocated 149 pages with 4 groups
Apr 16 00:48:30.007003 kernel: Dynamic Preempt: voluntary
Apr 16 00:48:30.007020 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 16 00:48:30.007032 kernel: rcu: RCU event tracing is enabled.
Apr 16 00:48:30.007044 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Apr 16 00:48:30.007056 kernel: Trampoline variant of Tasks RCU enabled.
Apr 16 00:48:30.007068 kernel: Rude variant of Tasks RCU enabled.
Apr 16 00:48:30.007095 kernel: Tracing variant of Tasks RCU enabled.
Apr 16 00:48:30.007107 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 16 00:48:30.007119 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Apr 16 00:48:30.007141 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Apr 16 00:48:30.007154 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
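Several lines above report hash tables as "entries: N (order: M, B bytes, linear)". The "order" follows the kernel's buddy-allocator convention: an order-M allocation is 2^M contiguous 4 KiB base pages. A quick arithmetic check (the `order_bytes` helper name is illustrative, not from the kernel):

```python
PAGE_SIZE = 4096  # x86-64 base page size

def order_bytes(order: int) -> int:
    # An order-N buddy allocation is 2**N contiguous base pages.
    return (1 << order) * PAGE_SIZE
```

This reproduces the figures in the log: order 9 gives 2097152 bytes (the dentry cache line) and order 8 gives 1048576 bytes (the inode cache line).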
Apr 16 00:48:30.007166 kernel: Console: colour VGA+ 80x25
Apr 16 00:48:30.007178 kernel: printk: console [tty0] enabled
Apr 16 00:48:30.007196 kernel: printk: console [ttyS0] enabled
Apr 16 00:48:30.007208 kernel: ACPI: Core revision 20230628
Apr 16 00:48:30.007220 kernel: APIC: Switch to symmetric I/O mode setup
Apr 16 00:48:30.007232 kernel: x2apic enabled
Apr 16 00:48:30.007244 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 16 00:48:30.007261 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Apr 16 00:48:30.007274 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Apr 16 00:48:30.007286 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 16 00:48:30.007299 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 16 00:48:30.007311 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 16 00:48:30.007323 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 16 00:48:30.007335 kernel: Spectre V2 : Mitigation: Retpolines
Apr 16 00:48:30.007346 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 16 00:48:30.007359 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Apr 16 00:48:30.007370 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 16 00:48:30.007388 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 16 00:48:30.007400 kernel: MDS: Mitigation: Clear CPU buffers
Apr 16 00:48:30.007412 kernel: MMIO Stale Data: Unknown: No mitigations
Apr 16 00:48:30.007424 kernel: SRBDS: Unknown: Dependent on hypervisor status
Apr 16 00:48:30.007435 kernel: active return thunk: its_return_thunk
Apr 16 00:48:30.007447 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 16 00:48:30.007460 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 16 00:48:30.007472 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 16 00:48:30.007483 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 16 00:48:30.007495 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 16 00:48:30.007528 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Apr 16 00:48:30.009552 kernel: Freeing SMP alternatives memory: 32K
Apr 16 00:48:30.009575 kernel: pid_max: default: 32768 minimum: 301
Apr 16 00:48:30.009588 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 16 00:48:30.009600 kernel: landlock: Up and running.
Apr 16 00:48:30.009612 kernel: SELinux: Initializing.
Apr 16 00:48:30.009625 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 16 00:48:30.009637 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 16 00:48:30.009649 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Apr 16 00:48:30.009661 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Apr 16 00:48:30.009674 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Apr 16 00:48:30.009686 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Apr 16 00:48:30.009707 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Apr 16 00:48:30.009719 kernel: signal: max sigframe size: 1776
Apr 16 00:48:30.009731 kernel: rcu: Hierarchical SRCU implementation.
Apr 16 00:48:30.009744 kernel: rcu: Max phase no-delay instances is 400.
Apr 16 00:48:30.009756 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 16 00:48:30.009768 kernel: smp: Bringing up secondary CPUs ...
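The smpboot line above reports the CPU as family 0x6, model 0x3a, stepping 0x9; model 0x3a is decimal 58, which is why a later message refers to "p6 CPU model 58". A sketch of extracting those fields from such a line (the regex and `cpuid_fields` name are assumptions, not kernel APIs):

```python
import re

# Matches the parenthesised CPUID fields in an smpboot line, e.g.
#   smpboot: CPU0: ... (family: 0x6, model: 0x3a, stepping: 0x9)
CPU_RE = re.compile(r"family: (0x[0-9a-f]+), model: (0x[0-9a-f]+), stepping: (0x[0-9a-f]+)")

def cpuid_fields(line):
    """Return (family, model, stepping) as integers, or None if absent."""
    m = CPU_RE.search(line)
    return tuple(int(v, 16) for v in m.groups()) if m else None
```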
Apr 16 00:48:30.009780 kernel: smpboot: x86: Booting SMP configuration:
Apr 16 00:48:30.009792 kernel: .... node #0, CPUs: #1
Apr 16 00:48:30.009804 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Apr 16 00:48:30.009822 kernel: smp: Brought up 1 node, 2 CPUs
Apr 16 00:48:30.009834 kernel: smpboot: Max logical packages: 16
Apr 16 00:48:30.009846 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Apr 16 00:48:30.009858 kernel: devtmpfs: initialized
Apr 16 00:48:30.009878 kernel: x86/mm: Memory block size: 128MB
Apr 16 00:48:30.009890 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 16 00:48:30.009902 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Apr 16 00:48:30.009914 kernel: pinctrl core: initialized pinctrl subsystem
Apr 16 00:48:30.009926 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 16 00:48:30.009944 kernel: audit: initializing netlink subsys (disabled)
Apr 16 00:48:30.009956 kernel: audit: type=2000 audit(1776300508.795:1): state=initialized audit_enabled=0 res=1
Apr 16 00:48:30.009968 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 16 00:48:30.009996 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 16 00:48:30.010011 kernel: cpuidle: using governor menu
Apr 16 00:48:30.010024 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 16 00:48:30.010036 kernel: dca service started, version 1.12.1
Apr 16 00:48:30.010048 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 16 00:48:30.010060 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 16 00:48:30.010079 kernel: PCI: Using configuration type 1 for base access
Apr 16 00:48:30.010092 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
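The audit record above embeds a Unix epoch timestamp, audit(1776300508.795:1), whereas the journal prefix shows only month, day and time of day. Converting the epoch value confirms the boot date; a minimal sketch:

```python
from datetime import datetime, timezone

# Timestamp taken from the audit line in the log above.
audit_ts = 1776300508.795
when = datetime.fromtimestamp(audit_ts, tz=timezone.utc)
# Falls on 2026-04-16 around 00:48 UTC, matching the journal's
# "Apr 16 00:48:30" prefix on the surrounding messages.
```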
Apr 16 00:48:30.010104 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 16 00:48:30.010116 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 16 00:48:30.010141 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 16 00:48:30.010154 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 16 00:48:30.010166 kernel: ACPI: Added _OSI(Module Device)
Apr 16 00:48:30.010178 kernel: ACPI: Added _OSI(Processor Device)
Apr 16 00:48:30.010190 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 16 00:48:30.010209 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 16 00:48:30.010222 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 16 00:48:30.010234 kernel: ACPI: Interpreter enabled
Apr 16 00:48:30.010246 kernel: ACPI: PM: (supports S0 S5)
Apr 16 00:48:30.010258 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 16 00:48:30.010270 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 16 00:48:30.010282 kernel: PCI: Using E820 reservations for host bridge windows
Apr 16 00:48:30.010294 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 16 00:48:30.010306 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 16 00:48:30.010616 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 16 00:48:30.010805 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 16 00:48:30.010978 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 16 00:48:30.010997 kernel: PCI host bridge to bus 0000:00
Apr 16 00:48:30.011194 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 16 00:48:30.011354 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 16 00:48:30.011519 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 16 00:48:30.013749 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Apr 16 00:48:30.013908 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 16 00:48:30.014071 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Apr 16 00:48:30.014249 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 16 00:48:30.014450 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 16 00:48:30.015056 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Apr 16 00:48:30.015269 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Apr 16 00:48:30.015439 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Apr 16 00:48:30.016774 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Apr 16 00:48:30.016970 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 16 00:48:30.017203 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 16 00:48:30.017374 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Apr 16 00:48:30.018606 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 16 00:48:30.018810 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Apr 16 00:48:30.019033 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 16 00:48:30.019240 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Apr 16 00:48:30.019440 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 16 00:48:30.020689 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Apr 16 00:48:30.020917 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 16 00:48:30.021140 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Apr 16 00:48:30.021335 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 16 00:48:30.021504 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Apr 16 00:48:30.023723 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 16 00:48:30.023911 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Apr 16 00:48:30.024112 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 16 00:48:30.024318 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Apr 16 00:48:30.024502 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Apr 16 00:48:30.028384 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 16 00:48:30.028640 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Apr 16 00:48:30.028812 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Apr 16 00:48:30.028980 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Apr 16 00:48:30.029184 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Apr 16 00:48:30.029354 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Apr 16 00:48:30.029555 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Apr 16 00:48:30.029743 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Apr 16 00:48:30.029962 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 16 00:48:30.030143 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 16 00:48:30.030328 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 16 00:48:30.030515 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Apr 16 00:48:30.034011 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Apr 16 00:48:30.034264 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 16 00:48:30.034459 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 16 00:48:30.034724 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Apr 16 00:48:30.034897 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Apr 16 00:48:30.035078 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Apr 16 00:48:30.035257 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Apr 16 00:48:30.035423 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Apr 16 00:48:30.037660 kernel: pci_bus 0000:02: extended config space not accessible
Apr 16 00:48:30.037859 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Apr 16 00:48:30.038042 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Apr 16 00:48:30.038246 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Apr 16 00:48:30.038430 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Apr 16 00:48:30.038647 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 16 00:48:30.038821 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Apr 16 00:48:30.038990 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Apr 16 00:48:30.039168 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Apr 16 00:48:30.039338 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Apr 16 00:48:30.039522 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 16 00:48:30.043850 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Apr 16 00:48:30.044032 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Apr 16 00:48:30.044226 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Apr 16 00:48:30.044402 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Apr 16 00:48:30.046193 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Apr 16 00:48:30.046368 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Apr 16 00:48:30.047628 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Apr 16 00:48:30.047854 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Apr 16 00:48:30.048037 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Apr 16 00:48:30.048234 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Apr 16 00:48:30.048423 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Apr 16 00:48:30.048660 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Apr 16 00:48:30.048843 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Apr 16 00:48:30.049032 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Apr 16 00:48:30.049213 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Apr 16 00:48:30.049381 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Apr 16 00:48:30.051655 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Apr 16 00:48:30.051868 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Apr 16 00:48:30.052057 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Apr 16 00:48:30.052077 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 16 00:48:30.052090 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 16 00:48:30.052103 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 16 00:48:30.052116 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 16 00:48:30.052141 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 16 00:48:30.052163 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 16 00:48:30.052176 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 16 00:48:30.052188 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 16 00:48:30.052200 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 16 00:48:30.052213 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 16 00:48:30.052225 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 16 00:48:30.052237 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 16 00:48:30.052249 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 16 00:48:30.052262 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 16 00:48:30.052279 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 16 00:48:30.052292 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 16 00:48:30.052304 kernel: iommu: Default domain type: Translated
Apr 16 00:48:30.052316 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 16 00:48:30.052329 kernel: PCI: Using ACPI for IRQ routing
Apr 16 00:48:30.052341 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 16 00:48:30.052354 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 16 00:48:30.052366 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Apr 16 00:48:30.052563 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 16 00:48:30.053851 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 16 00:48:30.054021 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 16 00:48:30.054041 kernel: vgaarb: loaded
Apr 16 00:48:30.054054 kernel: clocksource: Switched to clocksource kvm-clock
Apr 16 00:48:30.054066 kernel: VFS: Disk quotas dquot_6.6.0
Apr 16 00:48:30.054079 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 16 00:48:30.054092 kernel: pnp: PnP ACPI init
Apr 16 00:48:30.054296 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 16 00:48:30.054325 kernel: pnp: PnP ACPI: found 5 devices
Apr 16 00:48:30.054339 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 16 00:48:30.054351 kernel: NET: Registered PF_INET protocol family
Apr 16 00:48:30.054364 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 16 00:48:30.054377 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 16 00:48:30.054389 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 16 00:48:30.054402 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 16 00:48:30.054421 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 16 00:48:30.054440 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 16 00:48:30.054452 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 16 00:48:30.054465 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 16 00:48:30.054478 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 16 00:48:30.054502 kernel: NET: Registered PF_XDP protocol family
Apr 16 00:48:30.055707 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Apr 16 00:48:30.055886 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 16 00:48:30.056070 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 16 00:48:30.056263 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 16 00:48:30.056440 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 16 00:48:30.057668 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 16 00:48:30.057841 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 16 00:48:30.058007 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 16 00:48:30.058188 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Apr 16 00:48:30.058367 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Apr 16 00:48:30.060574 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Apr 16 00:48:30.060772 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Apr 16 00:48:30.060951 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Apr 16 00:48:30.061148 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Apr 16 00:48:30.061318 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Apr 16 00:48:30.061483 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Apr 16 00:48:30.061677 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Apr 16 00:48:30.061884 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Apr 16 00:48:30.062051 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Apr 16 00:48:30.062230 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 16 00:48:30.062395 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Apr 16 00:48:30.064593 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Apr 16 00:48:30.064775 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Apr 16 00:48:30.064948 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 16 00:48:30.065121 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Apr 16 00:48:30.065311 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Apr 16 00:48:30.065516 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Apr 16 00:48:30.065750 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 16 00:48:30.065935 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Apr 16 00:48:30.066136 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Apr 16 00:48:30.066304 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Apr 16 00:48:30.066469 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 16 00:48:30.068688 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Apr 16 00:48:30.068862 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Apr 16 00:48:30.069041 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Apr 16 00:48:30.069232 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 16 00:48:30.069398 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Apr 16 00:48:30.069584 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Apr 16 00:48:30.069763 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Apr 16 00:48:30.069939 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 16 00:48:30.070117 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Apr 16 00:48:30.070318 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Apr 16 00:48:30.070498 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Apr 16 00:48:30.071721 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 16 00:48:30.071894 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Apr 16 00:48:30.072071 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Apr 16 00:48:30.072271 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Apr 16 00:48:30.072437 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 16 00:48:30.072634 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Apr 16 00:48:30.072800 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Apr 16 00:48:30.072961 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 16 00:48:30.073148 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 16 00:48:30.073300 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 16 00:48:30.073460 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Apr 16 00:48:30.079846 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 16 00:48:30.080065 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Apr 16 00:48:30.080288 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Apr 16 00:48:30.080461 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Apr 16 00:48:30.080674 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Apr 16 00:48:30.080880 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Apr 16 00:48:30.081100 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Apr 16 00:48:30.081317 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Apr 16 00:48:30.081484 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Apr 16 00:48:30.081697 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Apr 16 00:48:30.081860 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Apr 16 00:48:30.082022 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Apr 16 00:48:30.082221 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Apr 16 00:48:30.082385 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Apr 16 00:48:30.082577 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Apr 16 00:48:30.082773 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Apr 16 00:48:30.082937 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Apr 16 00:48:30.083099 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Apr 16 00:48:30.083285 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Apr 16 00:48:30.083463 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Apr 16 00:48:30.083767 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Apr 16 00:48:30.083951 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Apr 16 00:48:30.084117 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Apr 16 00:48:30.084289 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Apr 16 00:48:30.084475 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Apr 16 00:48:30.084659 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Apr 16 00:48:30.084817 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Apr 16 00:48:30.084846 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 16 00:48:30.084860 kernel: PCI: CLS 0 bytes, default 64
Apr 16 00:48:30.084880 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 16 00:48:30.084894 kernel: software IO TLB: mapped [mem 
0x0000000079800000-0x000000007d800000] (64MB) Apr 16 00:48:30.084907 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 16 00:48:30.084920 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Apr 16 00:48:30.084943 kernel: Initialise system trusted keyrings Apr 16 00:48:30.084956 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Apr 16 00:48:30.084969 kernel: Key type asymmetric registered Apr 16 00:48:30.084989 kernel: Asymmetric key parser 'x509' registered Apr 16 00:48:30.085002 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 16 00:48:30.085015 kernel: io scheduler mq-deadline registered Apr 16 00:48:30.085028 kernel: io scheduler kyber registered Apr 16 00:48:30.085041 kernel: io scheduler bfq registered Apr 16 00:48:30.085243 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Apr 16 00:48:30.085417 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Apr 16 00:48:30.089909 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 00:48:30.090189 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Apr 16 00:48:30.090362 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Apr 16 00:48:30.090588 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 00:48:30.090794 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Apr 16 00:48:30.090981 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Apr 16 00:48:30.091169 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 00:48:30.091353 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Apr 16 00:48:30.091561 kernel: pcieport 0000:00:02.3: AER: enabled 
with IRQ 27 Apr 16 00:48:30.091742 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 00:48:30.091940 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Apr 16 00:48:30.092109 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Apr 16 00:48:30.092299 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 00:48:30.092485 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Apr 16 00:48:30.092705 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Apr 16 00:48:30.092932 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 00:48:30.093107 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Apr 16 00:48:30.093292 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Apr 16 00:48:30.093500 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 00:48:30.096785 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Apr 16 00:48:30.097011 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Apr 16 00:48:30.097201 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 16 00:48:30.097223 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 16 00:48:30.097239 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 16 00:48:30.097253 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Apr 16 00:48:30.097267 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 16 00:48:30.097295 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 16 00:48:30.097309 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 
0x60,0x64 irq 1,12 Apr 16 00:48:30.097322 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 16 00:48:30.097335 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 16 00:48:30.097348 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 16 00:48:30.097589 kernel: rtc_cmos 00:03: RTC can wake from S4 Apr 16 00:48:30.097778 kernel: rtc_cmos 00:03: registered as rtc0 Apr 16 00:48:30.097956 kernel: rtc_cmos 00:03: setting system clock to 2026-04-16T00:48:29 UTC (1776300509) Apr 16 00:48:30.098132 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Apr 16 00:48:30.098153 kernel: intel_pstate: CPU model not supported Apr 16 00:48:30.098174 kernel: NET: Registered PF_INET6 protocol family Apr 16 00:48:30.098188 kernel: Segment Routing with IPv6 Apr 16 00:48:30.098201 kernel: In-situ OAM (IOAM) with IPv6 Apr 16 00:48:30.098214 kernel: NET: Registered PF_PACKET protocol family Apr 16 00:48:30.098227 kernel: Key type dns_resolver registered Apr 16 00:48:30.098240 kernel: IPI shorthand broadcast: enabled Apr 16 00:48:30.098253 kernel: sched_clock: Marking stable (1230003706, 223063548)->(1575131836, -122064582) Apr 16 00:48:30.098272 kernel: registered taskstats version 1 Apr 16 00:48:30.098285 kernel: Loading compiled-in X.509 certificates Apr 16 00:48:30.098299 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 6e6d886174c86dc730e1b14e46a1dab518d9b090' Apr 16 00:48:30.098311 kernel: Key type .fscrypt registered Apr 16 00:48:30.098324 kernel: Key type fscrypt-provisioning registered Apr 16 00:48:30.098337 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 16 00:48:30.098350 kernel: ima: Allocated hash algorithm: sha1 Apr 16 00:48:30.098363 kernel: ima: No architecture policies found Apr 16 00:48:30.098376 kernel: clk: Disabling unused clocks Apr 16 00:48:30.098396 kernel: Freeing unused kernel image (initmem) memory: 42896K Apr 16 00:48:30.098416 kernel: Write protecting the kernel read-only data: 36864k Apr 16 00:48:30.098429 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 16 00:48:30.098442 kernel: Run /init as init process Apr 16 00:48:30.098455 kernel: with arguments: Apr 16 00:48:30.098477 kernel: /init Apr 16 00:48:30.098489 kernel: with environment: Apr 16 00:48:30.098502 kernel: HOME=/ Apr 16 00:48:30.098514 kernel: TERM=linux Apr 16 00:48:30.100638 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 16 00:48:30.100665 systemd[1]: Detected virtualization kvm. Apr 16 00:48:30.100679 systemd[1]: Detected architecture x86-64. Apr 16 00:48:30.100693 systemd[1]: Running in initrd. Apr 16 00:48:30.100713 systemd[1]: No hostname configured, using default hostname. Apr 16 00:48:30.100727 systemd[1]: Hostname set to . Apr 16 00:48:30.100741 systemd[1]: Initializing machine ID from VM UUID. Apr 16 00:48:30.100766 systemd[1]: Queued start job for default target initrd.target. Apr 16 00:48:30.100781 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 16 00:48:30.100795 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 16 00:48:30.100811 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Apr 16 00:48:30.100825 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 16 00:48:30.100840 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 16 00:48:30.100854 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 16 00:48:30.100877 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 16 00:48:30.100892 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 16 00:48:30.100906 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 16 00:48:30.100920 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 16 00:48:30.100934 systemd[1]: Reached target paths.target - Path Units. Apr 16 00:48:30.100949 systemd[1]: Reached target slices.target - Slice Units. Apr 16 00:48:30.100971 systemd[1]: Reached target swap.target - Swaps. Apr 16 00:48:30.100985 systemd[1]: Reached target timers.target - Timer Units. Apr 16 00:48:30.101004 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 16 00:48:30.101019 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 16 00:48:30.101034 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 16 00:48:30.101048 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 16 00:48:30.101063 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 16 00:48:30.101077 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 16 00:48:30.101091 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 16 00:48:30.101106 systemd[1]: Reached target sockets.target - Socket Units. 
Apr 16 00:48:30.101120 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 16 00:48:30.101161 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 16 00:48:30.101175 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 16 00:48:30.101190 systemd[1]: Starting systemd-fsck-usr.service... Apr 16 00:48:30.101209 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 16 00:48:30.101224 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 16 00:48:30.101323 systemd-journald[202]: Collecting audit messages is disabled. Apr 16 00:48:30.101362 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 00:48:30.101377 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 16 00:48:30.101391 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 16 00:48:30.101405 systemd[1]: Finished systemd-fsck-usr.service. Apr 16 00:48:30.101426 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 16 00:48:30.101441 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 16 00:48:30.101463 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 16 00:48:30.101477 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 16 00:48:30.101493 systemd-journald[202]: Journal started Apr 16 00:48:30.101526 systemd-journald[202]: Runtime Journal (/run/log/journal/064023a793804fc7accb440a687fa9b8) is 4.7M, max 38.0M, 33.2M free. Apr 16 00:48:30.046623 systemd-modules-load[203]: Inserted module 'overlay' Apr 16 00:48:30.129168 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Apr 16 00:48:30.129212 kernel: Bridge firewalling registered Apr 16 00:48:30.129231 systemd[1]: Started systemd-journald.service - Journal Service. Apr 16 00:48:30.108311 systemd-modules-load[203]: Inserted module 'br_netfilter' Apr 16 00:48:30.130068 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 16 00:48:30.137132 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 00:48:30.145923 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 16 00:48:30.149227 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 16 00:48:30.150812 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 16 00:48:30.176079 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 16 00:48:30.177256 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 16 00:48:30.185801 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 16 00:48:30.186854 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 16 00:48:30.198196 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 16 00:48:30.220049 dracut-cmdline[234]: dracut-dracut-053 Apr 16 00:48:30.224713 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=27643dbc59f658eac8bb37add3a8b4ed010a3c31134319f01549aa493a1f070c Apr 16 00:48:30.240813 systemd-resolved[236]: Positive Trust Anchors: Apr 16 00:48:30.240879 systemd-resolved[236]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 16 00:48:30.240928 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 16 00:48:30.245866 systemd-resolved[236]: Defaulting to hostname 'linux'. Apr 16 00:48:30.248801 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 16 00:48:30.249902 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 16 00:48:30.335590 kernel: SCSI subsystem initialized Apr 16 00:48:30.349571 kernel: Loading iSCSI transport class v2.0-870. Apr 16 00:48:30.360566 kernel: iscsi: registered transport (tcp) Apr 16 00:48:30.385565 kernel: iscsi: registered transport (qla4xxx) Apr 16 00:48:30.387558 kernel: QLogic iSCSI HBA Driver Apr 16 00:48:30.441944 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 16 00:48:30.450844 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 16 00:48:30.483164 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 16 00:48:30.483236 kernel: device-mapper: uevent: version 1.0.3 Apr 16 00:48:30.485434 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 16 00:48:30.532583 kernel: raid6: sse2x4 gen() 13962 MB/s Apr 16 00:48:30.550627 kernel: raid6: sse2x2 gen() 9320 MB/s Apr 16 00:48:30.569143 kernel: raid6: sse2x1 gen() 9848 MB/s Apr 16 00:48:30.569185 kernel: raid6: using algorithm sse2x4 gen() 13962 MB/s Apr 16 00:48:30.588124 kernel: raid6: .... xor() 7694 MB/s, rmw enabled Apr 16 00:48:30.588180 kernel: raid6: using ssse3x2 recovery algorithm Apr 16 00:48:30.614625 kernel: xor: automatically using best checksumming function avx Apr 16 00:48:30.800582 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 16 00:48:30.815226 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 16 00:48:30.822722 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 00:48:30.844412 systemd-udevd[419]: Using default interface naming scheme 'v255'. Apr 16 00:48:30.851417 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 16 00:48:30.858723 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 16 00:48:30.889225 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation Apr 16 00:48:30.930871 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 16 00:48:30.937884 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 16 00:48:31.054496 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 16 00:48:31.064900 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 16 00:48:31.091181 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 16 00:48:31.093523 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Apr 16 00:48:31.096765 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 16 00:48:31.097510 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 16 00:48:31.106731 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 16 00:48:31.132991 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 16 00:48:31.176556 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Apr 16 00:48:31.189731 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Apr 16 00:48:31.197571 kernel: cryptd: max_cpu_qlen set to 1000 Apr 16 00:48:31.213727 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 16 00:48:31.213787 kernel: GPT:17805311 != 125829119 Apr 16 00:48:31.213807 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 16 00:48:31.213823 kernel: GPT:17805311 != 125829119 Apr 16 00:48:31.213839 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 16 00:48:31.213855 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 16 00:48:31.235143 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 16 00:48:31.236789 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 16 00:48:31.240805 kernel: AVX version of gcm_enc/dec engaged. Apr 16 00:48:31.240860 kernel: AES CTR mode by8 optimization enabled Apr 16 00:48:31.239888 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 16 00:48:31.244862 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 16 00:48:31.250619 kernel: ACPI: bus type USB registered Apr 16 00:48:31.245035 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 00:48:31.254800 kernel: usbcore: registered new interface driver usbfs Apr 16 00:48:31.245777 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 16 00:48:31.258554 kernel: usbcore: registered new interface driver hub Apr 16 00:48:31.260968 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 00:48:31.273179 kernel: libata version 3.00 loaded. Apr 16 00:48:31.273207 kernel: usbcore: registered new device driver usb Apr 16 00:48:31.281297 kernel: ahci 0000:00:1f.2: version 3.0 Apr 16 00:48:31.281867 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 16 00:48:31.291600 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 16 00:48:31.291895 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 16 00:48:31.305589 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Apr 16 00:48:31.309423 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Apr 16 00:48:31.309723 kernel: scsi host0: ahci Apr 16 00:48:31.309775 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Apr 16 00:48:31.314242 kernel: scsi host1: ahci Apr 16 00:48:31.314305 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Apr 16 00:48:31.319578 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Apr 16 00:48:31.319827 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Apr 16 00:48:31.320057 kernel: hub 1-0:1.0: USB hub found Apr 16 00:48:31.320301 kernel: hub 1-0:1.0: 4 ports detected Apr 16 00:48:31.323276 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Apr 16 00:48:31.323599 kernel: hub 2-0:1.0: USB hub found Apr 16 00:48:31.323913 kernel: hub 2-0:1.0: 4 ports detected Apr 16 00:48:31.337880 kernel: scsi host2: ahci Apr 16 00:48:31.338162 kernel: scsi host3: ahci Apr 16 00:48:31.340554 kernel: scsi host4: ahci Apr 16 00:48:31.348842 kernel: scsi host5: ahci Apr 16 00:48:31.349127 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Apr 16 00:48:31.349151 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Apr 16 00:48:31.349169 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Apr 16 00:48:31.349185 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Apr 16 00:48:31.349201 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Apr 16 00:48:31.349217 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Apr 16 00:48:31.349566 kernel: BTRFS: device fsid 936fcbd8-a8ab-4e87-b115-d77c7a08e984 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (478) Apr 16 00:48:31.353552 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (469) Apr 16 00:48:31.371684 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 16 00:48:31.446056 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 00:48:31.452929 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 16 00:48:31.453824 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 16 00:48:31.461861 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 16 00:48:31.473283 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Apr 16 00:48:31.479724 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 16 00:48:31.482737 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 16 00:48:31.490759 disk-uuid[565]: Primary Header is updated. Apr 16 00:48:31.490759 disk-uuid[565]: Secondary Entries is updated. Apr 16 00:48:31.490759 disk-uuid[565]: Secondary Header is updated. Apr 16 00:48:31.499618 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 16 00:48:31.508973 kernel: GPT:disk_guids don't match. Apr 16 00:48:31.509010 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 16 00:48:31.509029 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 16 00:48:31.519902 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 16 00:48:31.523270 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 16 00:48:31.560298 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 16 00:48:31.660180 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 16 00:48:31.660256 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 16 00:48:31.664392 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 16 00:48:31.664429 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 16 00:48:31.666722 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 16 00:48:31.666764 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 16 00:48:31.719555 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 16 00:48:31.726475 kernel: usbcore: registered new interface driver usbhid Apr 16 00:48:31.726522 kernel: usbhid: USB HID core driver Apr 16 00:48:31.734581 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Apr 16 00:48:31.734635 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Apr 16 00:48:32.517817 
kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 16 00:48:32.518800 disk-uuid[566]: The operation has completed successfully. Apr 16 00:48:32.577472 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 16 00:48:32.577705 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 16 00:48:32.596879 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 16 00:48:32.602399 sh[587]: Success Apr 16 00:48:32.617562 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Apr 16 00:48:32.685942 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 16 00:48:32.687986 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 16 00:48:32.690602 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 16 00:48:32.723582 kernel: BTRFS info (device dm-0): first mount of filesystem 936fcbd8-a8ab-4e87-b115-d77c7a08e984 Apr 16 00:48:32.723648 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 16 00:48:32.723667 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 16 00:48:32.724717 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 16 00:48:32.727266 kernel: BTRFS info (device dm-0): using free space tree Apr 16 00:48:32.737260 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 16 00:48:32.739594 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 16 00:48:32.752869 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 16 00:48:32.756722 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Apr 16 00:48:32.774140 kernel: BTRFS info (device vda6): first mount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c Apr 16 00:48:32.778616 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 16 00:48:32.778653 kernel: BTRFS info (device vda6): using free space tree Apr 16 00:48:32.784575 kernel: BTRFS info (device vda6): auto enabling async discard Apr 16 00:48:32.801083 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 16 00:48:32.802005 kernel: BTRFS info (device vda6): last unmount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c Apr 16 00:48:32.809337 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 16 00:48:32.815732 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 16 00:48:32.953165 ignition[681]: Ignition 2.19.0 Apr 16 00:48:32.953191 ignition[681]: Stage: fetch-offline Apr 16 00:48:32.953290 ignition[681]: no configs at "/usr/lib/ignition/base.d" Apr 16 00:48:32.956809 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 16 00:48:32.953316 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Apr 16 00:48:32.953510 ignition[681]: parsed url from cmdline: "" Apr 16 00:48:32.953519 ignition[681]: no config URL provided Apr 16 00:48:32.953541 ignition[681]: reading system config file "/usr/lib/ignition/user.ign" Apr 16 00:48:32.953561 ignition[681]: no config at "/usr/lib/ignition/user.ign" Apr 16 00:48:32.953585 ignition[681]: failed to fetch config: resource requires networking Apr 16 00:48:32.963285 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 16 00:48:32.953946 ignition[681]: Ignition finished successfully Apr 16 00:48:32.971766 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Apr 16 00:48:33.004656 systemd-networkd[774]: lo: Link UP Apr 16 00:48:33.004667 systemd-networkd[774]: lo: Gained carrier Apr 16 00:48:33.008835 systemd-networkd[774]: Enumeration completed Apr 16 00:48:33.009441 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 00:48:33.009446 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 16 00:48:33.009646 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 16 00:48:33.011931 systemd[1]: Reached target network.target - Network. Apr 16 00:48:33.012283 systemd-networkd[774]: eth0: Link UP Apr 16 00:48:33.012292 systemd-networkd[774]: eth0: Gained carrier Apr 16 00:48:33.012312 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 00:48:33.019697 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 16 00:48:33.040681 systemd-networkd[774]: eth0: DHCPv4 address 10.230.47.154/30, gateway 10.230.47.153 acquired from 10.230.47.153 Apr 16 00:48:33.043606 ignition[776]: Ignition 2.19.0 Apr 16 00:48:33.043617 ignition[776]: Stage: fetch Apr 16 00:48:33.043857 ignition[776]: no configs at "/usr/lib/ignition/base.d" Apr 16 00:48:33.043876 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Apr 16 00:48:33.044043 ignition[776]: parsed url from cmdline: "" Apr 16 00:48:33.044049 ignition[776]: no config URL provided Apr 16 00:48:33.044070 ignition[776]: reading system config file "/usr/lib/ignition/user.ign" Apr 16 00:48:33.044098 ignition[776]: no config at "/usr/lib/ignition/user.ign" Apr 16 00:48:33.044248 ignition[776]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Apr 16 00:48:33.044294 ignition[776]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Apr 16 00:48:33.044433 ignition[776]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Apr 16 00:48:33.044746 ignition[776]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 16 00:48:33.245501 ignition[776]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2
Apr 16 00:48:33.259798 ignition[776]: GET result: OK
Apr 16 00:48:33.259968 ignition[776]: parsing config with SHA512: bb57827d407ca1b04561be26eec6e8b64954d256a33ff790b724b2f3af76ec4988799595729218c65666504546dcf256c5c12b20d98c1a0726182368bc857119
Apr 16 00:48:33.266136 unknown[776]: fetched base config from "system"
Apr 16 00:48:33.266154 unknown[776]: fetched base config from "system"
Apr 16 00:48:33.266919 ignition[776]: fetch: fetch complete
Apr 16 00:48:33.266163 unknown[776]: fetched user config from "openstack"
Apr 16 00:48:33.266927 ignition[776]: fetch: fetch passed
Apr 16 00:48:33.269215 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 16 00:48:33.266988 ignition[776]: Ignition finished successfully
Apr 16 00:48:33.280791 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 16 00:48:33.299490 ignition[784]: Ignition 2.19.0
Apr 16 00:48:33.299503 ignition[784]: Stage: kargs
Apr 16 00:48:33.299779 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Apr 16 00:48:33.303349 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 16 00:48:33.299798 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 16 00:48:33.301214 ignition[784]: kargs: kargs passed
Apr 16 00:48:33.301287 ignition[784]: Ignition finished successfully
Apr 16 00:48:33.310754 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 16 00:48:33.332939 ignition[790]: Ignition 2.19.0
Apr 16 00:48:33.332959 ignition[790]: Stage: disks
Apr 16 00:48:33.333206 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Apr 16 00:48:33.333225 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 16 00:48:33.335872 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 16 00:48:33.334330 ignition[790]: disks: disks passed
Apr 16 00:48:33.337312 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 16 00:48:33.334397 ignition[790]: Ignition finished successfully
Apr 16 00:48:33.338356 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 16 00:48:33.339797 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 00:48:33.340948 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 00:48:33.342376 systemd[1]: Reached target basic.target - Basic System.
Apr 16 00:48:33.349761 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 16 00:48:33.369490 systemd-fsck[798]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 16 00:48:33.373413 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 16 00:48:33.379658 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 16 00:48:33.497558 kernel: EXT4-fs (vda9): mounted filesystem 9ac74074-8829-477f-a4c4-5563740ec49b r/w with ordered data mode. Quota mode: none.
Apr 16 00:48:33.498167 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 16 00:48:33.499540 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 16 00:48:33.506658 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 00:48:33.513770 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 16 00:48:33.515773 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 16 00:48:33.517517 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Apr 16 00:48:33.518923 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 16 00:48:33.518964 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 00:48:33.521692 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 16 00:48:33.530689 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 16 00:48:33.534597 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (806)
Apr 16 00:48:33.541381 kernel: BTRFS info (device vda6): first mount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 00:48:33.541428 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 00:48:33.541469 kernel: BTRFS info (device vda6): using free space tree
Apr 16 00:48:33.547557 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 16 00:48:33.550475 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 00:48:33.614500 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Apr 16 00:48:33.625163 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Apr 16 00:48:33.631699 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Apr 16 00:48:33.640821 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 16 00:48:33.746729 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 16 00:48:33.753676 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 16 00:48:33.757714 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 16 00:48:33.768369 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 16 00:48:33.770716 kernel: BTRFS info (device vda6): last unmount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 00:48:33.799811 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 16 00:48:33.805189 ignition[922]: INFO : Ignition 2.19.0
Apr 16 00:48:33.805189 ignition[922]: INFO : Stage: mount
Apr 16 00:48:33.807602 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 00:48:33.807602 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 16 00:48:33.807602 ignition[922]: INFO : mount: mount passed
Apr 16 00:48:33.807602 ignition[922]: INFO : Ignition finished successfully
Apr 16 00:48:33.808690 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 16 00:48:34.339855 systemd-networkd[774]: eth0: Gained IPv6LL
Apr 16 00:48:36.909419 systemd-networkd[774]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8be6:24:19ff:fee6:2f9a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8be6:24:19ff:fee6:2f9a/64 assigned by NDisc.
Apr 16 00:48:36.909437 systemd-networkd[774]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Apr 16 00:48:40.677971 coreos-metadata[808]: Apr 16 00:48:40.677 WARN failed to locate config-drive, using the metadata service API instead
Apr 16 00:48:40.700784 coreos-metadata[808]: Apr 16 00:48:40.700 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Apr 16 00:48:40.713771 coreos-metadata[808]: Apr 16 00:48:40.713 INFO Fetch successful
Apr 16 00:48:40.716087 coreos-metadata[808]: Apr 16 00:48:40.716 INFO wrote hostname srv-57yav.gb1.brightbox.com to /sysroot/etc/hostname
Apr 16 00:48:40.717130 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Apr 16 00:48:40.717303 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Apr 16 00:48:40.730781 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 16 00:48:40.747760 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 00:48:40.776578 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (939)
Apr 16 00:48:40.780570 kernel: BTRFS info (device vda6): first mount of filesystem 90718864-f2fc-45a7-9234-85fc9574bf9c
Apr 16 00:48:40.783895 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 16 00:48:40.783935 kernel: BTRFS info (device vda6): using free space tree
Apr 16 00:48:40.789580 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 16 00:48:40.792577 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 00:48:40.828830 ignition[957]: INFO : Ignition 2.19.0
Apr 16 00:48:40.828830 ignition[957]: INFO : Stage: files
Apr 16 00:48:40.830681 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 00:48:40.830681 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 16 00:48:40.830681 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Apr 16 00:48:40.834508 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 16 00:48:40.834508 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 16 00:48:40.837155 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 16 00:48:40.838295 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 16 00:48:40.840058 unknown[957]: wrote ssh authorized keys file for user: core
Apr 16 00:48:40.841222 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 16 00:48:40.843795 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 16 00:48:40.845181 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 16 00:48:41.041972 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 16 00:48:41.363620 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 16 00:48:41.363620 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 16 00:48:41.363620 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 16 00:48:41.363620 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 00:48:41.363620 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 00:48:41.363620 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 00:48:41.363620 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 00:48:41.363620 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 00:48:41.378221 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 00:48:41.378221 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 00:48:41.378221 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 00:48:41.378221 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 16 00:48:41.378221 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 16 00:48:41.378221 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 16 00:48:41.378221 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 16 00:48:41.859332 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 16 00:48:44.950427 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 16 00:48:44.950427 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 16 00:48:44.957973 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 00:48:44.959381 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 00:48:44.959381 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 16 00:48:44.959381 ignition[957]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 16 00:48:44.959381 ignition[957]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 16 00:48:44.959381 ignition[957]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 00:48:44.966830 ignition[957]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 00:48:44.966830 ignition[957]: INFO : files: files passed
Apr 16 00:48:44.966830 ignition[957]: INFO : Ignition finished successfully
Apr 16 00:48:44.963695 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 16 00:48:44.972843 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 16 00:48:44.976807 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 16 00:48:44.993693 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 16 00:48:44.993881 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 16 00:48:45.004219 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 00:48:45.004219 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 00:48:45.008183 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 00:48:45.009090 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 00:48:45.010997 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 16 00:48:45.026854 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 16 00:48:45.062989 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 16 00:48:45.063180 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 16 00:48:45.065213 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 16 00:48:45.066333 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 16 00:48:45.067845 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 16 00:48:45.074801 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 16 00:48:45.094507 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 00:48:45.103781 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 16 00:48:45.116503 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 16 00:48:45.118380 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 00:48:45.120298 systemd[1]: Stopped target timers.target - Timer Units.
Apr 16 00:48:45.121738 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 16 00:48:45.121940 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 00:48:45.123375 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 16 00:48:45.124343 systemd[1]: Stopped target basic.target - Basic System.
Apr 16 00:48:45.125840 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 16 00:48:45.127291 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 00:48:45.128734 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 16 00:48:45.130365 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 16 00:48:45.131934 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 00:48:45.133494 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 16 00:48:45.135001 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 16 00:48:45.136522 systemd[1]: Stopped target swap.target - Swaps.
Apr 16 00:48:45.137864 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 16 00:48:45.138032 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 00:48:45.139756 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 16 00:48:45.140848 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 00:48:45.142146 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 16 00:48:45.143626 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 00:48:45.144549 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 16 00:48:45.144714 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 16 00:48:45.146579 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 16 00:48:45.146801 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 00:48:45.147849 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 16 00:48:45.148065 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 16 00:48:45.155896 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 16 00:48:45.157574 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 16 00:48:45.163599 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 16 00:48:45.163817 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 00:48:45.167661 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 16 00:48:45.168184 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 00:48:45.176462 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 16 00:48:45.176647 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 16 00:48:45.192182 ignition[1010]: INFO : Ignition 2.19.0
Apr 16 00:48:45.192182 ignition[1010]: INFO : Stage: umount
Apr 16 00:48:45.195618 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 00:48:45.195618 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Apr 16 00:48:45.195618 ignition[1010]: INFO : umount: umount passed
Apr 16 00:48:45.195618 ignition[1010]: INFO : Ignition finished successfully
Apr 16 00:48:45.197375 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 16 00:48:45.197593 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 16 00:48:45.200153 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 16 00:48:45.200700 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 16 00:48:45.200805 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 16 00:48:45.203987 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 16 00:48:45.204078 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 16 00:48:45.205393 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 16 00:48:45.205468 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 16 00:48:45.207265 systemd[1]: Stopped target network.target - Network.
Apr 16 00:48:45.208574 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 16 00:48:45.208649 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 00:48:45.209980 systemd[1]: Stopped target paths.target - Path Units.
Apr 16 00:48:45.211189 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 16 00:48:45.214800 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 00:48:45.215857 systemd[1]: Stopped target slices.target - Slice Units.
Apr 16 00:48:45.217281 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 16 00:48:45.218955 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 16 00:48:45.219018 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 00:48:45.220286 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 16 00:48:45.220349 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 00:48:45.221518 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 16 00:48:45.221605 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 16 00:48:45.222887 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 16 00:48:45.222951 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 16 00:48:45.224456 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 16 00:48:45.227630 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 16 00:48:45.229381 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 16 00:48:45.229517 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 16 00:48:45.229718 systemd-networkd[774]: eth0: DHCPv6 lease lost
Apr 16 00:48:45.232340 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 16 00:48:45.232510 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 16 00:48:45.236061 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 16 00:48:45.236257 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 16 00:48:45.238113 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 16 00:48:45.238223 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 00:48:45.247839 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 16 00:48:45.248568 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 16 00:48:45.248643 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 00:48:45.252601 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 00:48:45.256209 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 16 00:48:45.256404 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 16 00:48:45.260121 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 16 00:48:45.260384 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 00:48:45.272340 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 16 00:48:45.272453 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 16 00:48:45.275409 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 16 00:48:45.276337 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 00:48:45.277111 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 16 00:48:45.277180 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 00:48:45.279114 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 16 00:48:45.279177 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 16 00:48:45.280555 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 16 00:48:45.280633 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 00:48:45.286746 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 16 00:48:45.287506 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 16 00:48:45.287601 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 16 00:48:45.289164 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 16 00:48:45.289238 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 16 00:48:45.291543 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 16 00:48:45.291629 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 00:48:45.293157 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 16 00:48:45.293231 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 00:48:45.294690 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 16 00:48:45.294798 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 00:48:45.300491 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 16 00:48:45.300583 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 00:48:45.302072 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 00:48:45.302146 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 00:48:45.305177 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 16 00:48:45.306593 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 16 00:48:45.308940 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 16 00:48:45.309075 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 16 00:48:45.311297 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 16 00:48:45.317783 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 16 00:48:45.329191 systemd[1]: Switching root.
Apr 16 00:48:45.366363 systemd-journald[202]: Journal stopped
Apr 16 00:48:46.755787 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Apr 16 00:48:46.755920 kernel: SELinux: policy capability network_peer_controls=1
Apr 16 00:48:46.755958 kernel: SELinux: policy capability open_perms=1
Apr 16 00:48:46.755984 kernel: SELinux: policy capability extended_socket_class=1
Apr 16 00:48:46.756007 kernel: SELinux: policy capability always_check_network=0
Apr 16 00:48:46.756030 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 16 00:48:46.756049 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 16 00:48:46.756078 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 16 00:48:46.756108 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 16 00:48:46.756139 kernel: audit: type=1403 audit(1776300525.607:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 16 00:48:46.756159 systemd[1]: Successfully loaded SELinux policy in 50.760ms.
Apr 16 00:48:46.756201 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.369ms.
Apr 16 00:48:46.756229 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 16 00:48:46.756250 systemd[1]: Detected virtualization kvm.
Apr 16 00:48:46.756269 systemd[1]: Detected architecture x86-64.
Apr 16 00:48:46.756299 systemd[1]: Detected first boot.
Apr 16 00:48:46.756327 systemd[1]: Hostname set to .
Apr 16 00:48:46.756357 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 00:48:46.756382 zram_generator::config[1052]: No configuration found.
Apr 16 00:48:46.756418 systemd[1]: Populated /etc with preset unit settings.
Apr 16 00:48:46.756439 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 16 00:48:46.756480 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 16 00:48:46.756502 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 16 00:48:46.763155 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 16 00:48:46.763204 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 16 00:48:46.763236 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 16 00:48:46.763256 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 16 00:48:46.763275 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 16 00:48:46.763294 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 16 00:48:46.763314 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 16 00:48:46.763334 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 16 00:48:46.763353 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 00:48:46.763390 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 00:48:46.763411 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 16 00:48:46.763430 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 16 00:48:46.763456 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 16 00:48:46.763477 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 00:48:46.763495 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 16 00:48:46.763520 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 00:48:46.763561 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 16 00:48:46.763597 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 16 00:48:46.763627 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 16 00:48:46.763648 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 16 00:48:46.763667 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 00:48:46.763693 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 00:48:46.763712 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 00:48:46.763741 systemd[1]: Reached target swap.target - Swaps.
Apr 16 00:48:46.763781 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 16 00:48:46.763817 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 16 00:48:46.763848 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 00:48:46.763869 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 00:48:46.763894 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 00:48:46.763914 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 16 00:48:46.763943 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 16 00:48:46.763964 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 16 00:48:46.763984 systemd[1]: Mounting media.mount - External Media Directory...
Apr 16 00:48:46.764003 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 00:48:46.764037 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 16 00:48:46.764062 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 16 00:48:46.764082 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 16 00:48:46.764109 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 16 00:48:46.764140 systemd[1]: Reached target machines.target - Containers.
Apr 16 00:48:46.764175 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 16 00:48:46.764195 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 00:48:46.764215 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 00:48:46.764234 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 16 00:48:46.764253 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 00:48:46.764283 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 16 00:48:46.764303 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 00:48:46.764323 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 16 00:48:46.764356 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 00:48:46.764377 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 16 00:48:46.764396 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 16 00:48:46.764415 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 16 00:48:46.764434 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 16 00:48:46.764453 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 16 00:48:46.764486 kernel: fuse: init (API version 7.39)
Apr 16 00:48:46.764584 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 00:48:46.764613 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 00:48:46.764649 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 16 00:48:46.764671 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 16 00:48:46.764741 systemd-journald[1145]: Collecting audit messages is disabled.
Apr 16 00:48:46.764788 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 00:48:46.764809 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 16 00:48:46.764829 systemd[1]: Stopped verity-setup.service.
Apr 16 00:48:46.764848 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 00:48:46.764880 kernel: loop: module loaded
Apr 16 00:48:46.764900 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 16 00:48:46.764919 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 16 00:48:46.764938 systemd-journald[1145]: Journal started
Apr 16 00:48:46.764980 systemd-journald[1145]: Runtime Journal (/run/log/journal/064023a793804fc7accb440a687fa9b8) is 4.7M, max 38.0M, 33.2M free.
Apr 16 00:48:46.404239 systemd[1]: Queued start job for default target multi-user.target.
Apr 16 00:48:46.422042 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 16 00:48:46.422744 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 16 00:48:46.772659 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 00:48:46.773293 systemd[1]: Mounted media.mount - External Media Directory.
Apr 16 00:48:46.774892 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 16 00:48:46.775937 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 16 00:48:46.793566 kernel: ACPI: bus type drm_connector registered
Apr 16 00:48:46.797756 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 16 00:48:46.799166 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 16 00:48:46.800327 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 00:48:46.801655 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 16 00:48:46.801996 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 16 00:48:46.803232 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 00:48:46.803745 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 00:48:46.805023 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 16 00:48:46.805358 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 16 00:48:46.806719 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 00:48:46.806937 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 00:48:46.808240 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 16 00:48:46.808666 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 16 00:48:46.809851 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 00:48:46.810133 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 00:48:46.811415 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 00:48:46.812679 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 00:48:46.813805 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 16 00:48:46.831396 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 16 00:48:46.839620 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 16 00:48:46.853697 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 16 00:48:46.855180 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 16 00:48:46.855338 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 00:48:46.857567 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 16 00:48:46.865685 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 16 00:48:46.877185 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 16 00:48:46.878675 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 00:48:46.881235 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 16 00:48:46.895781 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 16 00:48:46.896860 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 00:48:46.899694 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 16 00:48:46.900696 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 00:48:46.903435 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 00:48:46.908238 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 16 00:48:46.919786 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 16 00:48:46.940157 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 16 00:48:46.942482 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 16 00:48:46.946578 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 16 00:48:46.967324 systemd-journald[1145]: Time spent on flushing to /var/log/journal/064023a793804fc7accb440a687fa9b8 is 115.249ms for 1143 entries.
Apr 16 00:48:46.967324 systemd-journald[1145]: System Journal (/var/log/journal/064023a793804fc7accb440a687fa9b8) is 8.0M, max 584.8M, 576.8M free.
Apr 16 00:48:47.109620 systemd-journald[1145]: Received client request to flush runtime journal.
Apr 16 00:48:47.109710 kernel: loop0: detected capacity change from 0 to 142488
Apr 16 00:48:47.109767 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 16 00:48:47.109801 kernel: loop1: detected capacity change from 0 to 219192
Apr 16 00:48:47.005898 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 16 00:48:47.009831 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 16 00:48:47.016870 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 16 00:48:47.068167 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 00:48:47.098763 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 00:48:47.108813 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 16 00:48:47.112114 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 16 00:48:47.120970 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 16 00:48:47.122233 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 16 00:48:47.144664 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Apr 16 00:48:47.144699 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Apr 16 00:48:47.183702 kernel: loop2: detected capacity change from 0 to 8
Apr 16 00:48:47.179410 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 00:48:47.202909 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 16 00:48:47.204122 udevadm[1201]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 16 00:48:47.208549 kernel: loop3: detected capacity change from 0 to 140768
Apr 16 00:48:47.271109 kernel: loop4: detected capacity change from 0 to 142488
Apr 16 00:48:47.295550 kernel: loop5: detected capacity change from 0 to 219192
Apr 16 00:48:47.295498 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 16 00:48:47.306871 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 00:48:47.324594 kernel: loop6: detected capacity change from 0 to 8
Apr 16 00:48:47.329564 kernel: loop7: detected capacity change from 0 to 140768
Apr 16 00:48:47.349444 systemd-tmpfiles[1212]: ACLs are not supported, ignoring.
Apr 16 00:48:47.349963 systemd-tmpfiles[1212]: ACLs are not supported, ignoring.
Apr 16 00:48:47.357374 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 00:48:47.360093 (sd-merge)[1210]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Apr 16 00:48:47.362635 (sd-merge)[1210]: Merged extensions into '/usr'.
Apr 16 00:48:47.367380 systemd[1]: Reloading requested from client PID 1185 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 16 00:48:47.367421 systemd[1]: Reloading...
Apr 16 00:48:47.521568 zram_generator::config[1240]: No configuration found.
Apr 16 00:48:47.645928 ldconfig[1180]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 16 00:48:47.777035 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 16 00:48:47.843420 systemd[1]: Reloading finished in 475 ms.
Apr 16 00:48:47.880095 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 16 00:48:47.882043 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 16 00:48:47.893796 systemd[1]: Starting ensure-sysext.service...
Apr 16 00:48:47.907776 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 00:48:47.934611 systemd[1]: Reloading requested from client PID 1296 ('systemctl') (unit ensure-sysext.service)...
Apr 16 00:48:47.934637 systemd[1]: Reloading...
Apr 16 00:48:47.937790 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 16 00:48:47.938341 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 16 00:48:47.946681 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 16 00:48:47.947110 systemd-tmpfiles[1297]: ACLs are not supported, ignoring.
Apr 16 00:48:47.947240 systemd-tmpfiles[1297]: ACLs are not supported, ignoring.
Apr 16 00:48:47.959397 systemd-tmpfiles[1297]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 00:48:47.959414 systemd-tmpfiles[1297]: Skipping /boot
Apr 16 00:48:47.980552 systemd-tmpfiles[1297]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 00:48:47.980663 systemd-tmpfiles[1297]: Skipping /boot
Apr 16 00:48:48.045230 zram_generator::config[1327]: No configuration found.
Apr 16 00:48:48.212163 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 16 00:48:48.278529 systemd[1]: Reloading finished in 343 ms.
Apr 16 00:48:48.305450 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 16 00:48:48.313161 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 00:48:48.326875 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 16 00:48:48.334752 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 16 00:48:48.338877 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 16 00:48:48.358165 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 16 00:48:48.366937 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 00:48:48.372894 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 16 00:48:48.377716 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 00:48:48.377997 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 00:48:48.379568 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 00:48:48.388895 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 00:48:48.393873 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 00:48:48.394815 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 00:48:48.394964 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 00:48:48.405859 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 16 00:48:48.408431 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 00:48:48.409315 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 00:48:48.409585 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 00:48:48.409728 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 00:48:48.415276 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 00:48:48.415589 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 00:48:48.422185 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 16 00:48:48.424733 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 00:48:48.424908 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 00:48:48.429149 systemd[1]: Finished ensure-sysext.service.
Apr 16 00:48:48.443774 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 16 00:48:48.445046 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 00:48:48.445638 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 00:48:48.448752 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 16 00:48:48.463823 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 16 00:48:48.472642 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 16 00:48:48.475092 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 16 00:48:48.475331 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 16 00:48:48.480300 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 16 00:48:48.486964 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 00:48:48.487210 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 00:48:48.490552 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 00:48:48.496100 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 16 00:48:48.502144 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 00:48:48.503632 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 00:48:48.505942 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 00:48:48.530866 augenrules[1418]: No rules
Apr 16 00:48:48.529523 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 16 00:48:48.531908 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 16 00:48:48.535238 systemd-udevd[1393]: Using default interface naming scheme 'v255'.
Apr 16 00:48:48.535647 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 16 00:48:48.570433 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 00:48:48.578735 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 16 00:48:48.726725 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 16 00:48:48.728239 systemd[1]: Reached target time-set.target - System Time Set.
Apr 16 00:48:48.738313 systemd-resolved[1390]: Positive Trust Anchors:
Apr 16 00:48:48.738332 systemd-resolved[1390]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 16 00:48:48.738379 systemd-resolved[1390]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 16 00:48:48.758554 systemd-resolved[1390]: Using system hostname 'srv-57yav.gb1.brightbox.com'.
Apr 16 00:48:48.762365 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 16 00:48:48.765139 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 16 00:48:48.769706 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 16 00:48:48.771241 systemd-networkd[1433]: lo: Link UP
Apr 16 00:48:48.771252 systemd-networkd[1433]: lo: Gained carrier
Apr 16 00:48:48.774476 systemd-networkd[1433]: Enumeration completed
Apr 16 00:48:48.774744 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 00:48:48.775116 systemd-networkd[1433]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 00:48:48.775122 systemd-networkd[1433]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 00:48:48.776467 systemd-networkd[1433]: eth0: Link UP
Apr 16 00:48:48.776473 systemd-networkd[1433]: eth0: Gained carrier
Apr 16 00:48:48.776489 systemd-networkd[1433]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 00:48:48.776742 systemd[1]: Reached target network.target - Network.
Apr 16 00:48:48.786119 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 16 00:48:48.810688 systemd-networkd[1433]: eth0: DHCPv4 address 10.230.47.154/30, gateway 10.230.47.153 acquired from 10.230.47.153
Apr 16 00:48:48.812768 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection.
Apr 16 00:48:48.834144 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1437)
Apr 16 00:48:48.854783 systemd-networkd[1433]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 00:48:48.918567 kernel: mousedev: PS/2 mouse device common for all mice
Apr 16 00:48:48.933602 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 16 00:48:48.942961 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 16 00:48:48.949769 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 16 00:48:48.953346 kernel: ACPI: button: Power Button [PWRF]
Apr 16 00:48:48.982328 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 16 00:48:49.007577 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 16 00:48:49.024073 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Apr 16 00:48:49.024132 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 16 00:48:49.024396 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 16 00:48:49.091698 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 00:48:49.265656 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 16 00:48:49.290792 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 16 00:48:49.292098 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 00:48:49.312315 lvm[1469]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 16 00:48:49.348160 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 16 00:48:49.349956 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 00:48:49.350738 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 00:48:49.351634 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 16 00:48:49.352517 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 16 00:48:49.353800 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 16 00:48:49.354682 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 16 00:48:49.355411 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 16 00:48:49.356159 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 16 00:48:49.356206 systemd[1]: Reached target paths.target - Path Units.
Apr 16 00:48:49.356888 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 00:48:49.358855 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 16 00:48:49.361622 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 16 00:48:49.367780 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 16 00:48:49.370263 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 16 00:48:49.371741 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 16 00:48:49.372520 systemd[1]: Reached target sockets.target - Socket Units.
Apr 16 00:48:49.373155 systemd[1]: Reached target basic.target - Basic System.
Apr 16 00:48:49.373871 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 16 00:48:49.373918 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 16 00:48:49.377695 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 16 00:48:49.382590 lvm[1473]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 16 00:48:49.395805 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 16 00:48:49.402771 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 16 00:48:49.406091 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 16 00:48:49.409816 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 16 00:48:49.411644 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 16 00:48:49.415550 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 16 00:48:49.420935 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 16 00:48:49.425759 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 16 00:48:49.428166 jq[1477]: false
Apr 16 00:48:49.436801 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 16 00:48:49.455335 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 16 00:48:49.456917 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 16 00:48:49.459817 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 16 00:48:49.469837 systemd[1]: Starting update-engine.service - Update Engine...
Apr 16 00:48:49.475893 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 16 00:48:49.480078 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 16 00:48:49.489200 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 16 00:48:49.489512 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 16 00:48:49.513556 update_engine[1489]: I20260416 00:48:49.510601 1489 main.cc:92] Flatcar Update Engine starting
Apr 16 00:48:49.529203 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 16 00:48:49.530624 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 16 00:48:49.546524 jq[1491]: true
Apr 16 00:48:49.547358 (ntainerd)[1500]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 16 00:48:49.556054 extend-filesystems[1478]: Found loop4
Apr 16 00:48:49.556054 extend-filesystems[1478]: Found loop5
Apr 16 00:48:49.556054 extend-filesystems[1478]: Found loop6
Apr 16 00:48:49.556054 extend-filesystems[1478]: Found loop7
Apr 16 00:48:49.556054 extend-filesystems[1478]: Found vda
Apr 16 00:48:49.556054 extend-filesystems[1478]: Found vda1
Apr 16 00:48:49.556054 extend-filesystems[1478]: Found vda2
Apr 16 00:48:49.556054 extend-filesystems[1478]: Found vda3
Apr 16 00:48:49.556054 extend-filesystems[1478]: Found usr
Apr 16 00:48:49.556054 extend-filesystems[1478]: Found vda4
Apr 16 00:48:49.590640 extend-filesystems[1478]: Found vda6
Apr 16 00:48:49.590640 extend-filesystems[1478]: Found vda7
Apr 16 00:48:49.590640 extend-filesystems[1478]: Found vda9
Apr 16 00:48:49.590640 extend-filesystems[1478]: Checking size of /dev/vda9
Apr 16 00:48:49.656610 tar[1493]: linux-amd64/LICENSE
Apr 16 00:48:49.656610 tar[1493]: linux-amd64/helm
Apr 16 00:48:49.579605 dbus-daemon[1476]: [system] SELinux support is enabled
Apr 16 00:48:49.579866 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 16 00:48:49.657485 update_engine[1489]: I20260416 00:48:49.624243 1489 update_check_scheduler.cc:74] Next update check in 6m36s
Apr 16 00:48:49.606391 dbus-daemon[1476]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1433 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 16 00:48:49.584490 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 16 00:48:49.669801 jq[1504]: true
Apr 16 00:48:49.670255 extend-filesystems[1478]: Resized partition /dev/vda9
Apr 16 00:48:49.584541 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 16 00:48:49.680717 extend-filesystems[1516]: resize2fs 1.47.1 (20-May-2024)
Apr 16 00:48:49.691771 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Apr 16 00:48:49.586723 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 16 00:48:49.586756 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 16 00:48:49.621779 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 16 00:48:49.634856 systemd[1]: Started update-engine.service - Update Engine.
Apr 16 00:48:49.641572 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 16 00:48:49.669273 systemd[1]: motdgen.service: Deactivated successfully.
Apr 16 00:48:49.670638 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 16 00:48:49.726811 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1443)
Apr 16 00:48:49.852291 systemd-logind[1485]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 16 00:48:49.852370 systemd-logind[1485]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 16 00:48:49.853526 systemd-logind[1485]: New seat seat0.
Apr 16 00:48:49.860876 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 16 00:48:49.951562 bash[1538]: Updated "/home/core/.ssh/authorized_keys"
Apr 16 00:48:49.953442 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 16 00:48:49.967877 systemd[1]: Starting sshkeys.service...
Apr 16 00:48:50.022206 systemd-networkd[1433]: eth0: Gained IPv6LL
Apr 16 00:48:50.029001 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection.
Apr 16 00:48:50.036868 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 16 00:48:50.042708 systemd[1]: Reached target network-online.target - Network is Online.
Apr 16 00:48:50.055035 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 00:48:50.059023 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Apr 16 00:48:50.061907 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 16 00:48:50.074595 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 16 00:48:50.078874 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 16 00:48:50.094218 extend-filesystems[1516]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 16 00:48:50.094218 extend-filesystems[1516]: old_desc_blocks = 1, new_desc_blocks = 8
Apr 16 00:48:50.094218 extend-filesystems[1516]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Apr 16 00:48:50.115267 extend-filesystems[1478]: Resized filesystem in /dev/vda9
Apr 16 00:48:50.101094 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 16 00:48:50.102629 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 16 00:48:50.144096 containerd[1500]: time="2026-04-16T00:48:50.143839712Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 16 00:48:50.164999 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 16 00:48:50.165782 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
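[Editor's note: the resize2fs/EXT4 messages above record the root filesystem on /dev/vda9 being grown online from 1617920 to 15121403 blocks of 4 KiB. A quick sanity check of what those block counts mean in bytes (plain arithmetic derived from the logged numbers, not additional log data):]

```python
# Block counts reported by resize2fs for /dev/vda9 in the log above.
BLOCK_SIZE = 4096  # "(4k) blocks" per the extend-filesystems output
old_blocks = 1_617_920
new_blocks = 15_121_403

old_gib = old_blocks * BLOCK_SIZE / 2**30
new_gib = new_blocks * BLOCK_SIZE / 2**30
print(f"/dev/vda9: {old_gib:.2f} GiB -> {new_gib:.2f} GiB")
# → /dev/vda9: 6.17 GiB -> 57.68 GiB
```

[This matches the usual Flatcar first-boot behavior of expanding the ROOT partition and filesystem to fill the disk; "on-line resizing required" simply means the filesystem was grown while mounted at /.]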
Apr 16 00:48:50.169734 dbus-daemon[1476]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1512 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 16 00:48:50.182272 systemd[1]: Starting polkit.service - Authorization Manager... Apr 16 00:48:50.185303 locksmithd[1513]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 16 00:48:50.193666 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 16 00:48:50.228723 polkitd[1562]: Started polkitd version 121 Apr 16 00:48:50.235481 containerd[1500]: time="2026-04-16T00:48:50.233441563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 16 00:48:50.236090 containerd[1500]: time="2026-04-16T00:48:50.236050923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 16 00:48:50.236192 containerd[1500]: time="2026-04-16T00:48:50.236170256Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 16 00:48:50.236361 containerd[1500]: time="2026-04-16T00:48:50.236323388Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 16 00:48:50.236984 containerd[1500]: time="2026-04-16T00:48:50.236759437Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 16 00:48:50.236984 containerd[1500]: time="2026-04-16T00:48:50.236786419Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Apr 16 00:48:50.237133 containerd[1500]: time="2026-04-16T00:48:50.237106886Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 16 00:48:50.237294 containerd[1500]: time="2026-04-16T00:48:50.237270642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 16 00:48:50.237753 containerd[1500]: time="2026-04-16T00:48:50.237724698Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 16 00:48:50.237840 containerd[1500]: time="2026-04-16T00:48:50.237818962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 16 00:48:50.237966 containerd[1500]: time="2026-04-16T00:48:50.237933886Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 16 00:48:50.238577 containerd[1500]: time="2026-04-16T00:48:50.238041786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 16 00:48:50.238577 containerd[1500]: time="2026-04-16T00:48:50.238222306Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 16 00:48:50.238902 containerd[1500]: time="2026-04-16T00:48:50.238877756Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 16 00:48:50.239406 containerd[1500]: time="2026-04-16T00:48:50.239108008Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 16 00:48:50.239406 containerd[1500]: time="2026-04-16T00:48:50.239134894Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 16 00:48:50.239406 containerd[1500]: time="2026-04-16T00:48:50.239272139Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 16 00:48:50.239406 containerd[1500]: time="2026-04-16T00:48:50.239351398Z" level=info msg="metadata content store policy set" policy=shared Apr 16 00:48:50.248060 containerd[1500]: time="2026-04-16T00:48:50.248019142Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 16 00:48:50.248570 containerd[1500]: time="2026-04-16T00:48:50.248480036Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 16 00:48:50.248570 containerd[1500]: time="2026-04-16T00:48:50.248515674Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 16 00:48:50.250066 containerd[1500]: time="2026-04-16T00:48:50.248750175Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 16 00:48:50.250066 containerd[1500]: time="2026-04-16T00:48:50.249194722Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 16 00:48:50.250066 containerd[1500]: time="2026-04-16T00:48:50.249400167Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 16 00:48:50.252552 containerd[1500]: time="2026-04-16T00:48:50.251027478Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Apr 16 00:48:50.252552 containerd[1500]: time="2026-04-16T00:48:50.251763851Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 16 00:48:50.252552 containerd[1500]: time="2026-04-16T00:48:50.251790228Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 16 00:48:50.252552 containerd[1500]: time="2026-04-16T00:48:50.251809767Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 16 00:48:50.252552 containerd[1500]: time="2026-04-16T00:48:50.251831413Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 16 00:48:50.252552 containerd[1500]: time="2026-04-16T00:48:50.251869813Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 16 00:48:50.252552 containerd[1500]: time="2026-04-16T00:48:50.251898489Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 16 00:48:50.252552 containerd[1500]: time="2026-04-16T00:48:50.251921038Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 16 00:48:50.252552 containerd[1500]: time="2026-04-16T00:48:50.251941802Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 16 00:48:50.252552 containerd[1500]: time="2026-04-16T00:48:50.251960769Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 16 00:48:50.252552 containerd[1500]: time="2026-04-16T00:48:50.251987133Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Apr 16 00:48:50.252552 containerd[1500]: time="2026-04-16T00:48:50.252007802Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 16 00:48:50.252552 containerd[1500]: time="2026-04-16T00:48:50.252043375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 16 00:48:50.252552 containerd[1500]: time="2026-04-16T00:48:50.252065916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 16 00:48:50.253033 containerd[1500]: time="2026-04-16T00:48:50.252084478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 16 00:48:50.253033 containerd[1500]: time="2026-04-16T00:48:50.252109476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 16 00:48:50.253033 containerd[1500]: time="2026-04-16T00:48:50.252134596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 16 00:48:50.253033 containerd[1500]: time="2026-04-16T00:48:50.252160895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 16 00:48:50.253033 containerd[1500]: time="2026-04-16T00:48:50.252193191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 16 00:48:50.253033 containerd[1500]: time="2026-04-16T00:48:50.252224007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 16 00:48:50.253033 containerd[1500]: time="2026-04-16T00:48:50.252244088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 16 00:48:50.253033 containerd[1500]: time="2026-04-16T00:48:50.252275889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Apr 16 00:48:50.253033 containerd[1500]: time="2026-04-16T00:48:50.252315913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 16 00:48:50.253033 containerd[1500]: time="2026-04-16T00:48:50.252338649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 16 00:48:50.253033 containerd[1500]: time="2026-04-16T00:48:50.252357678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 16 00:48:50.253033 containerd[1500]: time="2026-04-16T00:48:50.252408759Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 16 00:48:50.253033 containerd[1500]: time="2026-04-16T00:48:50.252467733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 16 00:48:50.253033 containerd[1500]: time="2026-04-16T00:48:50.252491211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 16 00:48:50.253033 containerd[1500]: time="2026-04-16T00:48:50.252515213Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 16 00:48:50.257543 containerd[1500]: time="2026-04-16T00:48:50.254659046Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 16 00:48:50.257543 containerd[1500]: time="2026-04-16T00:48:50.254798001Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 16 00:48:50.257543 containerd[1500]: time="2026-04-16T00:48:50.254820204Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Apr 16 00:48:50.257543 containerd[1500]: time="2026-04-16T00:48:50.254839521Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 16 00:48:50.257543 containerd[1500]: time="2026-04-16T00:48:50.254855306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 16 00:48:50.257543 containerd[1500]: time="2026-04-16T00:48:50.254874884Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 16 00:48:50.257543 containerd[1500]: time="2026-04-16T00:48:50.254895845Z" level=info msg="NRI interface is disabled by configuration." Apr 16 00:48:50.257543 containerd[1500]: time="2026-04-16T00:48:50.254915728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 16 00:48:50.257938 containerd[1500]: time="2026-04-16T00:48:50.255335784Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 16 00:48:50.257938 containerd[1500]: time="2026-04-16T00:48:50.255447905Z" level=info msg="Connect containerd service" Apr 16 00:48:50.257938 containerd[1500]: time="2026-04-16T00:48:50.255544477Z" level=info msg="using legacy CRI server" Apr 16 00:48:50.257938 containerd[1500]: time="2026-04-16T00:48:50.255561343Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 16 00:48:50.257938 containerd[1500]: 
time="2026-04-16T00:48:50.255726853Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 16 00:48:50.259991 containerd[1500]: time="2026-04-16T00:48:50.259960036Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 16 00:48:50.260884 containerd[1500]: time="2026-04-16T00:48:50.260230624Z" level=info msg="Start subscribing containerd event" Apr 16 00:48:50.260884 containerd[1500]: time="2026-04-16T00:48:50.260349934Z" level=info msg="Start recovering state" Apr 16 00:48:50.260884 containerd[1500]: time="2026-04-16T00:48:50.260491896Z" level=info msg="Start event monitor" Apr 16 00:48:50.260884 containerd[1500]: time="2026-04-16T00:48:50.260529309Z" level=info msg="Start snapshots syncer" Apr 16 00:48:50.260884 containerd[1500]: time="2026-04-16T00:48:50.260576081Z" level=info msg="Start cni network conf syncer for default" Apr 16 00:48:50.262584 containerd[1500]: time="2026-04-16T00:48:50.262013317Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 16 00:48:50.263418 containerd[1500]: time="2026-04-16T00:48:50.262677666Z" level=info msg="Start streaming server" Apr 16 00:48:50.264068 polkitd[1562]: Loading rules from directory /etc/polkit-1/rules.d Apr 16 00:48:50.264188 polkitd[1562]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 16 00:48:50.264823 containerd[1500]: time="2026-04-16T00:48:50.264294570Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 16 00:48:50.265859 containerd[1500]: time="2026-04-16T00:48:50.264957658Z" level=info msg="containerd successfully booted in 0.124748s" Apr 16 00:48:50.265107 systemd[1]: Started containerd.service - containerd container runtime. 
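[Editor's note: the containerd error above, "no network config found in /etc/cni/net.d: cni plugin not initialized", is expected at this point in boot: the CRI plugin starts before any CNI network add-on has been installed, and the "cni network conf syncer" it launches will pick up a config once one appears. For illustration only, a network config list of the general shape containerd looks for in /etc/cni/net.d would be something like the following; every value here (name, bridge, subnet) is a hypothetical example, not taken from this host:]

```json
{
  "cniVersion": "0.4.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.85.0.0/16"
      }
    }
  ]
}
```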
Apr 16 00:48:50.269232 polkitd[1562]: Finished loading, compiling and executing 2 rules Apr 16 00:48:50.275463 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 16 00:48:50.275949 systemd[1]: Started polkit.service - Authorization Manager. Apr 16 00:48:50.276705 polkitd[1562]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 16 00:48:50.305854 systemd-hostnamed[1512]: Hostname set to (static) Apr 16 00:48:50.331838 sshd_keygen[1488]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 16 00:48:50.396680 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 16 00:48:50.405017 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 16 00:48:50.437882 systemd[1]: issuegen.service: Deactivated successfully. Apr 16 00:48:50.438237 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 16 00:48:50.452265 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 16 00:48:50.482728 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 16 00:48:50.492157 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 16 00:48:50.496152 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 16 00:48:50.499359 systemd[1]: Reached target getty.target - Login Prompts. Apr 16 00:48:50.774360 tar[1493]: linux-amd64/README.md Apr 16 00:48:50.789750 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 16 00:48:51.242226 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 00:48:51.263252 (kubelet)[1601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 00:48:51.530997 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection. 
Apr 16 00:48:51.532341 systemd-networkd[1433]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8be6:24:19ff:fee6:2f9a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8be6:24:19ff:fee6:2f9a/64 assigned by NDisc. Apr 16 00:48:51.532347 systemd-networkd[1433]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Apr 16 00:48:51.799749 kubelet[1601]: E0416 00:48:51.799578 1601 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 00:48:51.803202 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 00:48:51.803678 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 00:48:51.804671 systemd[1]: kubelet.service: Consumed 1.003s CPU time. Apr 16 00:48:52.708513 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection. Apr 16 00:48:53.403808 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 16 00:48:53.412237 systemd[1]: Started sshd@0-10.230.47.154:22-20.229.252.112:52708.service - OpenSSH per-connection server daemon (20.229.252.112:52708). Apr 16 00:48:53.559238 sshd[1612]: Accepted publickey for core from 20.229.252.112 port 52708 ssh2: RSA SHA256:3N/pu9osWgWh2yi+ae9FF0gog3nLKKRqJHJq7GNPHLE Apr 16 00:48:53.562407 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:48:53.579141 systemd-logind[1485]: New session 1 of user core. Apr 16 00:48:53.582020 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 16 00:48:53.590984 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
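[Editor's note: the kubelet exit above is the normal pre-join state: /var/lib/kubelet/config.yaml is only written by `kubeadm init`/`kubeadm join`, so until then the unit fails and systemd restarts it on a timer. The gap between this failure (00:48:51.803) and the "Scheduled restart job" entry later in the log (00:49:02.053) works out to about 10 s, consistent with the kubeadm drop-in's usual `RestartSec=10` (an inference from the timestamps, not stated in the log). Checking the interval:]

```python
from datetime import datetime

# Timestamps copied from the log: kubelet failure and its scheduled restart.
fmt = "%H:%M:%S.%f"
failed = datetime.strptime("00:48:51.803678", fmt)     # kubelet.service: Failed
restarted = datetime.strptime("00:49:02.053967", fmt)  # Scheduled restart job

delta = (restarted - failed).total_seconds()
print(f"restart delay: {delta:.2f} s")
# → restart delay: 10.25 s
```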
Apr 16 00:48:53.626285 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 16 00:48:53.636046 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 16 00:48:53.659698 (systemd)[1616]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 16 00:48:53.803253 systemd[1616]: Queued start job for default target default.target. Apr 16 00:48:53.816319 systemd[1616]: Created slice app.slice - User Application Slice. Apr 16 00:48:53.816375 systemd[1616]: Reached target paths.target - Paths. Apr 16 00:48:53.816397 systemd[1616]: Reached target timers.target - Timers. Apr 16 00:48:53.819161 systemd[1616]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 16 00:48:53.835781 systemd[1616]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 16 00:48:53.835964 systemd[1616]: Reached target sockets.target - Sockets. Apr 16 00:48:53.835988 systemd[1616]: Reached target basic.target - Basic System. Apr 16 00:48:53.836061 systemd[1616]: Reached target default.target - Main User Target. Apr 16 00:48:53.836130 systemd[1616]: Startup finished in 166ms. Apr 16 00:48:53.836497 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 16 00:48:53.854889 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 16 00:48:53.984011 systemd[1]: Started sshd@1-10.230.47.154:22-20.229.252.112:52712.service - OpenSSH per-connection server daemon (20.229.252.112:52712). Apr 16 00:48:54.110999 sshd[1628]: Accepted publickey for core from 20.229.252.112 port 52712 ssh2: RSA SHA256:3N/pu9osWgWh2yi+ae9FF0gog3nLKKRqJHJq7GNPHLE Apr 16 00:48:54.113728 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:48:54.119923 systemd-logind[1485]: New session 2 of user core. Apr 16 00:48:54.127843 systemd[1]: Started session-2.scope - Session 2 of User core. 
Apr 16 00:48:54.239261 sshd[1628]: pam_unix(sshd:session): session closed for user core Apr 16 00:48:54.243949 systemd[1]: sshd@1-10.230.47.154:22-20.229.252.112:52712.service: Deactivated successfully. Apr 16 00:48:54.246022 systemd[1]: session-2.scope: Deactivated successfully. Apr 16 00:48:54.247087 systemd-logind[1485]: Session 2 logged out. Waiting for processes to exit. Apr 16 00:48:54.248577 systemd-logind[1485]: Removed session 2. Apr 16 00:48:54.261776 systemd[1]: Started sshd@2-10.230.47.154:22-20.229.252.112:40148.service - OpenSSH per-connection server daemon (20.229.252.112:40148). Apr 16 00:48:54.393620 sshd[1635]: Accepted publickey for core from 20.229.252.112 port 40148 ssh2: RSA SHA256:3N/pu9osWgWh2yi+ae9FF0gog3nLKKRqJHJq7GNPHLE Apr 16 00:48:54.394449 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:48:54.401272 systemd-logind[1485]: New session 3 of user core. Apr 16 00:48:54.415902 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 16 00:48:54.521461 sshd[1635]: pam_unix(sshd:session): session closed for user core Apr 16 00:48:54.526705 systemd-logind[1485]: Session 3 logged out. Waiting for processes to exit. Apr 16 00:48:54.527661 systemd[1]: sshd@2-10.230.47.154:22-20.229.252.112:40148.service: Deactivated successfully. Apr 16 00:48:54.530253 systemd[1]: session-3.scope: Deactivated successfully. Apr 16 00:48:54.531857 systemd-logind[1485]: Removed session 3. Apr 16 00:48:55.697379 login[1589]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 16 00:48:55.699596 login[1590]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 16 00:48:55.707606 systemd-logind[1485]: New session 5 of user core. Apr 16 00:48:55.713911 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 16 00:48:55.717832 systemd-logind[1485]: New session 4 of user core. 
Apr 16 00:48:55.721733 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 16 00:48:56.548080 coreos-metadata[1475]: Apr 16 00:48:56.547 WARN failed to locate config-drive, using the metadata service API instead Apr 16 00:48:56.573265 coreos-metadata[1475]: Apr 16 00:48:56.573 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Apr 16 00:48:56.580681 coreos-metadata[1475]: Apr 16 00:48:56.580 INFO Fetch failed with 404: resource not found Apr 16 00:48:56.580681 coreos-metadata[1475]: Apr 16 00:48:56.580 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Apr 16 00:48:56.581508 coreos-metadata[1475]: Apr 16 00:48:56.581 INFO Fetch successful Apr 16 00:48:56.581835 coreos-metadata[1475]: Apr 16 00:48:56.581 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Apr 16 00:48:56.593916 coreos-metadata[1475]: Apr 16 00:48:56.593 INFO Fetch successful Apr 16 00:48:56.594082 coreos-metadata[1475]: Apr 16 00:48:56.594 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Apr 16 00:48:56.610628 coreos-metadata[1475]: Apr 16 00:48:56.610 INFO Fetch successful Apr 16 00:48:56.610828 coreos-metadata[1475]: Apr 16 00:48:56.610 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Apr 16 00:48:56.625693 coreos-metadata[1475]: Apr 16 00:48:56.625 INFO Fetch successful Apr 16 00:48:56.625823 coreos-metadata[1475]: Apr 16 00:48:56.625 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Apr 16 00:48:56.643405 coreos-metadata[1475]: Apr 16 00:48:56.643 INFO Fetch successful Apr 16 00:48:56.667743 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 16 00:48:56.669799 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Apr 16 00:48:57.213921 coreos-metadata[1549]: Apr 16 00:48:57.213 WARN failed to locate config-drive, using the metadata service API instead Apr 16 00:48:57.235620 coreos-metadata[1549]: Apr 16 00:48:57.235 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Apr 16 00:48:57.308093 coreos-metadata[1549]: Apr 16 00:48:57.308 INFO Fetch successful Apr 16 00:48:57.308292 coreos-metadata[1549]: Apr 16 00:48:57.308 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Apr 16 00:48:57.336740 coreos-metadata[1549]: Apr 16 00:48:57.336 INFO Fetch successful Apr 16 00:48:57.339383 unknown[1549]: wrote ssh authorized keys file for user: core Apr 16 00:48:57.361787 update-ssh-keys[1676]: Updated "/home/core/.ssh/authorized_keys" Apr 16 00:48:57.363404 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 16 00:48:57.365415 systemd[1]: Finished sshkeys.service. Apr 16 00:48:57.368888 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 16 00:48:57.370659 systemd[1]: Startup finished in 1.399s (kernel) + 15.863s (initrd) + 11.813s (userspace) = 29.076s. Apr 16 00:49:02.053967 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 16 00:49:02.067794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 00:49:02.297884 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
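[Editor's note: the "Startup finished" line above reports 1.399s (kernel) + 15.863s (initrd) + 11.813s (userspace) = 29.076s. The three components as printed are rounded to the millisecond, which is why summing them comes out 1 ms short of the reported total; systemd computes the total from unrounded microsecond values. Verifying the arithmetic on the printed figures:]

```python
# Startup components as printed in the log (already rounded to ms).
kernel, initrd, userspace = 1.399, 15.863, 11.813
total = round(kernel + initrd + userspace, 3)
print(total)
# → 29.075  (vs the reported 29.076s total: a display-rounding artifact)
```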
Apr 16 00:49:02.315066 (kubelet)[1688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 00:49:02.384197 kubelet[1688]: E0416 00:49:02.384103 1688 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 00:49:02.388011 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 00:49:02.388250 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 00:49:04.549095 systemd[1]: Started sshd@3-10.230.47.154:22-20.229.252.112:46752.service - OpenSSH per-connection server daemon (20.229.252.112:46752). Apr 16 00:49:04.678125 sshd[1697]: Accepted publickey for core from 20.229.252.112 port 46752 ssh2: RSA SHA256:3N/pu9osWgWh2yi+ae9FF0gog3nLKKRqJHJq7GNPHLE Apr 16 00:49:04.679040 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 00:49:04.685588 systemd-logind[1485]: New session 6 of user core. Apr 16 00:49:04.695767 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 16 00:49:04.798019 sshd[1697]: pam_unix(sshd:session): session closed for user core Apr 16 00:49:04.802627 systemd[1]: sshd@3-10.230.47.154:22-20.229.252.112:46752.service: Deactivated successfully. Apr 16 00:49:04.804625 systemd[1]: session-6.scope: Deactivated successfully. Apr 16 00:49:04.806426 systemd-logind[1485]: Session 6 logged out. Waiting for processes to exit. Apr 16 00:49:04.807807 systemd-logind[1485]: Removed session 6. Apr 16 00:49:04.824213 systemd[1]: Started sshd@4-10.230.47.154:22-20.229.252.112:46762.service - OpenSSH per-connection server daemon (20.229.252.112:46762). 
Apr 16 00:49:04.964281 sshd[1704]: Accepted publickey for core from 20.229.252.112 port 46762 ssh2: RSA SHA256:3N/pu9osWgWh2yi+ae9FF0gog3nLKKRqJHJq7GNPHLE
Apr 16 00:49:04.966765 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:49:04.973094 systemd-logind[1485]: New session 7 of user core.
Apr 16 00:49:04.979770 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 16 00:49:05.077544 sshd[1704]: pam_unix(sshd:session): session closed for user core
Apr 16 00:49:05.083362 systemd[1]: sshd@4-10.230.47.154:22-20.229.252.112:46762.service: Deactivated successfully.
Apr 16 00:49:05.085876 systemd[1]: session-7.scope: Deactivated successfully.
Apr 16 00:49:05.087269 systemd-logind[1485]: Session 7 logged out. Waiting for processes to exit.
Apr 16 00:49:05.088632 systemd-logind[1485]: Removed session 7.
Apr 16 00:49:05.116232 systemd[1]: Started sshd@5-10.230.47.154:22-20.229.252.112:46772.service - OpenSSH per-connection server daemon (20.229.252.112:46772).
Apr 16 00:49:05.231370 sshd[1711]: Accepted publickey for core from 20.229.252.112 port 46772 ssh2: RSA SHA256:3N/pu9osWgWh2yi+ae9FF0gog3nLKKRqJHJq7GNPHLE
Apr 16 00:49:05.233264 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:49:05.241116 systemd-logind[1485]: New session 8 of user core.
Apr 16 00:49:05.247791 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 16 00:49:05.348178 sshd[1711]: pam_unix(sshd:session): session closed for user core
Apr 16 00:49:05.353175 systemd[1]: sshd@5-10.230.47.154:22-20.229.252.112:46772.service: Deactivated successfully.
Apr 16 00:49:05.355148 systemd[1]: session-8.scope: Deactivated successfully.
Apr 16 00:49:05.356187 systemd-logind[1485]: Session 8 logged out. Waiting for processes to exit.
Apr 16 00:49:05.357381 systemd-logind[1485]: Removed session 8.
Apr 16 00:49:05.377885 systemd[1]: Started sshd@6-10.230.47.154:22-20.229.252.112:46776.service - OpenSSH per-connection server daemon (20.229.252.112:46776).
Apr 16 00:49:05.507031 sshd[1718]: Accepted publickey for core from 20.229.252.112 port 46776 ssh2: RSA SHA256:3N/pu9osWgWh2yi+ae9FF0gog3nLKKRqJHJq7GNPHLE
Apr 16 00:49:05.509206 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:49:05.515634 systemd-logind[1485]: New session 9 of user core.
Apr 16 00:49:05.525777 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 16 00:49:05.627231 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 16 00:49:05.628222 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 00:49:05.646002 sudo[1721]: pam_unix(sudo:session): session closed for user root
Apr 16 00:49:05.663294 sshd[1718]: pam_unix(sshd:session): session closed for user core
Apr 16 00:49:05.668745 systemd[1]: sshd@6-10.230.47.154:22-20.229.252.112:46776.service: Deactivated successfully.
Apr 16 00:49:05.671149 systemd[1]: session-9.scope: Deactivated successfully.
Apr 16 00:49:05.672365 systemd-logind[1485]: Session 9 logged out. Waiting for processes to exit.
Apr 16 00:49:05.673772 systemd-logind[1485]: Removed session 9.
Apr 16 00:49:05.693818 systemd[1]: Started sshd@7-10.230.47.154:22-20.229.252.112:46782.service - OpenSSH per-connection server daemon (20.229.252.112:46782).
Apr 16 00:49:05.834952 sshd[1726]: Accepted publickey for core from 20.229.252.112 port 46782 ssh2: RSA SHA256:3N/pu9osWgWh2yi+ae9FF0gog3nLKKRqJHJq7GNPHLE
Apr 16 00:49:05.835962 sshd[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:49:05.844150 systemd-logind[1485]: New session 10 of user core.
Apr 16 00:49:05.849726 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 16 00:49:05.944196 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 16 00:49:05.944664 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 00:49:05.950346 sudo[1730]: pam_unix(sudo:session): session closed for user root
Apr 16 00:49:05.959758 sudo[1729]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 16 00:49:05.960289 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 00:49:05.985986 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 16 00:49:05.988490 auditctl[1733]: No rules
Apr 16 00:49:05.990334 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 16 00:49:05.990759 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 16 00:49:05.994574 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 16 00:49:06.047175 augenrules[1751]: No rules
Apr 16 00:49:06.048961 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 16 00:49:06.051826 sudo[1729]: pam_unix(sudo:session): session closed for user root
Apr 16 00:49:06.069488 sshd[1726]: pam_unix(sshd:session): session closed for user core
Apr 16 00:49:06.074516 systemd[1]: sshd@7-10.230.47.154:22-20.229.252.112:46782.service: Deactivated successfully.
Apr 16 00:49:06.077065 systemd[1]: session-10.scope: Deactivated successfully.
Apr 16 00:49:06.079212 systemd-logind[1485]: Session 10 logged out. Waiting for processes to exit.
Apr 16 00:49:06.081191 systemd-logind[1485]: Removed session 10.
Apr 16 00:49:06.097235 systemd[1]: Started sshd@8-10.230.47.154:22-20.229.252.112:46796.service - OpenSSH per-connection server daemon (20.229.252.112:46796).
Apr 16 00:49:06.223671 sshd[1759]: Accepted publickey for core from 20.229.252.112 port 46796 ssh2: RSA SHA256:3N/pu9osWgWh2yi+ae9FF0gog3nLKKRqJHJq7GNPHLE
Apr 16 00:49:06.225017 sshd[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:49:06.231526 systemd-logind[1485]: New session 11 of user core.
Apr 16 00:49:06.237818 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 16 00:49:06.325768 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 16 00:49:06.326246 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 16 00:49:06.763107 (dockerd)[1778]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 16 00:49:06.763113 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 16 00:49:07.182092 dockerd[1778]: time="2026-04-16T00:49:07.181809277Z" level=info msg="Starting up"
Apr 16 00:49:07.306971 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1260873747-merged.mount: Deactivated successfully.
Apr 16 00:49:07.343783 dockerd[1778]: time="2026-04-16T00:49:07.343717305Z" level=info msg="Loading containers: start."
Apr 16 00:49:07.484568 kernel: Initializing XFRM netlink socket
Apr 16 00:49:07.520505 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection.
Apr 16 00:49:07.586711 systemd-networkd[1433]: docker0: Link UP
Apr 16 00:49:07.606736 dockerd[1778]: time="2026-04-16T00:49:07.606636959Z" level=info msg="Loading containers: done."
Apr 16 00:49:07.630390 dockerd[1778]: time="2026-04-16T00:49:07.630310388Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 16 00:49:07.630653 dockerd[1778]: time="2026-04-16T00:49:07.630502809Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 16 00:49:07.630751 dockerd[1778]: time="2026-04-16T00:49:07.630717450Z" level=info msg="Daemon has completed initialization"
Apr 16 00:49:07.676846 dockerd[1778]: time="2026-04-16T00:49:07.676148299Z" level=info msg="API listen on /run/docker.sock"
Apr 16 00:49:07.676446 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 16 00:49:08.607308 systemd-timesyncd[1407]: Contacted time server [2a01:7e00::f03c:91ff:fe89:410f]:123 (2.flatcar.pool.ntp.org).
Apr 16 00:49:08.607412 systemd-timesyncd[1407]: Initial clock synchronization to Thu 2026-04-16 00:49:08.606980 UTC.
Apr 16 00:49:08.607471 systemd-resolved[1390]: Clock change detected. Flushing caches.
Apr 16 00:49:09.196589 containerd[1500]: time="2026-04-16T00:49:09.196213211Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\""
Apr 16 00:49:10.487119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2556902871.mount: Deactivated successfully.
Apr 16 00:49:13.347656 containerd[1500]: time="2026-04-16T00:49:13.347501174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:49:13.349168 containerd[1500]: time="2026-04-16T00:49:13.349125068Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.7: active requests=0, bytes read=27100522"
Apr 16 00:49:13.350554 containerd[1500]: time="2026-04-16T00:49:13.349956550Z" level=info msg="ImageCreate event name:\"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:49:13.353840 containerd[1500]: time="2026-04-16T00:49:13.353800682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:49:13.355756 containerd[1500]: time="2026-04-16T00:49:13.355715806Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.7\" with image id \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b96b8464d152a24c81d7f0435fd2198f8486970cd26a9e0e9c20826c73d1441c\", size \"27097113\" in 4.159329151s"
Apr 16 00:49:13.355846 containerd[1500]: time="2026-04-16T00:49:13.355772555Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.7\" returns image reference \"sha256:c15709457ff55a861a7259eb631c447f9bf906267615f9d8dcc820635a0bfb95\""
Apr 16 00:49:13.357619 containerd[1500]: time="2026-04-16T00:49:13.357567175Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\""
Apr 16 00:49:13.395200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 16 00:49:13.411464 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 00:49:13.577367 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 00:49:13.592470 (kubelet)[1986]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 00:49:13.665578 kubelet[1986]: E0416 00:49:13.665338 1986 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 00:49:13.669317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 00:49:13.669901 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 00:49:20.220004 containerd[1500]: time="2026-04-16T00:49:20.219905107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:49:20.221409 containerd[1500]: time="2026-04-16T00:49:20.221285327Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.7: active requests=0, bytes read=21252746"
Apr 16 00:49:20.223950 containerd[1500]: time="2026-04-16T00:49:20.222244866Z" level=info msg="ImageCreate event name:\"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:49:20.226216 containerd[1500]: time="2026-04-16T00:49:20.226181692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:49:20.227995 containerd[1500]: time="2026-04-16T00:49:20.227944836Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.7\" with image id \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7d759bdc4fef10a3fc1ad60ce9439d58e1a4df7ebb22751f7cc0201ce55f280b\", size \"22819085\" in 6.870149737s"
Apr 16 00:49:20.228077 containerd[1500]: time="2026-04-16T00:49:20.228001085Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.7\" returns image reference \"sha256:23986a24c803336f2a2dfbcaaf0547ee8bcf6638f23bec8967e210909d00a97a\""
Apr 16 00:49:20.229450 containerd[1500]: time="2026-04-16T00:49:20.229412971Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\""
Apr 16 00:49:21.787433 containerd[1500]: time="2026-04-16T00:49:21.787366102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:49:21.788848 containerd[1500]: time="2026-04-16T00:49:21.788805264Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.7: active requests=0, bytes read=15810899"
Apr 16 00:49:21.789965 containerd[1500]: time="2026-04-16T00:49:21.789637363Z" level=info msg="ImageCreate event name:\"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:49:21.794006 containerd[1500]: time="2026-04-16T00:49:21.793975223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:49:21.795624 containerd[1500]: time="2026-04-16T00:49:21.795588753Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.7\" with image id \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:4ab32f707ff84beaac431797999707757b885196b0b9a52d29cb67f95efce7c1\", size \"17377256\" in 1.566133554s"
Apr 16 00:49:21.795763 containerd[1500]: time="2026-04-16T00:49:21.795739986Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.7\" returns image reference \"sha256:568f1856b0e1c464b0b50ab2879ebd535623c1a620b1d2530ba5dd594237dc82\""
Apr 16 00:49:21.796646 containerd[1500]: time="2026-04-16T00:49:21.796424143Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\""
Apr 16 00:49:22.334055 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Apr 16 00:49:23.479716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1576205240.mount: Deactivated successfully.
Apr 16 00:49:23.913451 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 16 00:49:23.923197 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 00:49:24.112619 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 00:49:24.127674 (kubelet)[2020]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 00:49:24.166616 containerd[1500]: time="2026-04-16T00:49:24.166280248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:49:24.167751 containerd[1500]: time="2026-04-16T00:49:24.167407014Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.7: active requests=0, bytes read=25972962"
Apr 16 00:49:24.168558 containerd[1500]: time="2026-04-16T00:49:24.168517781Z" level=info msg="ImageCreate event name:\"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:49:24.174966 containerd[1500]: time="2026-04-16T00:49:24.173301271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:49:24.174966 containerd[1500]: time="2026-04-16T00:49:24.174378128Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.7\" with image id \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\", repo tag \"registry.k8s.io/kube-proxy:v1.34.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:062519bc0a14769e2f98c6bdff7816a17e6252de3f3c9cb102e6be33fe38d9e2\", size \"25971973\" in 2.377916961s"
Apr 16 00:49:24.174966 containerd[1500]: time="2026-04-16T00:49:24.174414050Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.7\" returns image reference \"sha256:345c2b8919907fbb425a843da24d86a16708ee53a49ad3fa2e6dc229c7b34643\""
Apr 16 00:49:24.175666 containerd[1500]: time="2026-04-16T00:49:24.175640877Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Apr 16 00:49:24.184507 kubelet[2020]: E0416 00:49:24.184441 2020 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 00:49:24.186440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 00:49:24.186652 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 00:49:24.825066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3243103999.mount: Deactivated successfully.
Apr 16 00:49:28.156964 containerd[1500]: time="2026-04-16T00:49:28.156623226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:49:28.158350 containerd[1500]: time="2026-04-16T00:49:28.158292528Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388015"
Apr 16 00:49:28.159465 containerd[1500]: time="2026-04-16T00:49:28.159432947Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:49:28.164129 containerd[1500]: time="2026-04-16T00:49:28.163367517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:49:28.165173 containerd[1500]: time="2026-04-16T00:49:28.165127423Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 3.989360416s"
Apr 16 00:49:28.165241 containerd[1500]: time="2026-04-16T00:49:28.165178434Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Apr 16 00:49:28.166440 containerd[1500]: time="2026-04-16T00:49:28.166410793Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 16 00:49:28.747879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount721418130.mount: Deactivated successfully.
Apr 16 00:49:28.754724 containerd[1500]: time="2026-04-16T00:49:28.754607934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:49:28.756614 containerd[1500]: time="2026-04-16T00:49:28.756536871Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321226"
Apr 16 00:49:28.757390 containerd[1500]: time="2026-04-16T00:49:28.757327352Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:49:28.760438 containerd[1500]: time="2026-04-16T00:49:28.760381526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:49:28.762806 containerd[1500]: time="2026-04-16T00:49:28.761686159Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 595.150914ms"
Apr 16 00:49:28.762806 containerd[1500]: time="2026-04-16T00:49:28.761727900Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 16 00:49:28.762806 containerd[1500]: time="2026-04-16T00:49:28.762796362Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Apr 16 00:49:29.358472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1538054777.mount: Deactivated successfully.
Apr 16 00:49:30.668843 containerd[1500]: time="2026-04-16T00:49:30.668754824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:49:30.670900 containerd[1500]: time="2026-04-16T00:49:30.670842149Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874825"
Apr 16 00:49:30.672242 containerd[1500]: time="2026-04-16T00:49:30.672141613Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:49:30.679168 containerd[1500]: time="2026-04-16T00:49:30.679092895Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.916263717s"
Apr 16 00:49:30.680360 containerd[1500]: time="2026-04-16T00:49:30.679273509Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Apr 16 00:49:30.680360 containerd[1500]: time="2026-04-16T00:49:30.680180400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:49:34.413700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Apr 16 00:49:34.422182 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 00:49:34.621186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 00:49:34.632380 (kubelet)[2174]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 00:49:34.685046 kubelet[2174]: E0416 00:49:34.682210 2174 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 00:49:34.685321 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 00:49:34.685686 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 00:49:34.938567 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 00:49:34.945266 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 00:49:34.980028 systemd[1]: Reloading requested from client PID 2188 ('systemctl') (unit session-11.scope)...
Apr 16 00:49:34.980073 systemd[1]: Reloading...
Apr 16 00:49:35.131965 zram_generator::config[2223]: No configuration found.
Apr 16 00:49:35.343767 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 16 00:49:35.450215 systemd[1]: Reloading finished in 469 ms.
Apr 16 00:49:35.520383 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 16 00:49:35.520515 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 16 00:49:35.521089 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 00:49:35.528328 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 00:49:35.827492 (kubelet)[2293]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 16 00:49:35.829129 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 00:49:36.064365 update_engine[1489]: I20260416 00:49:36.063107 1489 update_attempter.cc:509] Updating boot flags...
Apr 16 00:49:36.112064 kubelet[2293]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 16 00:49:36.112064 kubelet[2293]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 00:49:36.112064 kubelet[2293]: I0416 00:49:36.110721 2293 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 16 00:49:36.156218 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2306)
Apr 16 00:49:36.256621 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2309)
Apr 16 00:49:36.354020 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2309)
Apr 16 00:49:37.133862 kubelet[2293]: I0416 00:49:37.133818 2293 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 16 00:49:37.134633 kubelet[2293]: I0416 00:49:37.134475 2293 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 16 00:49:37.136959 kubelet[2293]: I0416 00:49:37.136216 2293 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 16 00:49:37.136959 kubelet[2293]: I0416 00:49:37.136250 2293 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 16 00:49:37.136959 kubelet[2293]: I0416 00:49:37.136561 2293 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 16 00:49:37.150375 kubelet[2293]: I0416 00:49:37.150260 2293 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 16 00:49:37.154438 kubelet[2293]: E0416 00:49:37.154378 2293 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.47.154:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.47.154:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 16 00:49:37.157095 kubelet[2293]: E0416 00:49:37.156940 2293 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 16 00:49:37.157211 kubelet[2293]: I0416 00:49:37.157141 2293 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 16 00:49:37.162675 kubelet[2293]: I0416 00:49:37.162627 2293 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 16 00:49:37.163743 kubelet[2293]: I0416 00:49:37.163688 2293 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 16 00:49:37.164088 kubelet[2293]: I0416 00:49:37.163743 2293 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-57yav.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 16 00:49:37.164334 kubelet[2293]: I0416 00:49:37.164104 2293 topology_manager.go:138] "Creating topology manager with none policy"
Apr 16 00:49:37.164334 kubelet[2293]: I0416 00:49:37.164121 2293 container_manager_linux.go:306] "Creating device plugin manager"
Apr 16 00:49:37.164447 kubelet[2293]: I0416 00:49:37.164306 2293 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 16 00:49:37.166050 kubelet[2293]: I0416 00:49:37.166017 2293 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 00:49:37.166334 kubelet[2293]: I0416 00:49:37.166315 2293 kubelet.go:475] "Attempting to sync node with API server"
Apr 16 00:49:37.166334 kubelet[2293]: I0416 00:49:37.166354 2293 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 16 00:49:37.166453 kubelet[2293]: I0416 00:49:37.166403 2293 kubelet.go:387] "Adding apiserver pod source"
Apr 16 00:49:37.166453 kubelet[2293]: I0416 00:49:37.166435 2293 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 16 00:49:37.171354 kubelet[2293]: I0416 00:49:37.170728 2293 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 16 00:49:37.171545 kubelet[2293]: I0416 00:49:37.171509 2293 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 16 00:49:37.171600 kubelet[2293]: I0416 00:49:37.171561 2293 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 16 00:49:37.171682 kubelet[2293]: W0416 00:49:37.171663 2293 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 16 00:49:37.175252 kubelet[2293]: E0416 00:49:37.175215 2293 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.47.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-57yav.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.47.154:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 16 00:49:37.175517 kubelet[2293]: E0416 00:49:37.175487 2293 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.47.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.47.154:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 16 00:49:37.176221 kubelet[2293]: I0416 00:49:37.176196 2293 server.go:1262] "Started kubelet"
Apr 16 00:49:37.181371 kubelet[2293]: E0416 00:49:37.180200 2293 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.47.154:6443/api/v1/namespaces/default/events\": dial tcp 10.230.47.154:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-57yav.gb1.brightbox.com.18a6aff6b4e7d90c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-57yav.gb1.brightbox.com,UID:srv-57yav.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-57yav.gb1.brightbox.com,},FirstTimestamp:2026-04-16 00:49:37.176148236 +0000 UTC m=+1.340834712,LastTimestamp:2026-04-16 00:49:37.176148236 +0000 UTC m=+1.340834712,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-57yav.gb1.brightbox.com,}"
Apr 16 00:49:37.181685 kubelet[2293]: I0416 00:49:37.181647 2293 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 16 00:49:37.181871 kubelet[2293]: I0416 00:49:37.181847 2293 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 16 00:49:37.183267 kubelet[2293]: I0416 00:49:37.183245 2293 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 16 00:49:37.183425 kubelet[2293]: I0416 00:49:37.183311 2293 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 16 00:49:37.186885 kubelet[2293]: I0416 00:49:37.186852 2293 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 16 00:49:37.190870 kubelet[2293]: E0416 00:49:37.190830 2293 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"srv-57yav.gb1.brightbox.com\" not found"
Apr 16 00:49:37.193016 kubelet[2293]: I0416 00:49:37.190975 2293 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 16 00:49:37.197290 kubelet[2293]: E0416 00:49:37.197256 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.47.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-57yav.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.47.154:6443: connect: connection refused" interval="200ms"
Apr 16 00:49:37.197407 kubelet[2293]: I0416 00:49:37.192683 2293 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 16 00:49:37.198255 kubelet[2293]: I0416 00:49:37.198236 2293 factory.go:223] Registration of the systemd container factory successfully
Apr 16 00:49:37.198528 kubelet[2293]: I0416 00:49:37.198490 2293 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 16 00:49:37.198985 kubelet[2293]: I0416 00:49:37.192662 2293 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 16 00:49:37.199720 kubelet[2293]: I0416 00:49:37.198349 2293 server.go:310] "Adding debug handlers to kubelet server"
Apr 16 00:49:37.200203 kubelet[2293]: E0416 00:49:37.200153 2293 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.47.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.47.154:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 16 00:49:37.203458 kubelet[2293]: I0416 00:49:37.199786 2293 reconciler.go:29] "Reconciler: start to sync state"
Apr 16 00:49:37.204352 kubelet[2293]: E0416 00:49:37.204318 2293 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 16 00:49:37.206557 kubelet[2293]: I0416 00:49:37.206531 2293 factory.go:223] Registration of the containerd container factory successfully
Apr 16 00:49:37.227873 kubelet[2293]: I0416 00:49:37.227663 2293 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 16 00:49:37.228726 kubelet[2293]: I0416 00:49:37.228396 2293 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 16 00:49:37.228726 kubelet[2293]: I0416 00:49:37.228416 2293 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 16 00:49:37.228726 kubelet[2293]: I0416 00:49:37.228451 2293 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 00:49:37.230563 kubelet[2293]: I0416 00:49:37.230542 2293 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 16 00:49:37.230680 kubelet[2293]: I0416 00:49:37.230663 2293 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 16 00:49:37.233089 kubelet[2293]: I0416 00:49:37.232491 2293 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 16 00:49:37.233089 kubelet[2293]: E0416 00:49:37.232593 2293 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 16 00:49:37.233089 kubelet[2293]: I0416 00:49:37.230845 2293 policy_none.go:49] "None policy: Start"
Apr 16 00:49:37.233089 kubelet[2293]: I0416 00:49:37.232734 2293 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 16 00:49:37.233089 kubelet[2293]: I0416 00:49:37.232767 2293 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 16 00:49:37.234480 kubelet[2293]: E0416 00:49:37.234419 2293 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.47.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.47.154:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 16 00:49:37.234699 kubelet[2293]: I0416 00:49:37.234682 2293 policy_none.go:47] "Start"
Apr 16 00:49:37.242305 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 16 00:49:37.253403 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 16 00:49:37.257901 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 16 00:49:37.271636 kubelet[2293]: E0416 00:49:37.271318 2293 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 16 00:49:37.271796 kubelet[2293]: I0416 00:49:37.271647 2293 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 16 00:49:37.271796 kubelet[2293]: I0416 00:49:37.271670 2293 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 16 00:49:37.272687 kubelet[2293]: I0416 00:49:37.272155 2293 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 16 00:49:37.275660 kubelet[2293]: E0416 00:49:37.275501 2293 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 16 00:49:37.275660 kubelet[2293]: E0416 00:49:37.275572 2293 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-57yav.gb1.brightbox.com\" not found"
Apr 16 00:49:37.350880 systemd[1]: Created slice kubepods-burstable-podfad9cbf33c68bf88f2e084379e9327e3.slice - libcontainer container kubepods-burstable-podfad9cbf33c68bf88f2e084379e9327e3.slice.
Apr 16 00:49:37.374768 kubelet[2293]: I0416 00:49:37.374705 2293 kubelet_node_status.go:75] "Attempting to register node" node="srv-57yav.gb1.brightbox.com"
Apr 16 00:49:37.375218 kubelet[2293]: E0416 00:49:37.375187 2293 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.47.154:6443/api/v1/nodes\": dial tcp 10.230.47.154:6443: connect: connection refused" node="srv-57yav.gb1.brightbox.com"
Apr 16 00:49:37.377138 kubelet[2293]: E0416 00:49:37.377100 2293 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-57yav.gb1.brightbox.com\" not found" node="srv-57yav.gb1.brightbox.com"
Apr 16 00:49:37.380714 systemd[1]: Created slice kubepods-burstable-podf89b68dc5e1bf4aecabde7fa86e29410.slice - libcontainer container kubepods-burstable-podf89b68dc5e1bf4aecabde7fa86e29410.slice.
Apr 16 00:49:37.394616 kubelet[2293]: E0416 00:49:37.394490 2293 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-57yav.gb1.brightbox.com\" not found" node="srv-57yav.gb1.brightbox.com"
Apr 16 00:49:37.398389 kubelet[2293]: E0416 00:49:37.398021 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.47.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-57yav.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.47.154:6443: connect: connection refused" interval="400ms"
Apr 16 00:49:37.401311 systemd[1]: Created slice kubepods-burstable-podc5ac22e30fa74c9cffa815257de7d042.slice - libcontainer container kubepods-burstable-podc5ac22e30fa74c9cffa815257de7d042.slice.
Apr 16 00:49:37.403757 kubelet[2293]: E0416 00:49:37.403534 2293 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-57yav.gb1.brightbox.com\" not found" node="srv-57yav.gb1.brightbox.com"
Apr 16 00:49:37.405358 kubelet[2293]: I0416 00:49:37.405101 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fad9cbf33c68bf88f2e084379e9327e3-flexvolume-dir\") pod \"kube-controller-manager-srv-57yav.gb1.brightbox.com\" (UID: \"fad9cbf33c68bf88f2e084379e9327e3\") " pod="kube-system/kube-controller-manager-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:37.405358 kubelet[2293]: I0416 00:49:37.405144 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fad9cbf33c68bf88f2e084379e9327e3-k8s-certs\") pod \"kube-controller-manager-srv-57yav.gb1.brightbox.com\" (UID: \"fad9cbf33c68bf88f2e084379e9327e3\") " pod="kube-system/kube-controller-manager-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:37.405358 kubelet[2293]: I0416 00:49:37.405171 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fad9cbf33c68bf88f2e084379e9327e3-kubeconfig\") pod \"kube-controller-manager-srv-57yav.gb1.brightbox.com\" (UID: \"fad9cbf33c68bf88f2e084379e9327e3\") " pod="kube-system/kube-controller-manager-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:37.405358 kubelet[2293]: I0416 00:49:37.405196 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f89b68dc5e1bf4aecabde7fa86e29410-kubeconfig\") pod \"kube-scheduler-srv-57yav.gb1.brightbox.com\" (UID: \"f89b68dc5e1bf4aecabde7fa86e29410\") " pod="kube-system/kube-scheduler-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:37.405358 kubelet[2293]: I0416 00:49:37.405219 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c5ac22e30fa74c9cffa815257de7d042-ca-certs\") pod \"kube-apiserver-srv-57yav.gb1.brightbox.com\" (UID: \"c5ac22e30fa74c9cffa815257de7d042\") " pod="kube-system/kube-apiserver-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:37.405627 kubelet[2293]: I0416 00:49:37.405243 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c5ac22e30fa74c9cffa815257de7d042-k8s-certs\") pod \"kube-apiserver-srv-57yav.gb1.brightbox.com\" (UID: \"c5ac22e30fa74c9cffa815257de7d042\") " pod="kube-system/kube-apiserver-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:37.405627 kubelet[2293]: I0416 00:49:37.405269 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fad9cbf33c68bf88f2e084379e9327e3-ca-certs\") pod \"kube-controller-manager-srv-57yav.gb1.brightbox.com\" (UID: \"fad9cbf33c68bf88f2e084379e9327e3\") " pod="kube-system/kube-controller-manager-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:37.405627 kubelet[2293]: I0416 00:49:37.405321 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fad9cbf33c68bf88f2e084379e9327e3-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-57yav.gb1.brightbox.com\" (UID: \"fad9cbf33c68bf88f2e084379e9327e3\") " pod="kube-system/kube-controller-manager-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:37.405627 kubelet[2293]: I0416 00:49:37.405362 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5ac22e30fa74c9cffa815257de7d042-usr-share-ca-certificates\") pod \"kube-apiserver-srv-57yav.gb1.brightbox.com\" (UID: \"c5ac22e30fa74c9cffa815257de7d042\") " pod="kube-system/kube-apiserver-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:37.578963 kubelet[2293]: I0416 00:49:37.578888 2293 kubelet_node_status.go:75] "Attempting to register node" node="srv-57yav.gb1.brightbox.com"
Apr 16 00:49:37.579495 kubelet[2293]: E0416 00:49:37.579452 2293 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.47.154:6443/api/v1/nodes\": dial tcp 10.230.47.154:6443: connect: connection refused" node="srv-57yav.gb1.brightbox.com"
Apr 16 00:49:37.681906 containerd[1500]: time="2026-04-16T00:49:37.681807268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-57yav.gb1.brightbox.com,Uid:fad9cbf33c68bf88f2e084379e9327e3,Namespace:kube-system,Attempt:0,}"
Apr 16 00:49:37.699743 containerd[1500]: time="2026-04-16T00:49:37.699381911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-57yav.gb1.brightbox.com,Uid:f89b68dc5e1bf4aecabde7fa86e29410,Namespace:kube-system,Attempt:0,}"
Apr 16 00:49:37.706212 containerd[1500]: time="2026-04-16T00:49:37.706164934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-57yav.gb1.brightbox.com,Uid:c5ac22e30fa74c9cffa815257de7d042,Namespace:kube-system,Attempt:0,}"
Apr 16 00:49:37.799714 kubelet[2293]: E0416 00:49:37.799659 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.47.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-57yav.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.47.154:6443: connect: connection refused" interval="800ms"
Apr 16 00:49:37.984179 kubelet[2293]: I0416 00:49:37.983732 2293 kubelet_node_status.go:75] "Attempting to register node" node="srv-57yav.gb1.brightbox.com"
Apr 16 00:49:37.984572 kubelet[2293]: E0416 00:49:37.984340 2293 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.47.154:6443/api/v1/nodes\": dial tcp 10.230.47.154:6443: connect: connection refused" node="srv-57yav.gb1.brightbox.com"
Apr 16 00:49:38.177184 kubelet[2293]: E0416 00:49:38.177125 2293 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.47.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-57yav.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.47.154:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 16 00:49:38.219761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2889959077.mount: Deactivated successfully.
Apr 16 00:49:38.226997 containerd[1500]: time="2026-04-16T00:49:38.226276189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 16 00:49:38.231543 containerd[1500]: time="2026-04-16T00:49:38.231441260Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Apr 16 00:49:38.232265 containerd[1500]: time="2026-04-16T00:49:38.232219211Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 16 00:49:38.233440 containerd[1500]: time="2026-04-16T00:49:38.233403682Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 16 00:49:38.235029 containerd[1500]: time="2026-04-16T00:49:38.234860813Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 16 00:49:38.235211 containerd[1500]: time="2026-04-16T00:49:38.235173390Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 16 00:49:38.236130 containerd[1500]: time="2026-04-16T00:49:38.236029349Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 16 00:49:38.240978 containerd[1500]: time="2026-04-16T00:49:38.240828323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 16 00:49:38.242968 containerd[1500]: time="2026-04-16T00:49:38.242897399Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 560.8673ms"
Apr 16 00:49:38.244855 containerd[1500]: time="2026-04-16T00:49:38.244627731Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 538.382412ms"
Apr 16 00:49:38.248754 containerd[1500]: time="2026-04-16T00:49:38.248722024Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 549.24615ms"
Apr 16 00:49:38.429249 containerd[1500]: time="2026-04-16T00:49:38.429111125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 16 00:49:38.431018 containerd[1500]: time="2026-04-16T00:49:38.429226151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 16 00:49:38.431018 containerd[1500]: time="2026-04-16T00:49:38.429547567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 00:49:38.431018 containerd[1500]: time="2026-04-16T00:49:38.429800468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 00:49:38.435686 containerd[1500]: time="2026-04-16T00:49:38.435439937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 16 00:49:38.435686 containerd[1500]: time="2026-04-16T00:49:38.435494386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 16 00:49:38.435686 containerd[1500]: time="2026-04-16T00:49:38.435510279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 00:49:38.435686 containerd[1500]: time="2026-04-16T00:49:38.435595693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 00:49:38.437028 containerd[1500]: time="2026-04-16T00:49:38.436924816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 16 00:49:38.437122 containerd[1500]: time="2026-04-16T00:49:38.437069372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 16 00:49:38.437540 containerd[1500]: time="2026-04-16T00:49:38.437175699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 00:49:38.437540 containerd[1500]: time="2026-04-16T00:49:38.437415507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 00:49:38.479704 systemd[1]: Started cri-containerd-9acb10cef35016a6c81fd8b9c2333296ce67dd054f954fab91b7d4ab62b6cffa.scope - libcontainer container 9acb10cef35016a6c81fd8b9c2333296ce67dd054f954fab91b7d4ab62b6cffa.
Apr 16 00:49:38.488879 kubelet[2293]: E0416 00:49:38.488749 2293 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.47.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.47.154:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 16 00:49:38.491157 systemd[1]: Started cri-containerd-7f56d6db086b9c87a0d6b72e8dcbdd29cb873ad7bfef7ec8b897aa4086146bae.scope - libcontainer container 7f56d6db086b9c87a0d6b72e8dcbdd29cb873ad7bfef7ec8b897aa4086146bae.
Apr 16 00:49:38.494900 systemd[1]: Started cri-containerd-c362e3d2c129002e6dcfb388db1c662dcb443ba9e5a7a7bbc4950dc2548f1d95.scope - libcontainer container c362e3d2c129002e6dcfb388db1c662dcb443ba9e5a7a7bbc4950dc2548f1d95.
Apr 16 00:49:38.584747 containerd[1500]: time="2026-04-16T00:49:38.584691731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-57yav.gb1.brightbox.com,Uid:f89b68dc5e1bf4aecabde7fa86e29410,Namespace:kube-system,Attempt:0,} returns sandbox id \"9acb10cef35016a6c81fd8b9c2333296ce67dd054f954fab91b7d4ab62b6cffa\""
Apr 16 00:49:38.600655 kubelet[2293]: E0416 00:49:38.600516 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.47.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-57yav.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.47.154:6443: connect: connection refused" interval="1.6s"
Apr 16 00:49:38.601664 containerd[1500]: time="2026-04-16T00:49:38.601319903Z" level=info msg="CreateContainer within sandbox \"9acb10cef35016a6c81fd8b9c2333296ce67dd054f954fab91b7d4ab62b6cffa\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 16 00:49:38.629952 containerd[1500]: time="2026-04-16T00:49:38.629866756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-57yav.gb1.brightbox.com,Uid:fad9cbf33c68bf88f2e084379e9327e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f56d6db086b9c87a0d6b72e8dcbdd29cb873ad7bfef7ec8b897aa4086146bae\""
Apr 16 00:49:38.632986 containerd[1500]: time="2026-04-16T00:49:38.631067133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-57yav.gb1.brightbox.com,Uid:c5ac22e30fa74c9cffa815257de7d042,Namespace:kube-system,Attempt:0,} returns sandbox id \"c362e3d2c129002e6dcfb388db1c662dcb443ba9e5a7a7bbc4950dc2548f1d95\""
Apr 16 00:49:38.637352 containerd[1500]: time="2026-04-16T00:49:38.637318965Z" level=info msg="CreateContainer within sandbox \"9acb10cef35016a6c81fd8b9c2333296ce67dd054f954fab91b7d4ab62b6cffa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e270ca51f8f9f6ecce39476be6a8bf490da700e22fe17b088d9d7c9a05ef8e23\""
Apr 16 00:49:38.638023 containerd[1500]: time="2026-04-16T00:49:38.637989263Z" level=info msg="CreateContainer within sandbox \"c362e3d2c129002e6dcfb388db1c662dcb443ba9e5a7a7bbc4950dc2548f1d95\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 16 00:49:38.638319 containerd[1500]: time="2026-04-16T00:49:38.638291157Z" level=info msg="StartContainer for \"e270ca51f8f9f6ecce39476be6a8bf490da700e22fe17b088d9d7c9a05ef8e23\""
Apr 16 00:49:38.642118 containerd[1500]: time="2026-04-16T00:49:38.642075989Z" level=info msg="CreateContainer within sandbox \"7f56d6db086b9c87a0d6b72e8dcbdd29cb873ad7bfef7ec8b897aa4086146bae\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 16 00:49:38.675840 kubelet[2293]: E0416 00:49:38.675182 2293 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.47.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.47.154:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 16 00:49:38.678712 containerd[1500]: time="2026-04-16T00:49:38.678670120Z" level=info msg="CreateContainer within sandbox \"c362e3d2c129002e6dcfb388db1c662dcb443ba9e5a7a7bbc4950dc2548f1d95\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5e5fccfc71bd73820521108f2fa49d47489e9b77d864bf568bba0495514c201d\""
Apr 16 00:49:38.680118 systemd[1]: Started cri-containerd-e270ca51f8f9f6ecce39476be6a8bf490da700e22fe17b088d9d7c9a05ef8e23.scope - libcontainer container e270ca51f8f9f6ecce39476be6a8bf490da700e22fe17b088d9d7c9a05ef8e23.
Apr 16 00:49:38.680686 containerd[1500]: time="2026-04-16T00:49:38.680490247Z" level=info msg="StartContainer for \"5e5fccfc71bd73820521108f2fa49d47489e9b77d864bf568bba0495514c201d\""
Apr 16 00:49:38.692136 kubelet[2293]: E0416 00:49:38.692085 2293 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.47.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.47.154:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 16 00:49:38.743146 systemd[1]: Started cri-containerd-5e5fccfc71bd73820521108f2fa49d47489e9b77d864bf568bba0495514c201d.scope - libcontainer container 5e5fccfc71bd73820521108f2fa49d47489e9b77d864bf568bba0495514c201d.
Apr 16 00:49:38.758223 containerd[1500]: time="2026-04-16T00:49:38.758174214Z" level=info msg="CreateContainer within sandbox \"7f56d6db086b9c87a0d6b72e8dcbdd29cb873ad7bfef7ec8b897aa4086146bae\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"48037674c83497940a43298ce48c397d8fdc6efbea870bb4abba713bb94c3341\""
Apr 16 00:49:38.759651 containerd[1500]: time="2026-04-16T00:49:38.759613530Z" level=info msg="StartContainer for \"48037674c83497940a43298ce48c397d8fdc6efbea870bb4abba713bb94c3341\""
Apr 16 00:49:38.770863 containerd[1500]: time="2026-04-16T00:49:38.770821491Z" level=info msg="StartContainer for \"e270ca51f8f9f6ecce39476be6a8bf490da700e22fe17b088d9d7c9a05ef8e23\" returns successfully"
Apr 16 00:49:38.788607 kubelet[2293]: I0416 00:49:38.788555 2293 kubelet_node_status.go:75] "Attempting to register node" node="srv-57yav.gb1.brightbox.com"
Apr 16 00:49:38.790334 kubelet[2293]: E0416 00:49:38.790084 2293 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.47.154:6443/api/v1/nodes\": dial tcp 10.230.47.154:6443: connect: connection refused" node="srv-57yav.gb1.brightbox.com"
Apr 16 00:49:38.812111 systemd[1]: Started cri-containerd-48037674c83497940a43298ce48c397d8fdc6efbea870bb4abba713bb94c3341.scope - libcontainer container 48037674c83497940a43298ce48c397d8fdc6efbea870bb4abba713bb94c3341.
Apr 16 00:49:38.846068 containerd[1500]: time="2026-04-16T00:49:38.846012654Z" level=info msg="StartContainer for \"5e5fccfc71bd73820521108f2fa49d47489e9b77d864bf568bba0495514c201d\" returns successfully"
Apr 16 00:49:38.892524 containerd[1500]: time="2026-04-16T00:49:38.892461626Z" level=info msg="StartContainer for \"48037674c83497940a43298ce48c397d8fdc6efbea870bb4abba713bb94c3341\" returns successfully"
Apr 16 00:49:39.263757 kubelet[2293]: E0416 00:49:39.259638 2293 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-57yav.gb1.brightbox.com\" not found" node="srv-57yav.gb1.brightbox.com"
Apr 16 00:49:39.263757 kubelet[2293]: E0416 00:49:39.260093 2293 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-57yav.gb1.brightbox.com\" not found" node="srv-57yav.gb1.brightbox.com"
Apr 16 00:49:39.264371 kubelet[2293]: E0416 00:49:39.264047 2293 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-57yav.gb1.brightbox.com\" not found" node="srv-57yav.gb1.brightbox.com"
Apr 16 00:49:40.262956 kubelet[2293]: E0416 00:49:40.262726 2293 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-57yav.gb1.brightbox.com\" not found" node="srv-57yav.gb1.brightbox.com"
Apr 16 00:49:40.264164 kubelet[2293]: E0416 00:49:40.264141 2293 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-57yav.gb1.brightbox.com\" not found" node="srv-57yav.gb1.brightbox.com"
Apr 16 00:49:40.395283 kubelet[2293]: I0416 00:49:40.394910 2293 kubelet_node_status.go:75] "Attempting to register node" node="srv-57yav.gb1.brightbox.com"
Apr 16 00:49:41.056331 kubelet[2293]: E0416 00:49:41.056258 2293 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-57yav.gb1.brightbox.com\" not found" node="srv-57yav.gb1.brightbox.com"
Apr 16 00:49:41.167780 kubelet[2293]: I0416 00:49:41.167245 2293 kubelet_node_status.go:78] "Successfully registered node" node="srv-57yav.gb1.brightbox.com"
Apr 16 00:49:41.173099 kubelet[2293]: I0416 00:49:41.173077 2293 apiserver.go:52] "Watching apiserver"
Apr 16 00:49:41.194348 kubelet[2293]: I0416 00:49:41.193348 2293 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:41.198227 kubelet[2293]: I0416 00:49:41.198201 2293 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 16 00:49:41.207563 kubelet[2293]: E0416 00:49:41.207229 2293 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-57yav.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:41.207563 kubelet[2293]: I0416 00:49:41.207269 2293 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:41.212969 kubelet[2293]: E0416 00:49:41.211961 2293 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-57yav.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:41.212969 kubelet[2293]: I0416 00:49:41.211991 2293 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:41.216953 kubelet[2293]: E0416 00:49:41.216910 2293 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-57yav.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:41.264486 kubelet[2293]: I0416 00:49:41.264204 2293 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:41.271712 kubelet[2293]: E0416 00:49:41.271337 2293 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-57yav.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:41.649997 kubelet[2293]: I0416 00:49:41.649919 2293 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:41.652747 kubelet[2293]: E0416 00:49:41.652611 2293 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-57yav.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:43.092116 systemd[1]: Reloading requested from client PID 2597 ('systemctl') (unit session-11.scope)...
Apr 16 00:49:43.092579 systemd[1]: Reloading...
Apr 16 00:49:43.195009 zram_generator::config[2632]: No configuration found.
Apr 16 00:49:43.394139 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 16 00:49:43.524667 systemd[1]: Reloading finished in 431 ms.
Apr 16 00:49:43.583491 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 00:49:43.601786 systemd[1]: kubelet.service: Deactivated successfully.
Apr 16 00:49:43.602156 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 00:49:43.602231 systemd[1]: kubelet.service: Consumed 1.459s CPU time, 122.5M memory peak, 0B memory swap peak.
Apr 16 00:49:43.609291 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 00:49:43.817331 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 00:49:43.829049 (kubelet)[2699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 16 00:49:43.941899 kubelet[2699]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 16 00:49:43.941899 kubelet[2699]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 00:49:43.943314 kubelet[2699]: I0416 00:49:43.941984 2699 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 16 00:49:43.952280 kubelet[2699]: I0416 00:49:43.952236 2699 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 16 00:49:43.952440 kubelet[2699]: I0416 00:49:43.952422 2699 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 16 00:49:43.957861 kubelet[2699]: I0416 00:49:43.957838 2699 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 16 00:49:43.958064 kubelet[2699]: I0416 00:49:43.958043 2699 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 16 00:49:43.958503 kubelet[2699]: I0416 00:49:43.958483 2699 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 16 00:49:43.960333 kubelet[2699]: I0416 00:49:43.960310 2699 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 16 00:49:43.965010 kubelet[2699]: I0416 00:49:43.964973 2699 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 16 00:49:43.968772 kubelet[2699]: E0416 00:49:43.968741 2699 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 16 00:49:43.969183 kubelet[2699]: I0416 00:49:43.969023 2699 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 16 00:49:43.974905 kubelet[2699]: I0416 00:49:43.974701 2699 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 16 00:49:43.978762 kubelet[2699]: I0416 00:49:43.978259 2699 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 16 00:49:43.978762 kubelet[2699]: I0416 00:49:43.978316 2699 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-57yav.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 16 00:49:43.978762 kubelet[2699]: I0416 00:49:43.978479 2699 topology_manager.go:138] "Creating topology manager with none policy"
Apr 16 00:49:43.978762 kubelet[2699]: I0416 00:49:43.978506 2699 container_manager_linux.go:306] "Creating device plugin manager"
Apr 16 00:49:43.979123 kubelet[2699]: I0416 00:49:43.978536 2699 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 16 00:49:43.979301 kubelet[2699]: I0416 00:49:43.979282 2699 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 00:49:43.979765 kubelet[2699]: I0416 00:49:43.979607 2699 kubelet.go:475] "Attempting to sync node with API server"
Apr 16 00:49:43.979765 kubelet[2699]: I0416 00:49:43.979631 2699 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 16 00:49:43.979765 kubelet[2699]: I0416 00:49:43.979675 2699 kubelet.go:387] "Adding apiserver pod source"
Apr 16 00:49:43.979765 kubelet[2699]: I0416 00:49:43.979699 2699 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 16 00:49:43.989960 kubelet[2699]: I0416 00:49:43.989349 2699 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 16 00:49:43.990200 kubelet[2699]: I0416 00:49:43.990172 2699 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 16 00:49:43.990310 kubelet[2699]: I0416 00:49:43.990215 2699 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 16 00:49:44.009717 kubelet[2699]: I0416 00:49:44.009267 2699 server.go:1262] "Started kubelet"
Apr 16 00:49:44.012346 kubelet[2699]: I0416 00:49:44.012307 2699 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 16 00:49:44.013514 kubelet[2699]: I0416 00:49:44.012495 2699 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 16 00:49:44.015969 kubelet[2699]: I0416 00:49:44.013986 2699 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 16 00:49:44.015969 kubelet[2699]: I0416 00:49:44.014168 2699 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 16 00:49:44.015969 kubelet[2699]: I0416 00:49:44.014294 2699 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 16 00:49:44.015969 kubelet[2699]: I0416 00:49:44.015404 2699 server.go:310] "Adding debug handlers to kubelet server"
Apr 16 00:49:44.021508 kubelet[2699]: I0416 00:49:44.021467 2699 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 16 00:49:44.022683 kubelet[2699]: I0416 00:49:44.022664 2699 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 16 00:49:44.023379 kubelet[2699]: I0416 00:49:44.023254 2699 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 16 00:49:44.023715 kubelet[2699]: I0416 00:49:44.023697 2699 reconciler.go:29] "Reconciler: start to sync state"
Apr 16 00:49:44.029223 kubelet[2699]: E0416 00:49:44.029093 2699 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 16 00:49:44.032143 kubelet[2699]: I0416 00:49:44.032043 2699 factory.go:223] Registration of the systemd container factory successfully
Apr 16 00:49:44.032511 kubelet[2699]: I0416 00:49:44.032474 2699 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 16 00:49:44.043180 kubelet[2699]: I0416 00:49:44.043151 2699 factory.go:223] Registration of the containerd container factory successfully
Apr 16 00:49:44.059872 kubelet[2699]: I0416 00:49:44.059807 2699 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 16 00:49:44.069606 kubelet[2699]: I0416 00:49:44.068389 2699 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 16 00:49:44.069779 kubelet[2699]: I0416 00:49:44.069760 2699 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 16 00:49:44.069965 kubelet[2699]: I0416 00:49:44.069916 2699 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 16 00:49:44.070274 kubelet[2699]: E0416 00:49:44.070229 2699 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 16 00:49:44.127500 kubelet[2699]: I0416 00:49:44.127467 2699 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 16 00:49:44.130059 kubelet[2699]: I0416 00:49:44.128963 2699 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 16 00:49:44.130059 kubelet[2699]: I0416 00:49:44.128999 2699 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 00:49:44.130059 kubelet[2699]: I0416 00:49:44.129172 2699 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 16 00:49:44.130059 kubelet[2699]: I0416 00:49:44.129189 2699 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 16 00:49:44.130059 kubelet[2699]: I0416 00:49:44.129211 2699 policy_none.go:49] "None policy: Start"
Apr 16 00:49:44.130059 kubelet[2699]: I0416 00:49:44.129224 2699 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 16 00:49:44.130059 kubelet[2699]: I0416 00:49:44.129240 2699 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 16 00:49:44.130059 kubelet[2699]: I0416 00:49:44.129413 2699 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Apr 16 00:49:44.130059 kubelet[2699]: I0416 00:49:44.129445 2699 policy_none.go:47] "Start"
Apr 16 00:49:44.140576 kubelet[2699]: E0416 00:49:44.140537 2699 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 16 00:49:44.141252 kubelet[2699]: I0416 00:49:44.140791 2699 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 16 00:49:44.141252 kubelet[2699]: I0416 00:49:44.140834 2699 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 16 00:49:44.141252 kubelet[2699]: I0416 00:49:44.141231 2699 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 16 00:49:44.149033 kubelet[2699]: E0416 00:49:44.148884 2699 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 16 00:49:44.177622 kubelet[2699]: I0416 00:49:44.177572 2699 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:44.180880 kubelet[2699]: I0416 00:49:44.180849 2699 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:44.190428 kubelet[2699]: I0416 00:49:44.177921 2699 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:44.205633 kubelet[2699]: I0416 00:49:44.205190 2699 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 16 00:49:44.210386 kubelet[2699]: I0416 00:49:44.210236 2699 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 16 00:49:44.212080 kubelet[2699]: I0416 00:49:44.211947 2699 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 16 00:49:44.224888 kubelet[2699]: I0416 00:49:44.224779 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c5ac22e30fa74c9cffa815257de7d042-usr-share-ca-certificates\") pod \"kube-apiserver-srv-57yav.gb1.brightbox.com\" (UID: \"c5ac22e30fa74c9cffa815257de7d042\") " pod="kube-system/kube-apiserver-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:44.226356 kubelet[2699]: I0416 00:49:44.226098 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f89b68dc5e1bf4aecabde7fa86e29410-kubeconfig\") pod \"kube-scheduler-srv-57yav.gb1.brightbox.com\" (UID: \"f89b68dc5e1bf4aecabde7fa86e29410\") " pod="kube-system/kube-scheduler-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:44.226356 kubelet[2699]: I0416 00:49:44.226200 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c5ac22e30fa74c9cffa815257de7d042-ca-certs\") pod \"kube-apiserver-srv-57yav.gb1.brightbox.com\" (UID: \"c5ac22e30fa74c9cffa815257de7d042\") " pod="kube-system/kube-apiserver-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:44.226356 kubelet[2699]: I0416 00:49:44.226305 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fad9cbf33c68bf88f2e084379e9327e3-ca-certs\") pod \"kube-controller-manager-srv-57yav.gb1.brightbox.com\" (UID: \"fad9cbf33c68bf88f2e084379e9327e3\") " pod="kube-system/kube-controller-manager-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:44.226841 kubelet[2699]: I0416 00:49:44.226614 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fad9cbf33c68bf88f2e084379e9327e3-flexvolume-dir\") pod \"kube-controller-manager-srv-57yav.gb1.brightbox.com\" (UID: \"fad9cbf33c68bf88f2e084379e9327e3\") " pod="kube-system/kube-controller-manager-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:44.226841 kubelet[2699]: I0416 00:49:44.226711 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fad9cbf33c68bf88f2e084379e9327e3-k8s-certs\") pod \"kube-controller-manager-srv-57yav.gb1.brightbox.com\" (UID: \"fad9cbf33c68bf88f2e084379e9327e3\") " pod="kube-system/kube-controller-manager-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:44.226841 kubelet[2699]: I0416 00:49:44.226738 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fad9cbf33c68bf88f2e084379e9327e3-kubeconfig\") pod \"kube-controller-manager-srv-57yav.gb1.brightbox.com\" (UID: \"fad9cbf33c68bf88f2e084379e9327e3\") " pod="kube-system/kube-controller-manager-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:44.228398 kubelet[2699]: I0416 00:49:44.226793 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fad9cbf33c68bf88f2e084379e9327e3-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-57yav.gb1.brightbox.com\" (UID: \"fad9cbf33c68bf88f2e084379e9327e3\") " pod="kube-system/kube-controller-manager-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:44.228398 kubelet[2699]: I0416 00:49:44.228219 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c5ac22e30fa74c9cffa815257de7d042-k8s-certs\") pod \"kube-apiserver-srv-57yav.gb1.brightbox.com\" (UID: \"c5ac22e30fa74c9cffa815257de7d042\") " pod="kube-system/kube-apiserver-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:44.265967 kubelet[2699]: I0416 00:49:44.265247 2699 kubelet_node_status.go:75] "Attempting to register node" node="srv-57yav.gb1.brightbox.com"
Apr 16 00:49:44.288040 kubelet[2699]: I0416 00:49:44.288001 2699 kubelet_node_status.go:124] "Node was previously registered" node="srv-57yav.gb1.brightbox.com"
Apr 16 00:49:44.288468 kubelet[2699]: I0416 00:49:44.288449 2699 kubelet_node_status.go:78] "Successfully registered node" node="srv-57yav.gb1.brightbox.com"
Apr 16 00:49:44.981840 kubelet[2699]: I0416 00:49:44.981752 2699 apiserver.go:52] "Watching apiserver"
Apr 16 00:49:45.024009 kubelet[2699]: I0416 00:49:45.023913 2699 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 16 00:49:45.111044 kubelet[2699]: I0416 00:49:45.110665 2699 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:45.111811 kubelet[2699]: I0416 00:49:45.111398 2699 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:45.143504 kubelet[2699]: I0416 00:49:45.143467 2699 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 16 00:49:45.144530 kubelet[2699]: E0416 00:49:45.143743 2699 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-57yav.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:45.145322 kubelet[2699]: I0416 00:49:45.144704 2699 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Apr 16 00:49:45.145322 kubelet[2699]: E0416 00:49:45.144747 2699 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-57yav.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-57yav.gb1.brightbox.com"
Apr 16 00:49:45.176587 kubelet[2699]: I0416 00:49:45.176453 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-57yav.gb1.brightbox.com" podStartSLOduration=1.176424615 podStartE2EDuration="1.176424615s" podCreationTimestamp="2026-04-16 00:49:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 00:49:45.17635987 +0000 UTC m=+1.290216139" watchObservedRunningTime="2026-04-16 00:49:45.176424615 +0000 UTC m=+1.290280888"
Apr 16 00:49:45.176895 kubelet[2699]: I0416 00:49:45.176616 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-57yav.gb1.brightbox.com" podStartSLOduration=1.17660931 podStartE2EDuration="1.17660931s" podCreationTimestamp="2026-04-16 00:49:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 00:49:45.153467945 +0000 UTC m=+1.267324214" watchObservedRunningTime="2026-04-16 00:49:45.17660931 +0000 UTC m=+1.290465568"
Apr 16 00:49:49.335901 kubelet[2699]: I0416 00:49:49.335838 2699 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 16 00:49:49.337659 containerd[1500]: time="2026-04-16T00:49:49.337505889Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 16 00:49:49.338652 kubelet[2699]: I0416 00:49:49.337780 2699 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 16 00:49:50.017072 kubelet[2699]: I0416 00:49:50.016666 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-57yav.gb1.brightbox.com" podStartSLOduration=6.016633435 podStartE2EDuration="6.016633435s" podCreationTimestamp="2026-04-16 00:49:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 00:49:45.192808247 +0000 UTC m=+1.306664535" watchObservedRunningTime="2026-04-16 00:49:50.016633435 +0000 UTC m=+6.130489694"
Apr 16 00:49:50.038055 systemd[1]: Created slice kubepods-besteffort-pod2e3fb3b9_b268_4d41_bbed_e501cfdb3551.slice - libcontainer container kubepods-besteffort-pod2e3fb3b9_b268_4d41_bbed_e501cfdb3551.slice.
Apr 16 00:49:50.066175 kubelet[2699]: I0416 00:49:50.065860 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e3fb3b9-b268-4d41-bbed-e501cfdb3551-xtables-lock\") pod \"kube-proxy-2q6bn\" (UID: \"2e3fb3b9-b268-4d41-bbed-e501cfdb3551\") " pod="kube-system/kube-proxy-2q6bn"
Apr 16 00:49:50.066175 kubelet[2699]: I0416 00:49:50.065973 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2e3fb3b9-b268-4d41-bbed-e501cfdb3551-kube-proxy\") pod \"kube-proxy-2q6bn\" (UID: \"2e3fb3b9-b268-4d41-bbed-e501cfdb3551\") " pod="kube-system/kube-proxy-2q6bn"
Apr 16 00:49:50.066175 kubelet[2699]: I0416 00:49:50.066006 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e3fb3b9-b268-4d41-bbed-e501cfdb3551-lib-modules\") pod \"kube-proxy-2q6bn\" (UID: \"2e3fb3b9-b268-4d41-bbed-e501cfdb3551\") " pod="kube-system/kube-proxy-2q6bn"
Apr 16 00:49:50.066175 kubelet[2699]: I0416 00:49:50.066032 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg76z\" (UniqueName: \"kubernetes.io/projected/2e3fb3b9-b268-4d41-bbed-e501cfdb3551-kube-api-access-mg76z\") pod \"kube-proxy-2q6bn\" (UID: \"2e3fb3b9-b268-4d41-bbed-e501cfdb3551\") " pod="kube-system/kube-proxy-2q6bn"
Apr 16 00:49:50.179786 kubelet[2699]: E0416 00:49:50.178884 2699 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Apr 16 00:49:50.179786 kubelet[2699]: E0416 00:49:50.179008 2699 projected.go:196] Error preparing data for projected volume kube-api-access-mg76z for pod kube-system/kube-proxy-2q6bn: configmap "kube-root-ca.crt" not found
Apr 16 00:49:50.179786 kubelet[2699]: E0416 00:49:50.179383 2699 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2e3fb3b9-b268-4d41-bbed-e501cfdb3551-kube-api-access-mg76z podName:2e3fb3b9-b268-4d41-bbed-e501cfdb3551 nodeName:}" failed. No retries permitted until 2026-04-16 00:49:50.679148885 +0000 UTC m=+6.793005141 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mg76z" (UniqueName: "kubernetes.io/projected/2e3fb3b9-b268-4d41-bbed-e501cfdb3551-kube-api-access-mg76z") pod "kube-proxy-2q6bn" (UID: "2e3fb3b9-b268-4d41-bbed-e501cfdb3551") : configmap "kube-root-ca.crt" not found
Apr 16 00:49:50.509242 systemd[1]: Created slice kubepods-besteffort-pod0348182b_9783_46a7_9d11_8a5d193f611a.slice - libcontainer container kubepods-besteffort-pod0348182b_9783_46a7_9d11_8a5d193f611a.slice.
Apr 16 00:49:50.569742 kubelet[2699]: I0416 00:49:50.569617 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0348182b-9783-46a7-9d11-8a5d193f611a-var-lib-calico\") pod \"tigera-operator-5588576f44-rmlz9\" (UID: \"0348182b-9783-46a7-9d11-8a5d193f611a\") " pod="tigera-operator/tigera-operator-5588576f44-rmlz9"
Apr 16 00:49:50.570382 kubelet[2699]: I0416 00:49:50.569801 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77pjq\" (UniqueName: \"kubernetes.io/projected/0348182b-9783-46a7-9d11-8a5d193f611a-kube-api-access-77pjq\") pod \"tigera-operator-5588576f44-rmlz9\" (UID: \"0348182b-9783-46a7-9d11-8a5d193f611a\") " pod="tigera-operator/tigera-operator-5588576f44-rmlz9"
Apr 16 00:49:50.831058 containerd[1500]: time="2026-04-16T00:49:50.830471871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-rmlz9,Uid:0348182b-9783-46a7-9d11-8a5d193f611a,Namespace:tigera-operator,Attempt:0,}"
Apr 16 00:49:50.862995 containerd[1500]: time="2026-04-16T00:49:50.862803507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 16 00:49:50.863320 containerd[1500]: time="2026-04-16T00:49:50.862889139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 16 00:49:50.864185 containerd[1500]: time="2026-04-16T00:49:50.863952693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 00:49:50.864185 containerd[1500]: time="2026-04-16T00:49:50.864080527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 00:49:50.900250 systemd[1]: Started cri-containerd-4e4e9d18fea8b616e6dc6932974c8ee4cbd8dbcbd494f079933d680ec65a6485.scope - libcontainer container 4e4e9d18fea8b616e6dc6932974c8ee4cbd8dbcbd494f079933d680ec65a6485.
Apr 16 00:49:50.958726 containerd[1500]: time="2026-04-16T00:49:50.958291288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2q6bn,Uid:2e3fb3b9-b268-4d41-bbed-e501cfdb3551,Namespace:kube-system,Attempt:0,}"
Apr 16 00:49:50.963675 containerd[1500]: time="2026-04-16T00:49:50.963639974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-rmlz9,Uid:0348182b-9783-46a7-9d11-8a5d193f611a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4e4e9d18fea8b616e6dc6932974c8ee4cbd8dbcbd494f079933d680ec65a6485\""
Apr 16 00:49:50.967502 containerd[1500]: time="2026-04-16T00:49:50.967423436Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Apr 16 00:49:50.993181 containerd[1500]: time="2026-04-16T00:49:50.992997753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 16 00:49:50.995140 containerd[1500]: time="2026-04-16T00:49:50.995061764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 16 00:49:50.995248 containerd[1500]: time="2026-04-16T00:49:50.995114550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 00:49:50.995415 containerd[1500]: time="2026-04-16T00:49:50.995236107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 00:49:51.020632 systemd[1]: Started cri-containerd-379b4b2e7dad6fa8b6695c8dc4b81df9eda88190625ab57415b5504a003dbdd2.scope - libcontainer container 379b4b2e7dad6fa8b6695c8dc4b81df9eda88190625ab57415b5504a003dbdd2.
Apr 16 00:49:51.062922 containerd[1500]: time="2026-04-16T00:49:51.062762484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2q6bn,Uid:2e3fb3b9-b268-4d41-bbed-e501cfdb3551,Namespace:kube-system,Attempt:0,} returns sandbox id \"379b4b2e7dad6fa8b6695c8dc4b81df9eda88190625ab57415b5504a003dbdd2\""
Apr 16 00:49:51.070162 containerd[1500]: time="2026-04-16T00:49:51.069524365Z" level=info msg="CreateContainer within sandbox \"379b4b2e7dad6fa8b6695c8dc4b81df9eda88190625ab57415b5504a003dbdd2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 16 00:49:51.095666 containerd[1500]: time="2026-04-16T00:49:51.095438104Z" level=info msg="CreateContainer within sandbox \"379b4b2e7dad6fa8b6695c8dc4b81df9eda88190625ab57415b5504a003dbdd2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"af9b6247a303a11f407f9d00819d407524dd9c2aca55dbeefaca04f507940088\""
Apr 16 00:49:51.098315 containerd[1500]: time="2026-04-16T00:49:51.098259002Z" level=info msg="StartContainer for \"af9b6247a303a11f407f9d00819d407524dd9c2aca55dbeefaca04f507940088\""
Apr 16 00:49:51.142177 systemd[1]: Started cri-containerd-af9b6247a303a11f407f9d00819d407524dd9c2aca55dbeefaca04f507940088.scope - libcontainer container af9b6247a303a11f407f9d00819d407524dd9c2aca55dbeefaca04f507940088.
Apr 16 00:49:51.184050 containerd[1500]: time="2026-04-16T00:49:51.183968241Z" level=info msg="StartContainer for \"af9b6247a303a11f407f9d00819d407524dd9c2aca55dbeefaca04f507940088\" returns successfully"
Apr 16 00:49:52.162041 kubelet[2699]: I0416 00:49:52.161920 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2q6bn" podStartSLOduration=2.1619020349999998 podStartE2EDuration="2.161902035s" podCreationTimestamp="2026-04-16 00:49:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 00:49:52.161460334 +0000 UTC m=+8.275316605" watchObservedRunningTime="2026-04-16 00:49:52.161902035 +0000 UTC m=+8.275758292"
Apr 16 00:49:54.181600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3989202044.mount: Deactivated successfully.
Apr 16 00:49:56.090963 containerd[1500]: time="2026-04-16T00:49:56.089625724Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:49:56.091679 containerd[1500]: time="2026-04-16T00:49:56.091638574Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Apr 16 00:49:56.091908 containerd[1500]: time="2026-04-16T00:49:56.091866166Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:49:56.096003 containerd[1500]: time="2026-04-16T00:49:56.095970214Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:49:56.097109 containerd[1500]: time="2026-04-16T00:49:56.097054325Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 5.129539183s"
Apr 16 00:49:56.097195 containerd[1500]: time="2026-04-16T00:49:56.097147169Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Apr 16 00:49:56.105039 containerd[1500]: time="2026-04-16T00:49:56.104829701Z" level=info msg="CreateContainer within sandbox \"4e4e9d18fea8b616e6dc6932974c8ee4cbd8dbcbd494f079933d680ec65a6485\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 16 00:49:56.120703 containerd[1500]: time="2026-04-16T00:49:56.120661931Z" level=info msg="CreateContainer within sandbox \"4e4e9d18fea8b616e6dc6932974c8ee4cbd8dbcbd494f079933d680ec65a6485\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ebd026000c3feb5e6677c75706d6a0db4a8258afbc850b6fec33afc0411e702d\""
Apr 16 00:49:56.123037 containerd[1500]: time="2026-04-16T00:49:56.121904669Z" level=info msg="StartContainer for \"ebd026000c3feb5e6677c75706d6a0db4a8258afbc850b6fec33afc0411e702d\""
Apr 16 00:49:56.173262 systemd[1]: Started cri-containerd-ebd026000c3feb5e6677c75706d6a0db4a8258afbc850b6fec33afc0411e702d.scope - libcontainer container ebd026000c3feb5e6677c75706d6a0db4a8258afbc850b6fec33afc0411e702d.
Apr 16 00:49:56.216062 containerd[1500]: time="2026-04-16T00:49:56.215721320Z" level=info msg="StartContainer for \"ebd026000c3feb5e6677c75706d6a0db4a8258afbc850b6fec33afc0411e702d\" returns successfully" Apr 16 00:49:57.532735 kubelet[2699]: I0416 00:49:57.532487 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-rmlz9" podStartSLOduration=2.3991682 podStartE2EDuration="7.532463631s" podCreationTimestamp="2026-04-16 00:49:50 +0000 UTC" firstStartedPulling="2026-04-16 00:49:50.965769349 +0000 UTC m=+7.079625592" lastFinishedPulling="2026-04-16 00:49:56.09906478 +0000 UTC m=+12.212921023" observedRunningTime="2026-04-16 00:49:57.185757464 +0000 UTC m=+13.299613735" watchObservedRunningTime="2026-04-16 00:49:57.532463631 +0000 UTC m=+13.646319888" Apr 16 00:50:01.590982 sudo[1762]: pam_unix(sudo:session): session closed for user root Apr 16 00:50:01.616525 sshd[1759]: pam_unix(sshd:session): session closed for user core Apr 16 00:50:01.626643 systemd[1]: sshd@8-10.230.47.154:22-20.229.252.112:46796.service: Deactivated successfully. Apr 16 00:50:01.631191 systemd[1]: session-11.scope: Deactivated successfully. Apr 16 00:50:01.633006 systemd[1]: session-11.scope: Consumed 7.078s CPU time, 160.0M memory peak, 0B memory swap peak. Apr 16 00:50:01.635487 systemd-logind[1485]: Session 11 logged out. Waiting for processes to exit. Apr 16 00:50:01.638531 systemd-logind[1485]: Removed session 11. Apr 16 00:50:04.999532 systemd[1]: Created slice kubepods-besteffort-pod637d7d99_2bcd_41ef_8be7_610ffe748fda.slice - libcontainer container kubepods-besteffort-pod637d7d99_2bcd_41ef_8be7_610ffe748fda.slice. 
Apr 16 00:50:05.075516 kubelet[2699]: I0416 00:50:05.075445 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/637d7d99-2bcd-41ef-8be7-610ffe748fda-typha-certs\") pod \"calico-typha-5cbb76c76c-6j9qv\" (UID: \"637d7d99-2bcd-41ef-8be7-610ffe748fda\") " pod="calico-system/calico-typha-5cbb76c76c-6j9qv" Apr 16 00:50:05.076506 kubelet[2699]: I0416 00:50:05.076343 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc988\" (UniqueName: \"kubernetes.io/projected/637d7d99-2bcd-41ef-8be7-610ffe748fda-kube-api-access-tc988\") pod \"calico-typha-5cbb76c76c-6j9qv\" (UID: \"637d7d99-2bcd-41ef-8be7-610ffe748fda\") " pod="calico-system/calico-typha-5cbb76c76c-6j9qv" Apr 16 00:50:05.076506 kubelet[2699]: I0416 00:50:05.076420 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/637d7d99-2bcd-41ef-8be7-610ffe748fda-tigera-ca-bundle\") pod \"calico-typha-5cbb76c76c-6j9qv\" (UID: \"637d7d99-2bcd-41ef-8be7-610ffe748fda\") " pod="calico-system/calico-typha-5cbb76c76c-6j9qv" Apr 16 00:50:05.136509 systemd[1]: Created slice kubepods-besteffort-pod60307808_1d40_4c3b_b58e_554ca25f7c9e.slice - libcontainer container kubepods-besteffort-pod60307808_1d40_4c3b_b58e_554ca25f7c9e.slice. 
Apr 16 00:50:05.177722 kubelet[2699]: I0416 00:50:05.177618 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/60307808-1d40-4c3b-b58e-554ca25f7c9e-sys-fs\") pod \"calico-node-l9qfj\" (UID: \"60307808-1d40-4c3b-b58e-554ca25f7c9e\") " pod="calico-system/calico-node-l9qfj" Apr 16 00:50:05.177722 kubelet[2699]: I0416 00:50:05.177699 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/60307808-1d40-4c3b-b58e-554ca25f7c9e-var-lib-calico\") pod \"calico-node-l9qfj\" (UID: \"60307808-1d40-4c3b-b58e-554ca25f7c9e\") " pod="calico-system/calico-node-l9qfj" Apr 16 00:50:05.178087 kubelet[2699]: I0416 00:50:05.177748 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60307808-1d40-4c3b-b58e-554ca25f7c9e-tigera-ca-bundle\") pod \"calico-node-l9qfj\" (UID: \"60307808-1d40-4c3b-b58e-554ca25f7c9e\") " pod="calico-system/calico-node-l9qfj" Apr 16 00:50:05.178087 kubelet[2699]: I0416 00:50:05.177803 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/60307808-1d40-4c3b-b58e-554ca25f7c9e-nodeproc\") pod \"calico-node-l9qfj\" (UID: \"60307808-1d40-4c3b-b58e-554ca25f7c9e\") " pod="calico-system/calico-node-l9qfj" Apr 16 00:50:05.178087 kubelet[2699]: I0416 00:50:05.177832 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/60307808-1d40-4c3b-b58e-554ca25f7c9e-bpffs\") pod \"calico-node-l9qfj\" (UID: \"60307808-1d40-4c3b-b58e-554ca25f7c9e\") " pod="calico-system/calico-node-l9qfj" Apr 16 00:50:05.178087 kubelet[2699]: I0416 00:50:05.177862 2699 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/60307808-1d40-4c3b-b58e-554ca25f7c9e-var-run-calico\") pod \"calico-node-l9qfj\" (UID: \"60307808-1d40-4c3b-b58e-554ca25f7c9e\") " pod="calico-system/calico-node-l9qfj" Apr 16 00:50:05.178087 kubelet[2699]: I0416 00:50:05.177889 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/60307808-1d40-4c3b-b58e-554ca25f7c9e-cni-net-dir\") pod \"calico-node-l9qfj\" (UID: \"60307808-1d40-4c3b-b58e-554ca25f7c9e\") " pod="calico-system/calico-node-l9qfj" Apr 16 00:50:05.178298 kubelet[2699]: I0416 00:50:05.177915 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/60307808-1d40-4c3b-b58e-554ca25f7c9e-node-certs\") pod \"calico-node-l9qfj\" (UID: \"60307808-1d40-4c3b-b58e-554ca25f7c9e\") " pod="calico-system/calico-node-l9qfj" Apr 16 00:50:05.178298 kubelet[2699]: I0416 00:50:05.177974 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/60307808-1d40-4c3b-b58e-554ca25f7c9e-cni-bin-dir\") pod \"calico-node-l9qfj\" (UID: \"60307808-1d40-4c3b-b58e-554ca25f7c9e\") " pod="calico-system/calico-node-l9qfj" Apr 16 00:50:05.178298 kubelet[2699]: I0416 00:50:05.178004 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/60307808-1d40-4c3b-b58e-554ca25f7c9e-flexvol-driver-host\") pod \"calico-node-l9qfj\" (UID: \"60307808-1d40-4c3b-b58e-554ca25f7c9e\") " pod="calico-system/calico-node-l9qfj" Apr 16 00:50:05.178298 kubelet[2699]: I0416 00:50:05.178054 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"policysync\" (UniqueName: \"kubernetes.io/host-path/60307808-1d40-4c3b-b58e-554ca25f7c9e-policysync\") pod \"calico-node-l9qfj\" (UID: \"60307808-1d40-4c3b-b58e-554ca25f7c9e\") " pod="calico-system/calico-node-l9qfj" Apr 16 00:50:05.178298 kubelet[2699]: I0416 00:50:05.178086 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60307808-1d40-4c3b-b58e-554ca25f7c9e-xtables-lock\") pod \"calico-node-l9qfj\" (UID: \"60307808-1d40-4c3b-b58e-554ca25f7c9e\") " pod="calico-system/calico-node-l9qfj" Apr 16 00:50:05.178611 kubelet[2699]: I0416 00:50:05.178132 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2bc4\" (UniqueName: \"kubernetes.io/projected/60307808-1d40-4c3b-b58e-554ca25f7c9e-kube-api-access-r2bc4\") pod \"calico-node-l9qfj\" (UID: \"60307808-1d40-4c3b-b58e-554ca25f7c9e\") " pod="calico-system/calico-node-l9qfj" Apr 16 00:50:05.178611 kubelet[2699]: I0416 00:50:05.178189 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/60307808-1d40-4c3b-b58e-554ca25f7c9e-cni-log-dir\") pod \"calico-node-l9qfj\" (UID: \"60307808-1d40-4c3b-b58e-554ca25f7c9e\") " pod="calico-system/calico-node-l9qfj" Apr 16 00:50:05.178611 kubelet[2699]: I0416 00:50:05.178215 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60307808-1d40-4c3b-b58e-554ca25f7c9e-lib-modules\") pod \"calico-node-l9qfj\" (UID: \"60307808-1d40-4c3b-b58e-554ca25f7c9e\") " pod="calico-system/calico-node-l9qfj" Apr 16 00:50:05.238981 kubelet[2699]: E0416 00:50:05.238584 2699 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4f5m5" podUID="2c74cad1-a379-4d46-b91e-dca9240bc056" Apr 16 00:50:05.280069 kubelet[2699]: I0416 00:50:05.279085 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2c74cad1-a379-4d46-b91e-dca9240bc056-socket-dir\") pod \"csi-node-driver-4f5m5\" (UID: \"2c74cad1-a379-4d46-b91e-dca9240bc056\") " pod="calico-system/csi-node-driver-4f5m5" Apr 16 00:50:05.280069 kubelet[2699]: I0416 00:50:05.279145 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgdjt\" (UniqueName: \"kubernetes.io/projected/2c74cad1-a379-4d46-b91e-dca9240bc056-kube-api-access-xgdjt\") pod \"csi-node-driver-4f5m5\" (UID: \"2c74cad1-a379-4d46-b91e-dca9240bc056\") " pod="calico-system/csi-node-driver-4f5m5" Apr 16 00:50:05.280069 kubelet[2699]: I0416 00:50:05.279325 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c74cad1-a379-4d46-b91e-dca9240bc056-kubelet-dir\") pod \"csi-node-driver-4f5m5\" (UID: \"2c74cad1-a379-4d46-b91e-dca9240bc056\") " pod="calico-system/csi-node-driver-4f5m5" Apr 16 00:50:05.280069 kubelet[2699]: I0416 00:50:05.279364 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2c74cad1-a379-4d46-b91e-dca9240bc056-varrun\") pod \"csi-node-driver-4f5m5\" (UID: \"2c74cad1-a379-4d46-b91e-dca9240bc056\") " pod="calico-system/csi-node-driver-4f5m5" Apr 16 00:50:05.280069 kubelet[2699]: I0416 00:50:05.279416 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2c74cad1-a379-4d46-b91e-dca9240bc056-registration-dir\") pod 
\"csi-node-driver-4f5m5\" (UID: \"2c74cad1-a379-4d46-b91e-dca9240bc056\") " pod="calico-system/csi-node-driver-4f5m5" Apr 16 00:50:05.286509 kubelet[2699]: E0416 00:50:05.286467 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:05.286509 kubelet[2699]: W0416 00:50:05.286506 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:05.286695 kubelet[2699]: E0416 00:50:05.286669 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:05.288666 kubelet[2699]: E0416 00:50:05.288495 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:05.288666 kubelet[2699]: W0416 00:50:05.288634 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:05.288666 kubelet[2699]: E0416 00:50:05.288655 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:05.289951 kubelet[2699]: E0416 00:50:05.289069 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:05.289951 kubelet[2699]: W0416 00:50:05.289090 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:05.289951 kubelet[2699]: E0416 00:50:05.289127 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:05.289951 kubelet[2699]: E0416 00:50:05.289471 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:05.289951 kubelet[2699]: W0416 00:50:05.289485 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:05.289951 kubelet[2699]: E0416 00:50:05.289500 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:05.290570 kubelet[2699]: E0416 00:50:05.290533 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:05.290570 kubelet[2699]: W0416 00:50:05.290570 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:05.290701 kubelet[2699]: E0416 00:50:05.290587 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:05.290942 kubelet[2699]: E0416 00:50:05.290900 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:05.291017 kubelet[2699]: W0416 00:50:05.290922 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:05.291017 kubelet[2699]: E0416 00:50:05.290968 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:05.295957 kubelet[2699]: E0416 00:50:05.295319 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:05.295957 kubelet[2699]: W0416 00:50:05.295354 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:05.295957 kubelet[2699]: E0416 00:50:05.295479 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:05.295957 kubelet[2699]: E0416 00:50:05.295868 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:05.295957 kubelet[2699]: W0416 00:50:05.295883 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:05.299990 kubelet[2699]: E0416 00:50:05.297996 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:05.300311 kubelet[2699]: E0416 00:50:05.300259 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:05.300311 kubelet[2699]: W0416 00:50:05.300310 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:05.300411 kubelet[2699]: E0416 00:50:05.300329 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:05.303566 kubelet[2699]: E0416 00:50:05.303536 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:05.303566 kubelet[2699]: W0416 00:50:05.303563 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:05.303691 kubelet[2699]: E0416 00:50:05.303582 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:05.308429 kubelet[2699]: E0416 00:50:05.308366 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:05.308429 kubelet[2699]: W0416 00:50:05.308395 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:05.308429 kubelet[2699]: E0416 00:50:05.308413 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:05.308992 kubelet[2699]: E0416 00:50:05.308675 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:05.308992 kubelet[2699]: W0416 00:50:05.308707 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:05.308992 kubelet[2699]: E0416 00:50:05.308721 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:05.309972 kubelet[2699]: E0416 00:50:05.309718 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:05.309972 kubelet[2699]: W0416 00:50:05.309744 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:05.309972 kubelet[2699]: E0416 00:50:05.309774 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:05.310172 kubelet[2699]: E0416 00:50:05.310141 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:05.310172 kubelet[2699]: W0416 00:50:05.310161 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:05.310275 kubelet[2699]: E0416 00:50:05.310176 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:05.312954 kubelet[2699]: E0416 00:50:05.312752 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:05.312954 kubelet[2699]: W0416 00:50:05.312774 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:05.312954 kubelet[2699]: E0416 00:50:05.312790 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:05.316060 kubelet[2699]: E0416 00:50:05.316010 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:05.316060 kubelet[2699]: W0416 00:50:05.316041 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:05.316060 kubelet[2699]: E0416 00:50:05.316061 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:05.316898 kubelet[2699]: E0416 00:50:05.316871 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:05.316898 kubelet[2699]: W0416 00:50:05.316892 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:05.318258 kubelet[2699]: E0416 00:50:05.316907 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:05.318258 kubelet[2699]: E0416 00:50:05.317274 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:05.318258 kubelet[2699]: W0416 00:50:05.317287 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:05.318258 kubelet[2699]: E0416 00:50:05.317302 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:05.320199 containerd[1500]: time="2026-04-16T00:50:05.319314768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cbb76c76c-6j9qv,Uid:637d7d99-2bcd-41ef-8be7-610ffe748fda,Namespace:calico-system,Attempt:0,}" Apr 16 00:50:05.320588 kubelet[2699]: E0416 00:50:05.320423 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:05.320588 kubelet[2699]: W0416 00:50:05.320438 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:05.320684 kubelet[2699]: E0416 00:50:05.320655 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:05.323165 kubelet[2699]: E0416 00:50:05.321760 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:05.323165 kubelet[2699]: W0416 00:50:05.321780 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:05.323165 kubelet[2699]: E0416 00:50:05.321993 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:05.323165 kubelet[2699]: E0416 00:50:05.322970 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:05.323165 kubelet[2699]: W0416 00:50:05.322984 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:05.323165 kubelet[2699]: E0416 00:50:05.322998 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:05.323985 kubelet[2699]: E0416 00:50:05.323784 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:05.323985 kubelet[2699]: W0416 00:50:05.323804 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:05.323985 kubelet[2699]: E0416 00:50:05.323819 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:05.325083 kubelet[2699]: E0416 00:50:05.324509 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:05.325083 kubelet[2699]: W0416 00:50:05.324522 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:05.325083 kubelet[2699]: E0416 00:50:05.324539 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:05.381310 kubelet[2699]: E0416 00:50:05.380279 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:05.381310 kubelet[2699]: W0416 00:50:05.380306 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:05.381310 kubelet[2699]: E0416 00:50:05.380329 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:05.381310 kubelet[2699]: E0416 00:50:05.380606 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:05.381310 kubelet[2699]: W0416 00:50:05.380619 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:05.381310 kubelet[2699]: E0416 00:50:05.380634 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:05.381310 kubelet[2699]: E0416 00:50:05.380894 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:05.381310 kubelet[2699]: W0416 00:50:05.380912 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:05.381310 kubelet[2699]: E0416 00:50:05.380946 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 16 00:50:05.382186 kubelet[2699]: E0416 00:50:05.381309 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:05.382186 kubelet[2699]: W0416 00:50:05.381364 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:05.382186 kubelet[2699]: E0416 00:50:05.381379 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:05.382186 kubelet[2699]: E0416 00:50:05.381698 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:05.382186 kubelet[2699]: W0416 00:50:05.381712 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:05.382186 kubelet[2699]: E0416 00:50:05.381726 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:05.382186 kubelet[2699]: E0416 00:50:05.382021 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:05.382186 kubelet[2699]: W0416 00:50:05.382049 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:05.382186 kubelet[2699]: E0416 00:50:05.382065 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:05.384957 kubelet[2699]: E0416 00:50:05.384174 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:05.384957 kubelet[2699]: W0416 00:50:05.384207 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:05.384957 kubelet[2699]: E0416 00:50:05.384225 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:05.384957 kubelet[2699]: E0416 00:50:05.384681 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:05.384957 kubelet[2699]: W0416 00:50:05.384726 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:05.384957 kubelet[2699]: E0416 00:50:05.384744 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:05.388960 kubelet[2699]: E0416 00:50:05.388901 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:05.388960 kubelet[2699]: W0416 00:50:05.388922 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:05.390443 kubelet[2699]: E0416 00:50:05.388971 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:05.390443 kubelet[2699]: E0416 00:50:05.389554 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:05.390443 kubelet[2699]: W0416 00:50:05.389569 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:05.390443 kubelet[2699]: E0416 00:50:05.389584 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:05.390443 kubelet[2699]: E0416 00:50:05.390422 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:05.390443 kubelet[2699]: W0416 00:50:05.390436 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:05.390443 kubelet[2699]: E0416 00:50:05.390451 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:05.390985 kubelet[2699]: E0416 00:50:05.390955 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:05.390985 kubelet[2699]: W0416 00:50:05.390982 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:05.391151 kubelet[2699]: E0416 00:50:05.391001 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:05.392880 kubelet[2699]: E0416 00:50:05.391969 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:05.392880 kubelet[2699]: W0416 00:50:05.391990 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:05.392880 kubelet[2699]: E0416 00:50:05.392009 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:05.392880 kubelet[2699]: E0416 00:50:05.392347 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:05.392880 kubelet[2699]: W0416 00:50:05.392360 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:05.392880 kubelet[2699]: E0416 00:50:05.392375 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:05.394171 kubelet[2699]: E0416 00:50:05.393489 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:05.394171 kubelet[2699]: W0416 00:50:05.393504 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:05.394171 kubelet[2699]: E0416 00:50:05.393519 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:05.397983 kubelet[2699]: E0416 00:50:05.396992 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:05.397983 kubelet[2699]: W0416 00:50:05.397016 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:05.397983 kubelet[2699]: E0416 00:50:05.397044 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:05.399083 kubelet[2699]: E0416 00:50:05.399018 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:05.399083 kubelet[2699]: W0416 00:50:05.399048 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:05.399083 kubelet[2699]: E0416 00:50:05.399066 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:05.400168 kubelet[2699]: E0416 00:50:05.399417 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:05.400168 kubelet[2699]: W0416 00:50:05.399436 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:05.400168 kubelet[2699]: E0416 00:50:05.399451 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:05.400168 kubelet[2699]: E0416 00:50:05.399820 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:05.400168 kubelet[2699]: W0416 00:50:05.399833 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:05.400168 kubelet[2699]: E0416 00:50:05.399847 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:05.402979 kubelet[2699]: E0416 00:50:05.402712 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:05.402979 kubelet[2699]: W0416 00:50:05.402732 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:05.402979 kubelet[2699]: E0416 00:50:05.402749 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:05.403144 kubelet[2699]: E0416 00:50:05.403120 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:05.403144 kubelet[2699]: W0416 00:50:05.403134 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:05.403144 kubelet[2699]: E0416 00:50:05.403149 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:05.404948 kubelet[2699]: E0416 00:50:05.403653 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:05.404948 kubelet[2699]: W0416 00:50:05.403671 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:05.404948 kubelet[2699]: E0416 00:50:05.403688 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:05.405152 kubelet[2699]: E0416 00:50:05.405019 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:05.405152 kubelet[2699]: W0416 00:50:05.405044 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:05.405152 kubelet[2699]: E0416 00:50:05.405060 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:05.406374 kubelet[2699]: E0416 00:50:05.405888 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:05.406374 kubelet[2699]: W0416 00:50:05.405909 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:05.406374 kubelet[2699]: E0416 00:50:05.405942 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:05.410815 kubelet[2699]: E0416 00:50:05.410022 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:05.410815 kubelet[2699]: W0416 00:50:05.410053 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:05.410815 kubelet[2699]: E0416 00:50:05.410071 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:05.428737 kubelet[2699]: E0416 00:50:05.428674 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:05.428737 kubelet[2699]: W0416 00:50:05.428727 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:05.429048 kubelet[2699]: E0416 00:50:05.428758 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:05.447228 containerd[1500]: time="2026-04-16T00:50:05.446997820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l9qfj,Uid:60307808-1d40-4c3b-b58e-554ca25f7c9e,Namespace:calico-system,Attempt:0,}"
Apr 16 00:50:05.448803 containerd[1500]: time="2026-04-16T00:50:05.448692063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 16 00:50:05.449212 containerd[1500]: time="2026-04-16T00:50:05.449092856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 16 00:50:05.449212 containerd[1500]: time="2026-04-16T00:50:05.449173990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 00:50:05.450203 containerd[1500]: time="2026-04-16T00:50:05.450148598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 00:50:05.488870 systemd[1]: Started cri-containerd-e8b2ac40721a4340960f22f05600fe6b99036e7efe4d3e506d70e59bd4881964.scope - libcontainer container e8b2ac40721a4340960f22f05600fe6b99036e7efe4d3e506d70e59bd4881964.
Apr 16 00:50:05.514391 containerd[1500]: time="2026-04-16T00:50:05.514209016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 16 00:50:05.515852 containerd[1500]: time="2026-04-16T00:50:05.514361683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 16 00:50:05.515852 containerd[1500]: time="2026-04-16T00:50:05.514810277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 00:50:05.515852 containerd[1500]: time="2026-04-16T00:50:05.515377512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 00:50:05.560786 systemd[1]: Started cri-containerd-224419e6995e5b6314c754f7884a35f1ca93e95ad6bc656eb96bba6e08762fee.scope - libcontainer container 224419e6995e5b6314c754f7884a35f1ca93e95ad6bc656eb96bba6e08762fee.
Apr 16 00:50:05.609262 containerd[1500]: time="2026-04-16T00:50:05.609114052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cbb76c76c-6j9qv,Uid:637d7d99-2bcd-41ef-8be7-610ffe748fda,Namespace:calico-system,Attempt:0,} returns sandbox id \"e8b2ac40721a4340960f22f05600fe6b99036e7efe4d3e506d70e59bd4881964\""
Apr 16 00:50:05.612585 containerd[1500]: time="2026-04-16T00:50:05.612425425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Apr 16 00:50:05.720050 containerd[1500]: time="2026-04-16T00:50:05.719976826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l9qfj,Uid:60307808-1d40-4c3b-b58e-554ca25f7c9e,Namespace:calico-system,Attempt:0,} returns sandbox id \"224419e6995e5b6314c754f7884a35f1ca93e95ad6bc656eb96bba6e08762fee\""
Apr 16 00:50:07.072791 kubelet[2699]: E0416 00:50:07.071671 2699 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4f5m5" podUID="2c74cad1-a379-4d46-b91e-dca9240bc056"
Apr 16 00:50:07.564774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4281694969.mount: Deactivated successfully.
Apr 16 00:50:09.073343 kubelet[2699]: E0416 00:50:09.071824 2699 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4f5m5" podUID="2c74cad1-a379-4d46-b91e-dca9240bc056"
Apr 16 00:50:09.862341 containerd[1500]: time="2026-04-16T00:50:09.862230737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:50:09.863960 containerd[1500]: time="2026-04-16T00:50:09.863800242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596"
Apr 16 00:50:09.865072 containerd[1500]: time="2026-04-16T00:50:09.865009468Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:50:09.868467 containerd[1500]: time="2026-04-16T00:50:09.868128782Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:50:09.869156 containerd[1500]: time="2026-04-16T00:50:09.869109916Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 4.256639111s"
Apr 16 00:50:09.869244 containerd[1500]: time="2026-04-16T00:50:09.869156216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Apr 16 00:50:09.871490 containerd[1500]: time="2026-04-16T00:50:09.871424241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Apr 16 00:50:09.903031 containerd[1500]: time="2026-04-16T00:50:09.902712019Z" level=info msg="CreateContainer within sandbox \"e8b2ac40721a4340960f22f05600fe6b99036e7efe4d3e506d70e59bd4881964\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Apr 16 00:50:09.924096 containerd[1500]: time="2026-04-16T00:50:09.923881780Z" level=info msg="CreateContainer within sandbox \"e8b2ac40721a4340960f22f05600fe6b99036e7efe4d3e506d70e59bd4881964\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"63eeec8ca41355212a48b56faf5d8af4fb032576571e541c5ba331b833ee14b0\""
Apr 16 00:50:09.928192 containerd[1500]: time="2026-04-16T00:50:09.927091048Z" level=info msg="StartContainer for \"63eeec8ca41355212a48b56faf5d8af4fb032576571e541c5ba331b833ee14b0\""
Apr 16 00:50:09.997333 systemd[1]: Started cri-containerd-63eeec8ca41355212a48b56faf5d8af4fb032576571e541c5ba331b833ee14b0.scope - libcontainer container 63eeec8ca41355212a48b56faf5d8af4fb032576571e541c5ba331b833ee14b0.
Apr 16 00:50:10.064470 containerd[1500]: time="2026-04-16T00:50:10.064263172Z" level=info msg="StartContainer for \"63eeec8ca41355212a48b56faf5d8af4fb032576571e541c5ba331b833ee14b0\" returns successfully"
Apr 16 00:50:10.286557 kubelet[2699]: I0416 00:50:10.285557 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5cbb76c76c-6j9qv" podStartSLOduration=2.027129061 podStartE2EDuration="6.285530322s" podCreationTimestamp="2026-04-16 00:50:04 +0000 UTC" firstStartedPulling="2026-04-16 00:50:05.61211689 +0000 UTC m=+21.725973140" lastFinishedPulling="2026-04-16 00:50:09.870518143 +0000 UTC m=+25.984374401" observedRunningTime="2026-04-16 00:50:10.284489464 +0000 UTC m=+26.398345737" watchObservedRunningTime="2026-04-16 00:50:10.285530322 +0000 UTC m=+26.399386579"
Apr 16 00:50:10.296224 kubelet[2699]: E0416 00:50:10.295349 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:10.296224 kubelet[2699]: W0416 00:50:10.295405 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:10.296224 kubelet[2699]: E0416 00:50:10.295449 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:10.300832 kubelet[2699]: E0416 00:50:10.299511 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:10.300832 kubelet[2699]: W0416 00:50:10.299563 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:10.300832 kubelet[2699]: E0416 00:50:10.299608 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:10.300832 kubelet[2699]: E0416 00:50:10.300561 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:10.300832 kubelet[2699]: W0416 00:50:10.300578 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:10.300832 kubelet[2699]: E0416 00:50:10.300595 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:10.305127 kubelet[2699]: E0416 00:50:10.305066 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:10.305418 kubelet[2699]: W0416 00:50:10.305354 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:10.305982 kubelet[2699]: E0416 00:50:10.305646 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:10.309850 kubelet[2699]: E0416 00:50:10.309524 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:10.309850 kubelet[2699]: W0416 00:50:10.309584 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:10.309850 kubelet[2699]: E0416 00:50:10.309628 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:10.312408 kubelet[2699]: E0416 00:50:10.310627 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:10.312408 kubelet[2699]: W0416 00:50:10.311533 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:10.312408 kubelet[2699]: E0416 00:50:10.311581 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:10.315002 kubelet[2699]: E0416 00:50:10.314487 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:10.315002 kubelet[2699]: W0416 00:50:10.314621 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:10.315002 kubelet[2699]: E0416 00:50:10.314673 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:10.318471 kubelet[2699]: E0416 00:50:10.317688 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:10.318471 kubelet[2699]: W0416 00:50:10.317735 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:10.318471 kubelet[2699]: E0416 00:50:10.317778 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:10.321212 kubelet[2699]: E0416 00:50:10.320835 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:10.321212 kubelet[2699]: W0416 00:50:10.320881 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:10.321212 kubelet[2699]: E0416 00:50:10.320922 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:10.323477 kubelet[2699]: E0416 00:50:10.323252 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:10.323477 kubelet[2699]: W0416 00:50:10.323469 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:10.323602 kubelet[2699]: E0416 00:50:10.323506 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:10.324682 kubelet[2699]: E0416 00:50:10.324015 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:10.324682 kubelet[2699]: W0416 00:50:10.324039 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:10.324682 kubelet[2699]: E0416 00:50:10.324057 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:10.324682 kubelet[2699]: E0416 00:50:10.324499 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:10.324682 kubelet[2699]: W0416 00:50:10.324514 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:10.324682 kubelet[2699]: E0416 00:50:10.324529 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:10.326217 kubelet[2699]: E0416 00:50:10.326179 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:10.326217 kubelet[2699]: W0416 00:50:10.326214 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:10.326331 kubelet[2699]: E0416 00:50:10.326250 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:10.326621 kubelet[2699]: E0416 00:50:10.326593 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:10.326621 kubelet[2699]: W0416 00:50:10.326617 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:10.326727 kubelet[2699]: E0416 00:50:10.326633 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:10.328125 kubelet[2699]: E0416 00:50:10.328086 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:10.328125 kubelet[2699]: W0416 00:50:10.328118 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:10.328250 kubelet[2699]: E0416 00:50:10.328143 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:10.328961 kubelet[2699]: E0416 00:50:10.328762 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:10.328961 kubelet[2699]: W0416 00:50:10.328785 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:10.328961 kubelet[2699]: E0416 00:50:10.328802 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:10.330998 kubelet[2699]: E0416 00:50:10.329188 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:10.330998 kubelet[2699]: W0416 00:50:10.329209 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:10.330998 kubelet[2699]: E0416 00:50:10.329225 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:10.330998 kubelet[2699]: E0416 00:50:10.329572 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:10.330998 kubelet[2699]: W0416 00:50:10.329585 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:10.330998 kubelet[2699]: E0416 00:50:10.329601 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:10.330998 kubelet[2699]: E0416 00:50:10.330040 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:10.330998 kubelet[2699]: W0416 00:50:10.330066 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:10.330998 kubelet[2699]: E0416 00:50:10.330085 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:10.332450 kubelet[2699]: E0416 00:50:10.332391 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:10.332450 kubelet[2699]: W0416 00:50:10.332430 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:10.332573 kubelet[2699]: E0416 00:50:10.332460 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 00:50:10.333128 kubelet[2699]: E0416 00:50:10.333100 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 00:50:10.333128 kubelet[2699]: W0416 00:50:10.333123 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 00:50:10.333252 kubelet[2699]: E0416 00:50:10.333139 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:10.333583 kubelet[2699]: E0416 00:50:10.333556 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:10.333583 kubelet[2699]: W0416 00:50:10.333579 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:10.333583 kubelet[2699]: E0416 00:50:10.333595 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:10.334139 kubelet[2699]: E0416 00:50:10.334051 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:10.334139 kubelet[2699]: W0416 00:50:10.334066 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:10.334139 kubelet[2699]: E0416 00:50:10.334081 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:10.334432 kubelet[2699]: E0416 00:50:10.334405 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:10.334432 kubelet[2699]: W0416 00:50:10.334427 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:10.334558 kubelet[2699]: E0416 00:50:10.334442 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:10.334829 kubelet[2699]: E0416 00:50:10.334731 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:10.334829 kubelet[2699]: W0416 00:50:10.334752 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:10.334829 kubelet[2699]: E0416 00:50:10.334770 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:10.336513 kubelet[2699]: E0416 00:50:10.336485 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:10.336513 kubelet[2699]: W0416 00:50:10.336509 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:10.336666 kubelet[2699]: E0416 00:50:10.336532 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:10.337954 kubelet[2699]: E0416 00:50:10.337543 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:10.337954 kubelet[2699]: W0416 00:50:10.337564 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:10.337954 kubelet[2699]: E0416 00:50:10.337579 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:10.338277 kubelet[2699]: E0416 00:50:10.338248 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:10.338277 kubelet[2699]: W0416 00:50:10.338271 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:10.338382 kubelet[2699]: E0416 00:50:10.338288 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:10.339316 kubelet[2699]: E0416 00:50:10.339289 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:10.339316 kubelet[2699]: W0416 00:50:10.339311 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:10.339452 kubelet[2699]: E0416 00:50:10.339327 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:10.340291 kubelet[2699]: E0416 00:50:10.339992 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:10.340291 kubelet[2699]: W0416 00:50:10.340013 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:10.340291 kubelet[2699]: E0416 00:50:10.340030 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:10.340827 kubelet[2699]: E0416 00:50:10.340797 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:10.340905 kubelet[2699]: W0416 00:50:10.340832 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:10.340905 kubelet[2699]: E0416 00:50:10.340851 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:10.342062 kubelet[2699]: E0416 00:50:10.341317 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:10.342062 kubelet[2699]: W0416 00:50:10.341332 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:10.342062 kubelet[2699]: E0416 00:50:10.341346 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:10.343744 kubelet[2699]: E0416 00:50:10.343697 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:10.343744 kubelet[2699]: W0416 00:50:10.343728 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:10.343870 kubelet[2699]: E0416 00:50:10.343749 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:11.070924 kubelet[2699]: E0416 00:50:11.070790 2699 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4f5m5" podUID="2c74cad1-a379-4d46-b91e-dca9240bc056" Apr 16 00:50:11.235199 kubelet[2699]: I0416 00:50:11.235151 2699 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 00:50:11.336849 kubelet[2699]: E0416 00:50:11.336096 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.336849 kubelet[2699]: W0416 00:50:11.336141 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.336849 kubelet[2699]: E0416 00:50:11.336184 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:11.338167 kubelet[2699]: E0416 00:50:11.337706 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.338167 kubelet[2699]: W0416 00:50:11.337724 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.338167 kubelet[2699]: E0416 00:50:11.337746 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:11.338167 kubelet[2699]: E0416 00:50:11.338073 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.338167 kubelet[2699]: W0416 00:50:11.338087 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.338167 kubelet[2699]: E0416 00:50:11.338102 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:11.339108 kubelet[2699]: E0416 00:50:11.338355 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.339108 kubelet[2699]: W0416 00:50:11.338368 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.339108 kubelet[2699]: E0416 00:50:11.338382 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:11.339108 kubelet[2699]: E0416 00:50:11.338682 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.339108 kubelet[2699]: W0416 00:50:11.338699 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.339108 kubelet[2699]: E0416 00:50:11.338717 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:11.339108 kubelet[2699]: E0416 00:50:11.339005 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.339108 kubelet[2699]: W0416 00:50:11.339018 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.339108 kubelet[2699]: E0416 00:50:11.339032 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:11.341680 kubelet[2699]: E0416 00:50:11.339292 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.341680 kubelet[2699]: W0416 00:50:11.339305 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.341680 kubelet[2699]: E0416 00:50:11.339320 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:11.341680 kubelet[2699]: E0416 00:50:11.339553 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.341680 kubelet[2699]: W0416 00:50:11.339565 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.341680 kubelet[2699]: E0416 00:50:11.339583 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:11.341680 kubelet[2699]: E0416 00:50:11.339857 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.341680 kubelet[2699]: W0416 00:50:11.339870 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.341680 kubelet[2699]: E0416 00:50:11.339884 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:11.341680 kubelet[2699]: E0416 00:50:11.340177 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.343652 kubelet[2699]: W0416 00:50:11.340190 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.343652 kubelet[2699]: E0416 00:50:11.340203 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:11.343652 kubelet[2699]: E0416 00:50:11.340478 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.343652 kubelet[2699]: W0416 00:50:11.340491 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.343652 kubelet[2699]: E0416 00:50:11.340504 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:11.343652 kubelet[2699]: E0416 00:50:11.340772 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.343652 kubelet[2699]: W0416 00:50:11.340784 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.343652 kubelet[2699]: E0416 00:50:11.340800 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:11.343652 kubelet[2699]: E0416 00:50:11.341214 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.343652 kubelet[2699]: W0416 00:50:11.341227 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.344113 kubelet[2699]: E0416 00:50:11.341248 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:11.344113 kubelet[2699]: E0416 00:50:11.341488 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.344113 kubelet[2699]: W0416 00:50:11.341501 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.344113 kubelet[2699]: E0416 00:50:11.341514 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:11.344113 kubelet[2699]: E0416 00:50:11.341752 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.344113 kubelet[2699]: W0416 00:50:11.341768 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.344113 kubelet[2699]: E0416 00:50:11.341781 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:11.344113 kubelet[2699]: E0416 00:50:11.342131 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.344113 kubelet[2699]: W0416 00:50:11.342145 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.344113 kubelet[2699]: E0416 00:50:11.342158 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:11.346216 kubelet[2699]: E0416 00:50:11.342479 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.346216 kubelet[2699]: W0416 00:50:11.342492 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.346216 kubelet[2699]: E0416 00:50:11.342514 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:11.346216 kubelet[2699]: E0416 00:50:11.342793 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.346216 kubelet[2699]: W0416 00:50:11.342806 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.346216 kubelet[2699]: E0416 00:50:11.342819 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:11.346216 kubelet[2699]: E0416 00:50:11.343171 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.346216 kubelet[2699]: W0416 00:50:11.343184 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.346216 kubelet[2699]: E0416 00:50:11.343198 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:11.346216 kubelet[2699]: E0416 00:50:11.343429 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.346597 kubelet[2699]: W0416 00:50:11.343444 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.346597 kubelet[2699]: E0416 00:50:11.343458 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:11.346597 kubelet[2699]: E0416 00:50:11.343712 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.346597 kubelet[2699]: W0416 00:50:11.343724 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.346597 kubelet[2699]: E0416 00:50:11.343737 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:11.346597 kubelet[2699]: E0416 00:50:11.344008 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.346597 kubelet[2699]: W0416 00:50:11.344020 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.346597 kubelet[2699]: E0416 00:50:11.344034 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:11.346597 kubelet[2699]: E0416 00:50:11.344502 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.346597 kubelet[2699]: W0416 00:50:11.344516 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.346993 kubelet[2699]: E0416 00:50:11.344551 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:11.346993 kubelet[2699]: E0416 00:50:11.344826 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.346993 kubelet[2699]: W0416 00:50:11.344839 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.346993 kubelet[2699]: E0416 00:50:11.344852 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:11.346993 kubelet[2699]: E0416 00:50:11.345151 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.346993 kubelet[2699]: W0416 00:50:11.345164 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.346993 kubelet[2699]: E0416 00:50:11.345177 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:11.346993 kubelet[2699]: E0416 00:50:11.345397 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.346993 kubelet[2699]: W0416 00:50:11.345409 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.346993 kubelet[2699]: E0416 00:50:11.345423 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:11.347429 kubelet[2699]: E0416 00:50:11.345668 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.347429 kubelet[2699]: W0416 00:50:11.345681 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.347429 kubelet[2699]: E0416 00:50:11.345694 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:11.347429 kubelet[2699]: E0416 00:50:11.346234 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.347429 kubelet[2699]: W0416 00:50:11.346248 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.347429 kubelet[2699]: E0416 00:50:11.346262 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:11.347429 kubelet[2699]: E0416 00:50:11.346559 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.347429 kubelet[2699]: W0416 00:50:11.346572 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.347429 kubelet[2699]: E0416 00:50:11.346586 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:11.347429 kubelet[2699]: E0416 00:50:11.346817 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.347778 kubelet[2699]: W0416 00:50:11.346829 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.347778 kubelet[2699]: E0416 00:50:11.346843 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:11.347778 kubelet[2699]: E0416 00:50:11.347176 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.347778 kubelet[2699]: W0416 00:50:11.347189 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.347778 kubelet[2699]: E0416 00:50:11.347202 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:11.354527 kubelet[2699]: E0416 00:50:11.354317 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.354527 kubelet[2699]: W0416 00:50:11.354461 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.354527 kubelet[2699]: E0416 00:50:11.354492 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 00:50:11.356676 kubelet[2699]: E0416 00:50:11.356406 2699 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 00:50:11.356676 kubelet[2699]: W0416 00:50:11.356523 2699 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 00:50:11.356676 kubelet[2699]: E0416 00:50:11.356558 2699 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 00:50:11.590854 containerd[1500]: time="2026-04-16T00:50:11.590641182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:50:11.594481 containerd[1500]: time="2026-04-16T00:50:11.594423923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 16 00:50:11.596414 containerd[1500]: time="2026-04-16T00:50:11.595760110Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:50:11.612185 containerd[1500]: time="2026-04-16T00:50:11.612132067Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:50:11.613146 containerd[1500]: time="2026-04-16T00:50:11.613109235Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.74164022s" Apr 16 00:50:11.613803 containerd[1500]: time="2026-04-16T00:50:11.613769980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 16 00:50:11.620831 containerd[1500]: time="2026-04-16T00:50:11.620764695Z" level=info msg="CreateContainer within sandbox \"224419e6995e5b6314c754f7884a35f1ca93e95ad6bc656eb96bba6e08762fee\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 16 00:50:11.638010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2050987676.mount: Deactivated successfully. Apr 16 00:50:11.650530 containerd[1500]: time="2026-04-16T00:50:11.650325519Z" level=info msg="CreateContainer within sandbox \"224419e6995e5b6314c754f7884a35f1ca93e95ad6bc656eb96bba6e08762fee\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"aac9d80ac22a3edf9ff20a328541ebc1471517c89c74b1f71c0338cdae9be823\"" Apr 16 00:50:11.651278 containerd[1500]: time="2026-04-16T00:50:11.651231062Z" level=info msg="StartContainer for \"aac9d80ac22a3edf9ff20a328541ebc1471517c89c74b1f71c0338cdae9be823\"" Apr 16 00:50:11.709225 systemd[1]: Started cri-containerd-aac9d80ac22a3edf9ff20a328541ebc1471517c89c74b1f71c0338cdae9be823.scope - libcontainer container aac9d80ac22a3edf9ff20a328541ebc1471517c89c74b1f71c0338cdae9be823. Apr 16 00:50:11.760000 containerd[1500]: time="2026-04-16T00:50:11.757558903Z" level=info msg="StartContainer for \"aac9d80ac22a3edf9ff20a328541ebc1471517c89c74b1f71c0338cdae9be823\" returns successfully" Apr 16 00:50:11.780791 systemd[1]: cri-containerd-aac9d80ac22a3edf9ff20a328541ebc1471517c89c74b1f71c0338cdae9be823.scope: Deactivated successfully. 
Apr 16 00:50:11.997746 containerd[1500]: time="2026-04-16T00:50:11.997589045Z" level=info msg="shim disconnected" id=aac9d80ac22a3edf9ff20a328541ebc1471517c89c74b1f71c0338cdae9be823 namespace=k8s.io Apr 16 00:50:11.997746 containerd[1500]: time="2026-04-16T00:50:11.997715269Z" level=warning msg="cleaning up after shim disconnected" id=aac9d80ac22a3edf9ff20a328541ebc1471517c89c74b1f71c0338cdae9be823 namespace=k8s.io Apr 16 00:50:11.997746 containerd[1500]: time="2026-04-16T00:50:11.997741007Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 00:50:12.242983 containerd[1500]: time="2026-04-16T00:50:12.242233050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 16 00:50:12.633184 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aac9d80ac22a3edf9ff20a328541ebc1471517c89c74b1f71c0338cdae9be823-rootfs.mount: Deactivated successfully. Apr 16 00:50:13.071068 kubelet[2699]: E0416 00:50:13.070880 2699 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4f5m5" podUID="2c74cad1-a379-4d46-b91e-dca9240bc056" Apr 16 00:50:15.070577 kubelet[2699]: E0416 00:50:15.070511 2699 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4f5m5" podUID="2c74cad1-a379-4d46-b91e-dca9240bc056" Apr 16 00:50:17.071425 kubelet[2699]: E0416 00:50:17.071344 2699 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4f5m5" 
podUID="2c74cad1-a379-4d46-b91e-dca9240bc056" Apr 16 00:50:19.071127 kubelet[2699]: E0416 00:50:19.070986 2699 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4f5m5" podUID="2c74cad1-a379-4d46-b91e-dca9240bc056" Apr 16 00:50:21.071789 kubelet[2699]: E0416 00:50:21.070994 2699 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4f5m5" podUID="2c74cad1-a379-4d46-b91e-dca9240bc056" Apr 16 00:50:23.072064 kubelet[2699]: E0416 00:50:23.070531 2699 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4f5m5" podUID="2c74cad1-a379-4d46-b91e-dca9240bc056" Apr 16 00:50:24.181210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount257320878.mount: Deactivated successfully. 
Apr 16 00:50:24.255686 containerd[1500]: time="2026-04-16T00:50:24.251623205Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 16 00:50:24.256508 containerd[1500]: time="2026-04-16T00:50:24.247249193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:50:24.258329 containerd[1500]: time="2026-04-16T00:50:24.258288715Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:50:24.259424 containerd[1500]: time="2026-04-16T00:50:24.259378871Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 12.01707829s" Apr 16 00:50:24.259505 containerd[1500]: time="2026-04-16T00:50:24.259433339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 16 00:50:24.262904 containerd[1500]: time="2026-04-16T00:50:24.262867621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:50:24.269662 containerd[1500]: time="2026-04-16T00:50:24.269566990Z" level=info msg="CreateContainer within sandbox \"224419e6995e5b6314c754f7884a35f1ca93e95ad6bc656eb96bba6e08762fee\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 16 00:50:24.344611 containerd[1500]: time="2026-04-16T00:50:24.344551590Z" level=info 
msg="CreateContainer within sandbox \"224419e6995e5b6314c754f7884a35f1ca93e95ad6bc656eb96bba6e08762fee\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"f5f5ed1585c8d929dea0bab9a27ea0b41773c5d2de97b98398b3c7e14e160f6b\"" Apr 16 00:50:24.349206 containerd[1500]: time="2026-04-16T00:50:24.345786502Z" level=info msg="StartContainer for \"f5f5ed1585c8d929dea0bab9a27ea0b41773c5d2de97b98398b3c7e14e160f6b\"" Apr 16 00:50:24.348150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1275633610.mount: Deactivated successfully. Apr 16 00:50:24.498367 systemd[1]: Started cri-containerd-f5f5ed1585c8d929dea0bab9a27ea0b41773c5d2de97b98398b3c7e14e160f6b.scope - libcontainer container f5f5ed1585c8d929dea0bab9a27ea0b41773c5d2de97b98398b3c7e14e160f6b. Apr 16 00:50:24.552949 containerd[1500]: time="2026-04-16T00:50:24.552846108Z" level=info msg="StartContainer for \"f5f5ed1585c8d929dea0bab9a27ea0b41773c5d2de97b98398b3c7e14e160f6b\" returns successfully" Apr 16 00:50:24.717418 systemd[1]: cri-containerd-f5f5ed1585c8d929dea0bab9a27ea0b41773c5d2de97b98398b3c7e14e160f6b.scope: Deactivated successfully. 
Apr 16 00:50:24.945758 containerd[1500]: time="2026-04-16T00:50:24.943450970Z" level=info msg="shim disconnected" id=f5f5ed1585c8d929dea0bab9a27ea0b41773c5d2de97b98398b3c7e14e160f6b namespace=k8s.io Apr 16 00:50:24.945758 containerd[1500]: time="2026-04-16T00:50:24.945542492Z" level=warning msg="cleaning up after shim disconnected" id=f5f5ed1585c8d929dea0bab9a27ea0b41773c5d2de97b98398b3c7e14e160f6b namespace=k8s.io Apr 16 00:50:24.945758 containerd[1500]: time="2026-04-16T00:50:24.945559652Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 00:50:25.072031 kubelet[2699]: E0416 00:50:25.070977 2699 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4f5m5" podUID="2c74cad1-a379-4d46-b91e-dca9240bc056" Apr 16 00:50:25.179070 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5f5ed1585c8d929dea0bab9a27ea0b41773c5d2de97b98398b3c7e14e160f6b-rootfs.mount: Deactivated successfully. 
Apr 16 00:50:25.296306 containerd[1500]: time="2026-04-16T00:50:25.295810266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 16 00:50:27.071510 kubelet[2699]: E0416 00:50:27.071396 2699 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4f5m5" podUID="2c74cad1-a379-4d46-b91e-dca9240bc056" Apr 16 00:50:29.071909 kubelet[2699]: E0416 00:50:29.071832 2699 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4f5m5" podUID="2c74cad1-a379-4d46-b91e-dca9240bc056" Apr 16 00:50:30.802986 containerd[1500]: time="2026-04-16T00:50:30.802158196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:50:30.805899 containerd[1500]: time="2026-04-16T00:50:30.805851137Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 16 00:50:30.807576 containerd[1500]: time="2026-04-16T00:50:30.807088053Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:50:30.810412 containerd[1500]: time="2026-04-16T00:50:30.810365701Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 00:50:30.811689 containerd[1500]: time="2026-04-16T00:50:30.811656095Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" 
with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 5.515785202s" Apr 16 00:50:30.811814 containerd[1500]: time="2026-04-16T00:50:30.811789761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 16 00:50:30.818895 containerd[1500]: time="2026-04-16T00:50:30.818856248Z" level=info msg="CreateContainer within sandbox \"224419e6995e5b6314c754f7884a35f1ca93e95ad6bc656eb96bba6e08762fee\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 16 00:50:30.840246 containerd[1500]: time="2026-04-16T00:50:30.840121899Z" level=info msg="CreateContainer within sandbox \"224419e6995e5b6314c754f7884a35f1ca93e95ad6bc656eb96bba6e08762fee\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"809b135a20388c2e2b50784ec410ee55519a0e5a1f343208bd66f4395d74c145\"" Apr 16 00:50:30.841599 containerd[1500]: time="2026-04-16T00:50:30.841538863Z" level=info msg="StartContainer for \"809b135a20388c2e2b50784ec410ee55519a0e5a1f343208bd66f4395d74c145\"" Apr 16 00:50:30.907292 systemd[1]: Started cri-containerd-809b135a20388c2e2b50784ec410ee55519a0e5a1f343208bd66f4395d74c145.scope - libcontainer container 809b135a20388c2e2b50784ec410ee55519a0e5a1f343208bd66f4395d74c145. 
Apr 16 00:50:31.016259 containerd[1500]: time="2026-04-16T00:50:31.016041042Z" level=info msg="StartContainer for \"809b135a20388c2e2b50784ec410ee55519a0e5a1f343208bd66f4395d74c145\" returns successfully" Apr 16 00:50:31.071303 kubelet[2699]: E0416 00:50:31.070554 2699 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4f5m5" podUID="2c74cad1-a379-4d46-b91e-dca9240bc056" Apr 16 00:50:32.122592 systemd[1]: cri-containerd-809b135a20388c2e2b50784ec410ee55519a0e5a1f343208bd66f4395d74c145.scope: Deactivated successfully. Apr 16 00:50:32.161988 containerd[1500]: time="2026-04-16T00:50:32.161257293Z" level=info msg="shim disconnected" id=809b135a20388c2e2b50784ec410ee55519a0e5a1f343208bd66f4395d74c145 namespace=k8s.io Apr 16 00:50:32.161976 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-809b135a20388c2e2b50784ec410ee55519a0e5a1f343208bd66f4395d74c145-rootfs.mount: Deactivated successfully. 
Apr 16 00:50:32.163167 containerd[1500]: time="2026-04-16T00:50:32.162826821Z" level=warning msg="cleaning up after shim disconnected" id=809b135a20388c2e2b50784ec410ee55519a0e5a1f343208bd66f4395d74c145 namespace=k8s.io Apr 16 00:50:32.163167 containerd[1500]: time="2026-04-16T00:50:32.162865178Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 16 00:50:32.221218 kubelet[2699]: I0416 00:50:32.221157 2699 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 16 00:50:32.444305 containerd[1500]: time="2026-04-16T00:50:32.444242503Z" level=info msg="CreateContainer within sandbox \"224419e6995e5b6314c754f7884a35f1ca93e95ad6bc656eb96bba6e08762fee\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 16 00:50:32.484617 systemd[1]: Created slice kubepods-besteffort-podbf0b134d_2aea_439f_880e_27db3db5afb6.slice - libcontainer container kubepods-besteffort-podbf0b134d_2aea_439f_880e_27db3db5afb6.slice. Apr 16 00:50:32.487864 containerd[1500]: time="2026-04-16T00:50:32.487769508Z" level=info msg="CreateContainer within sandbox \"224419e6995e5b6314c754f7884a35f1ca93e95ad6bc656eb96bba6e08762fee\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5012a8f8233c984653d10b1163832617e0f5535b57f3c1b2860bed23363e8ae4\"" Apr 16 00:50:32.500177 containerd[1500]: time="2026-04-16T00:50:32.500107863Z" level=info msg="StartContainer for \"5012a8f8233c984653d10b1163832617e0f5535b57f3c1b2860bed23363e8ae4\"" Apr 16 00:50:32.509534 systemd[1]: Created slice kubepods-burstable-pod2042a77a_7eb0_4e4b_b526_42c43f2e5758.slice - libcontainer container kubepods-burstable-pod2042a77a_7eb0_4e4b_b526_42c43f2e5758.slice. 
Apr 16 00:50:32.534715 kubelet[2699]: I0416 00:50:32.534503 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/485d8d0d-2cc8-413a-863b-a0c37bab4a01-calico-apiserver-certs\") pod \"calico-apiserver-77f78848ff-qvl6m\" (UID: \"485d8d0d-2cc8-413a-863b-a0c37bab4a01\") " pod="calico-system/calico-apiserver-77f78848ff-qvl6m" Apr 16 00:50:32.534715 kubelet[2699]: I0416 00:50:32.534625 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdf4c7a4-7fde-4e7a-a0f4-b19bb5b42f73-config-volume\") pod \"coredns-66bc5c9577-92s4s\" (UID: \"fdf4c7a4-7fde-4e7a-a0f4-b19bb5b42f73\") " pod="kube-system/coredns-66bc5c9577-92s4s" Apr 16 00:50:32.534715 kubelet[2699]: I0416 00:50:32.534708 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b0b987cc-2500-4310-a6ca-9cb5017fbcca-calico-apiserver-certs\") pod \"calico-apiserver-77f78848ff-bqnrn\" (UID: \"b0b987cc-2500-4310-a6ca-9cb5017fbcca\") " pod="calico-system/calico-apiserver-77f78848ff-bqnrn" Apr 16 00:50:32.535862 kubelet[2699]: I0416 00:50:32.534993 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a75fef46-43fd-43b1-8d18-836eda8a805f-whisker-ca-bundle\") pod \"whisker-54d9b9dc68-q4rc6\" (UID: \"a75fef46-43fd-43b1-8d18-836eda8a805f\") " pod="calico-system/whisker-54d9b9dc68-q4rc6" Apr 16 00:50:32.535862 kubelet[2699]: I0416 00:50:32.535029 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85zpc\" (UniqueName: \"kubernetes.io/projected/b0b987cc-2500-4310-a6ca-9cb5017fbcca-kube-api-access-85zpc\") pod \"calico-apiserver-77f78848ff-bqnrn\" 
(UID: \"b0b987cc-2500-4310-a6ca-9cb5017fbcca\") " pod="calico-system/calico-apiserver-77f78848ff-bqnrn" Apr 16 00:50:32.535862 kubelet[2699]: I0416 00:50:32.535293 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e73dc73-5c52-427a-852c-44daf423421f-tigera-ca-bundle\") pod \"calico-kube-controllers-7bbddd6b8d-wmztx\" (UID: \"9e73dc73-5c52-427a-852c-44daf423421f\") " pod="calico-system/calico-kube-controllers-7bbddd6b8d-wmztx" Apr 16 00:50:32.535862 kubelet[2699]: I0416 00:50:32.535330 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhl4f\" (UniqueName: \"kubernetes.io/projected/9e73dc73-5c52-427a-852c-44daf423421f-kube-api-access-nhl4f\") pod \"calico-kube-controllers-7bbddd6b8d-wmztx\" (UID: \"9e73dc73-5c52-427a-852c-44daf423421f\") " pod="calico-system/calico-kube-controllers-7bbddd6b8d-wmztx" Apr 16 00:50:32.535862 kubelet[2699]: I0416 00:50:32.535361 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf0b134d-2aea-439f-880e-27db3db5afb6-config\") pod \"goldmane-cccfbd5cf-tzjcc\" (UID: \"bf0b134d-2aea-439f-880e-27db3db5afb6\") " pod="calico-system/goldmane-cccfbd5cf-tzjcc" Apr 16 00:50:32.535493 systemd[1]: Created slice kubepods-burstable-podfdf4c7a4_7fde_4e7a_a0f4_b19bb5b42f73.slice - libcontainer container kubepods-burstable-podfdf4c7a4_7fde_4e7a_a0f4_b19bb5b42f73.slice. 
Apr 16 00:50:32.536761 kubelet[2699]: I0416 00:50:32.536507 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgqpc\" (UniqueName: \"kubernetes.io/projected/bf0b134d-2aea-439f-880e-27db3db5afb6-kube-api-access-jgqpc\") pod \"goldmane-cccfbd5cf-tzjcc\" (UID: \"bf0b134d-2aea-439f-880e-27db3db5afb6\") " pod="calico-system/goldmane-cccfbd5cf-tzjcc" Apr 16 00:50:32.536761 kubelet[2699]: I0416 00:50:32.536583 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a75fef46-43fd-43b1-8d18-836eda8a805f-nginx-config\") pod \"whisker-54d9b9dc68-q4rc6\" (UID: \"a75fef46-43fd-43b1-8d18-836eda8a805f\") " pod="calico-system/whisker-54d9b9dc68-q4rc6" Apr 16 00:50:32.536761 kubelet[2699]: I0416 00:50:32.536616 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a75fef46-43fd-43b1-8d18-836eda8a805f-whisker-backend-key-pair\") pod \"whisker-54d9b9dc68-q4rc6\" (UID: \"a75fef46-43fd-43b1-8d18-836eda8a805f\") " pod="calico-system/whisker-54d9b9dc68-q4rc6" Apr 16 00:50:32.536761 kubelet[2699]: I0416 00:50:32.536656 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lv2j\" (UniqueName: \"kubernetes.io/projected/2042a77a-7eb0-4e4b-b526-42c43f2e5758-kube-api-access-6lv2j\") pod \"coredns-66bc5c9577-qbpkz\" (UID: \"2042a77a-7eb0-4e4b-b526-42c43f2e5758\") " pod="kube-system/coredns-66bc5c9577-qbpkz" Apr 16 00:50:32.536761 kubelet[2699]: I0416 00:50:32.536685 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6246t\" (UniqueName: \"kubernetes.io/projected/fdf4c7a4-7fde-4e7a-a0f4-b19bb5b42f73-kube-api-access-6246t\") pod \"coredns-66bc5c9577-92s4s\" (UID: 
\"fdf4c7a4-7fde-4e7a-a0f4-b19bb5b42f73\") " pod="kube-system/coredns-66bc5c9577-92s4s" Apr 16 00:50:32.537029 kubelet[2699]: I0416 00:50:32.536735 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp5ls\" (UniqueName: \"kubernetes.io/projected/a75fef46-43fd-43b1-8d18-836eda8a805f-kube-api-access-vp5ls\") pod \"whisker-54d9b9dc68-q4rc6\" (UID: \"a75fef46-43fd-43b1-8d18-836eda8a805f\") " pod="calico-system/whisker-54d9b9dc68-q4rc6" Apr 16 00:50:32.537029 kubelet[2699]: I0416 00:50:32.536802 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2042a77a-7eb0-4e4b-b526-42c43f2e5758-config-volume\") pod \"coredns-66bc5c9577-qbpkz\" (UID: \"2042a77a-7eb0-4e4b-b526-42c43f2e5758\") " pod="kube-system/coredns-66bc5c9577-qbpkz" Apr 16 00:50:32.537029 kubelet[2699]: I0416 00:50:32.536859 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8p89\" (UniqueName: \"kubernetes.io/projected/485d8d0d-2cc8-413a-863b-a0c37bab4a01-kube-api-access-g8p89\") pod \"calico-apiserver-77f78848ff-qvl6m\" (UID: \"485d8d0d-2cc8-413a-863b-a0c37bab4a01\") " pod="calico-system/calico-apiserver-77f78848ff-qvl6m" Apr 16 00:50:32.537029 kubelet[2699]: I0416 00:50:32.536924 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf0b134d-2aea-439f-880e-27db3db5afb6-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-tzjcc\" (UID: \"bf0b134d-2aea-439f-880e-27db3db5afb6\") " pod="calico-system/goldmane-cccfbd5cf-tzjcc" Apr 16 00:50:32.537029 kubelet[2699]: I0416 00:50:32.537012 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/bf0b134d-2aea-439f-880e-27db3db5afb6-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-tzjcc\" (UID: \"bf0b134d-2aea-439f-880e-27db3db5afb6\") " pod="calico-system/goldmane-cccfbd5cf-tzjcc" Apr 16 00:50:32.580409 systemd[1]: Created slice kubepods-besteffort-pod9e73dc73_5c52_427a_852c_44daf423421f.slice - libcontainer container kubepods-besteffort-pod9e73dc73_5c52_427a_852c_44daf423421f.slice. Apr 16 00:50:32.598215 systemd[1]: Started cri-containerd-5012a8f8233c984653d10b1163832617e0f5535b57f3c1b2860bed23363e8ae4.scope - libcontainer container 5012a8f8233c984653d10b1163832617e0f5535b57f3c1b2860bed23363e8ae4. Apr 16 00:50:32.599635 systemd[1]: Created slice kubepods-besteffort-poda75fef46_43fd_43b1_8d18_836eda8a805f.slice - libcontainer container kubepods-besteffort-poda75fef46_43fd_43b1_8d18_836eda8a805f.slice. Apr 16 00:50:32.614452 systemd[1]: Created slice kubepods-besteffort-pod485d8d0d_2cc8_413a_863b_a0c37bab4a01.slice - libcontainer container kubepods-besteffort-pod485d8d0d_2cc8_413a_863b_a0c37bab4a01.slice. Apr 16 00:50:32.630317 systemd[1]: Created slice kubepods-besteffort-podb0b987cc_2500_4310_a6ca_9cb5017fbcca.slice - libcontainer container kubepods-besteffort-podb0b987cc_2500_4310_a6ca_9cb5017fbcca.slice. 
Apr 16 00:50:32.729873 containerd[1500]: time="2026-04-16T00:50:32.729734218Z" level=info msg="StartContainer for \"5012a8f8233c984653d10b1163832617e0f5535b57f3c1b2860bed23363e8ae4\" returns successfully" Apr 16 00:50:32.807464 containerd[1500]: time="2026-04-16T00:50:32.806773023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-tzjcc,Uid:bf0b134d-2aea-439f-880e-27db3db5afb6,Namespace:calico-system,Attempt:0,}" Apr 16 00:50:32.831670 containerd[1500]: time="2026-04-16T00:50:32.830856130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qbpkz,Uid:2042a77a-7eb0-4e4b-b526-42c43f2e5758,Namespace:kube-system,Attempt:0,}" Apr 16 00:50:32.891022 containerd[1500]: time="2026-04-16T00:50:32.890923885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-92s4s,Uid:fdf4c7a4-7fde-4e7a-a0f4-b19bb5b42f73,Namespace:kube-system,Attempt:0,}" Apr 16 00:50:32.916429 containerd[1500]: time="2026-04-16T00:50:32.916367233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bbddd6b8d-wmztx,Uid:9e73dc73-5c52-427a-852c-44daf423421f,Namespace:calico-system,Attempt:0,}" Apr 16 00:50:32.922433 containerd[1500]: time="2026-04-16T00:50:32.922383473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54d9b9dc68-q4rc6,Uid:a75fef46-43fd-43b1-8d18-836eda8a805f,Namespace:calico-system,Attempt:0,}" Apr 16 00:50:32.929353 containerd[1500]: time="2026-04-16T00:50:32.929307748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77f78848ff-qvl6m,Uid:485d8d0d-2cc8-413a-863b-a0c37bab4a01,Namespace:calico-system,Attempt:0,}" Apr 16 00:50:32.938076 containerd[1500]: time="2026-04-16T00:50:32.938041238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77f78848ff-bqnrn,Uid:b0b987cc-2500-4310-a6ca-9cb5017fbcca,Namespace:calico-system,Attempt:0,}" Apr 16 00:50:33.101018 systemd[1]: Created slice 
kubepods-besteffort-pod2c74cad1_a379_4d46_b91e_dca9240bc056.slice - libcontainer container kubepods-besteffort-pod2c74cad1_a379_4d46_b91e_dca9240bc056.slice. Apr 16 00:50:33.116121 containerd[1500]: time="2026-04-16T00:50:33.115915569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4f5m5,Uid:2c74cad1-a379-4d46-b91e-dca9240bc056,Namespace:calico-system,Attempt:0,}" Apr 16 00:50:33.588968 kubelet[2699]: I0416 00:50:33.587716 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-l9qfj" podStartSLOduration=3.480483639 podStartE2EDuration="28.55224614s" podCreationTimestamp="2026-04-16 00:50:05 +0000 UTC" firstStartedPulling="2026-04-16 00:50:05.74174989 +0000 UTC m=+21.855606138" lastFinishedPulling="2026-04-16 00:50:30.813512394 +0000 UTC m=+46.927368639" observedRunningTime="2026-04-16 00:50:33.511996839 +0000 UTC m=+49.625853098" watchObservedRunningTime="2026-04-16 00:50:33.55224614 +0000 UTC m=+49.666102399" Apr 16 00:50:34.387083 containerd[1500]: 2026-04-16 00:50:33.827 [INFO][3790] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="98e15aad1fd884213348c76a6ce186e5b3c5fd370dc862d800b8523c47d99533" Apr 16 00:50:34.387083 containerd[1500]: 2026-04-16 00:50:33.831 [INFO][3790] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="98e15aad1fd884213348c76a6ce186e5b3c5fd370dc862d800b8523c47d99533" iface="eth0" netns="/var/run/netns/cni-13cfbb96-fc39-74fa-8050-34d0b2317e3f" Apr 16 00:50:34.387083 containerd[1500]: 2026-04-16 00:50:33.833 [INFO][3790] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="98e15aad1fd884213348c76a6ce186e5b3c5fd370dc862d800b8523c47d99533" iface="eth0" netns="/var/run/netns/cni-13cfbb96-fc39-74fa-8050-34d0b2317e3f" Apr 16 00:50:34.387083 containerd[1500]: 2026-04-16 00:50:33.839 [INFO][3790] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="98e15aad1fd884213348c76a6ce186e5b3c5fd370dc862d800b8523c47d99533" iface="eth0" netns="/var/run/netns/cni-13cfbb96-fc39-74fa-8050-34d0b2317e3f" Apr 16 00:50:34.387083 containerd[1500]: 2026-04-16 00:50:33.839 [INFO][3790] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="98e15aad1fd884213348c76a6ce186e5b3c5fd370dc862d800b8523c47d99533" Apr 16 00:50:34.387083 containerd[1500]: 2026-04-16 00:50:33.839 [INFO][3790] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="98e15aad1fd884213348c76a6ce186e5b3c5fd370dc862d800b8523c47d99533" Apr 16 00:50:34.387083 containerd[1500]: 2026-04-16 00:50:34.308 [INFO][3830] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="98e15aad1fd884213348c76a6ce186e5b3c5fd370dc862d800b8523c47d99533" HandleID="k8s-pod-network.98e15aad1fd884213348c76a6ce186e5b3c5fd370dc862d800b8523c47d99533" Workload="srv--57yav.gb1.brightbox.com-k8s-csi--node--driver--4f5m5-eth0" Apr 16 00:50:34.387083 containerd[1500]: 2026-04-16 00:50:34.310 [INFO][3830] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 00:50:34.387083 containerd[1500]: 2026-04-16 00:50:34.310 [INFO][3830] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 00:50:34.387083 containerd[1500]: 2026-04-16 00:50:34.333 [WARNING][3830] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="98e15aad1fd884213348c76a6ce186e5b3c5fd370dc862d800b8523c47d99533" HandleID="k8s-pod-network.98e15aad1fd884213348c76a6ce186e5b3c5fd370dc862d800b8523c47d99533" Workload="srv--57yav.gb1.brightbox.com-k8s-csi--node--driver--4f5m5-eth0" Apr 16 00:50:34.387083 containerd[1500]: 2026-04-16 00:50:34.333 [INFO][3830] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="98e15aad1fd884213348c76a6ce186e5b3c5fd370dc862d800b8523c47d99533" HandleID="k8s-pod-network.98e15aad1fd884213348c76a6ce186e5b3c5fd370dc862d800b8523c47d99533" Workload="srv--57yav.gb1.brightbox.com-k8s-csi--node--driver--4f5m5-eth0" Apr 16 00:50:34.387083 containerd[1500]: 2026-04-16 00:50:34.341 [INFO][3830] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 00:50:34.387083 containerd[1500]: 2026-04-16 00:50:34.377 [INFO][3790] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="98e15aad1fd884213348c76a6ce186e5b3c5fd370dc862d800b8523c47d99533" Apr 16 00:50:34.391651 containerd[1500]: 2026-04-16 00:50:34.126 [INFO][3805] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="01ed609e68cd68f646c507451a46c9688770f84b0ca2565dda7524b9ecac3c86" Apr 16 00:50:34.391651 containerd[1500]: 2026-04-16 00:50:34.130 [INFO][3805] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="01ed609e68cd68f646c507451a46c9688770f84b0ca2565dda7524b9ecac3c86" iface="eth0" netns="/var/run/netns/cni-db670342-635b-2b87-abda-3d027f6eceff" Apr 16 00:50:34.391651 containerd[1500]: 2026-04-16 00:50:34.130 [INFO][3805] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="01ed609e68cd68f646c507451a46c9688770f84b0ca2565dda7524b9ecac3c86" iface="eth0" netns="/var/run/netns/cni-db670342-635b-2b87-abda-3d027f6eceff" Apr 16 00:50:34.391651 containerd[1500]: 2026-04-16 00:50:34.133 [INFO][3805] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="01ed609e68cd68f646c507451a46c9688770f84b0ca2565dda7524b9ecac3c86" iface="eth0" netns="/var/run/netns/cni-db670342-635b-2b87-abda-3d027f6eceff" Apr 16 00:50:34.391651 containerd[1500]: 2026-04-16 00:50:34.133 [INFO][3805] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="01ed609e68cd68f646c507451a46c9688770f84b0ca2565dda7524b9ecac3c86" Apr 16 00:50:34.391651 containerd[1500]: 2026-04-16 00:50:34.133 [INFO][3805] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="01ed609e68cd68f646c507451a46c9688770f84b0ca2565dda7524b9ecac3c86" Apr 16 00:50:34.391651 containerd[1500]: 2026-04-16 00:50:34.308 [INFO][3884] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="01ed609e68cd68f646c507451a46c9688770f84b0ca2565dda7524b9ecac3c86" HandleID="k8s-pod-network.01ed609e68cd68f646c507451a46c9688770f84b0ca2565dda7524b9ecac3c86" Workload="srv--57yav.gb1.brightbox.com-k8s-whisker--54d9b9dc68--q4rc6-eth0" Apr 16 00:50:34.391651 containerd[1500]: 2026-04-16 00:50:34.310 [INFO][3884] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 00:50:34.391651 containerd[1500]: 2026-04-16 00:50:34.346 [INFO][3884] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 00:50:34.391651 containerd[1500]: 2026-04-16 00:50:34.364 [WARNING][3884] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01ed609e68cd68f646c507451a46c9688770f84b0ca2565dda7524b9ecac3c86" HandleID="k8s-pod-network.01ed609e68cd68f646c507451a46c9688770f84b0ca2565dda7524b9ecac3c86" Workload="srv--57yav.gb1.brightbox.com-k8s-whisker--54d9b9dc68--q4rc6-eth0" Apr 16 00:50:34.391651 containerd[1500]: 2026-04-16 00:50:34.364 [INFO][3884] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="01ed609e68cd68f646c507451a46c9688770f84b0ca2565dda7524b9ecac3c86" HandleID="k8s-pod-network.01ed609e68cd68f646c507451a46c9688770f84b0ca2565dda7524b9ecac3c86" Workload="srv--57yav.gb1.brightbox.com-k8s-whisker--54d9b9dc68--q4rc6-eth0" Apr 16 00:50:34.391651 containerd[1500]: 2026-04-16 00:50:34.369 [INFO][3884] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 00:50:34.391651 containerd[1500]: 2026-04-16 00:50:34.375 [INFO][3805] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="01ed609e68cd68f646c507451a46c9688770f84b0ca2565dda7524b9ecac3c86" Apr 16 00:50:34.400664 systemd[1]: run-netns-cni\x2ddb670342\x2d635b\x2d2b87\x2dabda\x2d3d027f6eceff.mount: Deactivated successfully. Apr 16 00:50:34.410800 systemd[1]: run-netns-cni\x2d13cfbb96\x2dfc39\x2d74fa\x2d8050\x2d34d0b2317e3f.mount: Deactivated successfully. Apr 16 00:50:34.412689 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-98e15aad1fd884213348c76a6ce186e5b3c5fd370dc862d800b8523c47d99533-shm.mount: Deactivated successfully. Apr 16 00:50:34.412836 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-01ed609e68cd68f646c507451a46c9688770f84b0ca2565dda7524b9ecac3c86-shm.mount: Deactivated successfully. 
Apr 16 00:50:34.424767 containerd[1500]: time="2026-04-16T00:50:34.424641543Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4f5m5,Uid:2c74cad1-a379-4d46-b91e-dca9240bc056,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"98e15aad1fd884213348c76a6ce186e5b3c5fd370dc862d800b8523c47d99533\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:50:34.455074 containerd[1500]: time="2026-04-16T00:50:34.454987634Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54d9b9dc68-q4rc6,Uid:a75fef46-43fd-43b1-8d18-836eda8a805f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"01ed609e68cd68f646c507451a46c9688770f84b0ca2565dda7524b9ecac3c86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:50:34.490597 containerd[1500]: 2026-04-16 00:50:33.913 [INFO][3794] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7ea532b45adf766b58397f81b3f522548aa0443adff4d8b5c552fab50b2ff1e3" Apr 16 00:50:34.490597 containerd[1500]: 2026-04-16 00:50:33.918 [INFO][3794] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7ea532b45adf766b58397f81b3f522548aa0443adff4d8b5c552fab50b2ff1e3" iface="eth0" netns="/var/run/netns/cni-3a8d165e-ece9-8436-ea73-820a1e80559d" Apr 16 00:50:34.490597 containerd[1500]: 2026-04-16 00:50:33.918 [INFO][3794] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="7ea532b45adf766b58397f81b3f522548aa0443adff4d8b5c552fab50b2ff1e3" iface="eth0" netns="/var/run/netns/cni-3a8d165e-ece9-8436-ea73-820a1e80559d" Apr 16 00:50:34.490597 containerd[1500]: 2026-04-16 00:50:33.920 [INFO][3794] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7ea532b45adf766b58397f81b3f522548aa0443adff4d8b5c552fab50b2ff1e3" iface="eth0" netns="/var/run/netns/cni-3a8d165e-ece9-8436-ea73-820a1e80559d" Apr 16 00:50:34.490597 containerd[1500]: 2026-04-16 00:50:33.920 [INFO][3794] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7ea532b45adf766b58397f81b3f522548aa0443adff4d8b5c552fab50b2ff1e3" Apr 16 00:50:34.490597 containerd[1500]: 2026-04-16 00:50:33.921 [INFO][3794] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7ea532b45adf766b58397f81b3f522548aa0443adff4d8b5c552fab50b2ff1e3" Apr 16 00:50:34.490597 containerd[1500]: 2026-04-16 00:50:34.323 [INFO][3844] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7ea532b45adf766b58397f81b3f522548aa0443adff4d8b5c552fab50b2ff1e3" HandleID="k8s-pod-network.7ea532b45adf766b58397f81b3f522548aa0443adff4d8b5c552fab50b2ff1e3" Workload="srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--bqnrn-eth0" Apr 16 00:50:34.490597 containerd[1500]: 2026-04-16 00:50:34.324 [INFO][3844] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 00:50:34.490597 containerd[1500]: 2026-04-16 00:50:34.369 [INFO][3844] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 00:50:34.490597 containerd[1500]: 2026-04-16 00:50:34.438 [WARNING][3844] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7ea532b45adf766b58397f81b3f522548aa0443adff4d8b5c552fab50b2ff1e3" HandleID="k8s-pod-network.7ea532b45adf766b58397f81b3f522548aa0443adff4d8b5c552fab50b2ff1e3" Workload="srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--bqnrn-eth0" Apr 16 00:50:34.490597 containerd[1500]: 2026-04-16 00:50:34.438 [INFO][3844] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7ea532b45adf766b58397f81b3f522548aa0443adff4d8b5c552fab50b2ff1e3" HandleID="k8s-pod-network.7ea532b45adf766b58397f81b3f522548aa0443adff4d8b5c552fab50b2ff1e3" Workload="srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--bqnrn-eth0" Apr 16 00:50:34.490597 containerd[1500]: 2026-04-16 00:50:34.445 [INFO][3844] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 00:50:34.490597 containerd[1500]: 2026-04-16 00:50:34.474 [INFO][3794] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7ea532b45adf766b58397f81b3f522548aa0443adff4d8b5c552fab50b2ff1e3" Apr 16 00:50:34.496218 systemd[1]: run-netns-cni\x2d3a8d165e\x2dece9\x2d8436\x2dea73\x2d820a1e80559d.mount: Deactivated successfully. Apr 16 00:50:34.496385 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7ea532b45adf766b58397f81b3f522548aa0443adff4d8b5c552fab50b2ff1e3-shm.mount: Deactivated successfully. 
Apr 16 00:50:34.512209 containerd[1500]: time="2026-04-16T00:50:34.512147843Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77f78848ff-bqnrn,Uid:b0b987cc-2500-4310-a6ca-9cb5017fbcca,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7ea532b45adf766b58397f81b3f522548aa0443adff4d8b5c552fab50b2ff1e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:50:34.532544 containerd[1500]: 2026-04-16 00:50:33.822 [INFO][3804] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="23b46172055242436d41058d94121908a70702caaddbbe704433bceda00e7cee" Apr 16 00:50:34.532544 containerd[1500]: 2026-04-16 00:50:33.828 [INFO][3804] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="23b46172055242436d41058d94121908a70702caaddbbe704433bceda00e7cee" iface="eth0" netns="/var/run/netns/cni-8186a09b-e723-ae15-b9ae-48e1e34ba930" Apr 16 00:50:34.532544 containerd[1500]: 2026-04-16 00:50:33.830 [INFO][3804] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="23b46172055242436d41058d94121908a70702caaddbbe704433bceda00e7cee" iface="eth0" netns="/var/run/netns/cni-8186a09b-e723-ae15-b9ae-48e1e34ba930" Apr 16 00:50:34.532544 containerd[1500]: 2026-04-16 00:50:33.839 [INFO][3804] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="23b46172055242436d41058d94121908a70702caaddbbe704433bceda00e7cee" iface="eth0" netns="/var/run/netns/cni-8186a09b-e723-ae15-b9ae-48e1e34ba930" Apr 16 00:50:34.532544 containerd[1500]: 2026-04-16 00:50:33.839 [INFO][3804] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="23b46172055242436d41058d94121908a70702caaddbbe704433bceda00e7cee" Apr 16 00:50:34.532544 containerd[1500]: 2026-04-16 00:50:33.839 [INFO][3804] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="23b46172055242436d41058d94121908a70702caaddbbe704433bceda00e7cee" Apr 16 00:50:34.532544 containerd[1500]: 2026-04-16 00:50:34.334 [INFO][3829] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="23b46172055242436d41058d94121908a70702caaddbbe704433bceda00e7cee" HandleID="k8s-pod-network.23b46172055242436d41058d94121908a70702caaddbbe704433bceda00e7cee" Workload="srv--57yav.gb1.brightbox.com-k8s-goldmane--cccfbd5cf--tzjcc-eth0" Apr 16 00:50:34.532544 containerd[1500]: 2026-04-16 00:50:34.335 [INFO][3829] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 00:50:34.532544 containerd[1500]: 2026-04-16 00:50:34.446 [INFO][3829] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 00:50:34.532544 containerd[1500]: 2026-04-16 00:50:34.489 [WARNING][3829] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23b46172055242436d41058d94121908a70702caaddbbe704433bceda00e7cee" HandleID="k8s-pod-network.23b46172055242436d41058d94121908a70702caaddbbe704433bceda00e7cee" Workload="srv--57yav.gb1.brightbox.com-k8s-goldmane--cccfbd5cf--tzjcc-eth0" Apr 16 00:50:34.532544 containerd[1500]: 2026-04-16 00:50:34.489 [INFO][3829] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="23b46172055242436d41058d94121908a70702caaddbbe704433bceda00e7cee" HandleID="k8s-pod-network.23b46172055242436d41058d94121908a70702caaddbbe704433bceda00e7cee" Workload="srv--57yav.gb1.brightbox.com-k8s-goldmane--cccfbd5cf--tzjcc-eth0" Apr 16 00:50:34.532544 containerd[1500]: 2026-04-16 00:50:34.508 [INFO][3829] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 00:50:34.532544 containerd[1500]: 2026-04-16 00:50:34.520 [INFO][3804] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="23b46172055242436d41058d94121908a70702caaddbbe704433bceda00e7cee" Apr 16 00:50:34.540304 systemd[1]: run-netns-cni\x2d8186a09b\x2de723\x2dae15\x2db9ae\x2d48e1e34ba930.mount: Deactivated successfully. Apr 16 00:50:34.540855 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-23b46172055242436d41058d94121908a70702caaddbbe704433bceda00e7cee-shm.mount: Deactivated successfully. 
Apr 16 00:50:34.543663 kubelet[2699]: E0416 00:50:34.543110 2699 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98e15aad1fd884213348c76a6ce186e5b3c5fd370dc862d800b8523c47d99533\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:50:34.549452 kubelet[2699]: E0416 00:50:34.546518 2699 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98e15aad1fd884213348c76a6ce186e5b3c5fd370dc862d800b8523c47d99533\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4f5m5" Apr 16 00:50:34.549452 kubelet[2699]: E0416 00:50:34.548873 2699 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98e15aad1fd884213348c76a6ce186e5b3c5fd370dc862d800b8523c47d99533\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4f5m5" Apr 16 00:50:34.565702 containerd[1500]: time="2026-04-16T00:50:34.562567600Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-tzjcc,Uid:bf0b134d-2aea-439f-880e-27db3db5afb6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"23b46172055242436d41058d94121908a70702caaddbbe704433bceda00e7cee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:50:34.565893 
kubelet[2699]: E0416 00:50:34.563364 2699 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ea532b45adf766b58397f81b3f522548aa0443adff4d8b5c552fab50b2ff1e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:50:34.565893 kubelet[2699]: E0416 00:50:34.563467 2699 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ea532b45adf766b58397f81b3f522548aa0443adff4d8b5c552fab50b2ff1e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-77f78848ff-bqnrn" Apr 16 00:50:34.565893 kubelet[2699]: E0416 00:50:34.563496 2699 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ea532b45adf766b58397f81b3f522548aa0443adff4d8b5c552fab50b2ff1e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-77f78848ff-bqnrn" Apr 16 00:50:34.566105 kubelet[2699]: E0416 00:50:34.563614 2699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77f78848ff-bqnrn_calico-system(b0b987cc-2500-4310-a6ca-9cb5017fbcca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77f78848ff-bqnrn_calico-system(b0b987cc-2500-4310-a6ca-9cb5017fbcca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ea532b45adf766b58397f81b3f522548aa0443adff4d8b5c552fab50b2ff1e3\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-77f78848ff-bqnrn" podUID="b0b987cc-2500-4310-a6ca-9cb5017fbcca" Apr 16 00:50:34.567807 kubelet[2699]: E0416 00:50:34.531663 2699 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01ed609e68cd68f646c507451a46c9688770f84b0ca2565dda7524b9ecac3c86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:50:34.567967 kubelet[2699]: E0416 00:50:34.567939 2699 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01ed609e68cd68f646c507451a46c9688770f84b0ca2565dda7524b9ecac3c86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54d9b9dc68-q4rc6" Apr 16 00:50:34.568135 kubelet[2699]: E0416 00:50:34.568088 2699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4f5m5_calico-system(2c74cad1-a379-4d46-b91e-dca9240bc056)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4f5m5_calico-system(2c74cad1-a379-4d46-b91e-dca9240bc056)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98e15aad1fd884213348c76a6ce186e5b3c5fd370dc862d800b8523c47d99533\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4f5m5" podUID="2c74cad1-a379-4d46-b91e-dca9240bc056" Apr 16 
00:50:34.568498 kubelet[2699]: E0416 00:50:34.568378 2699 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23b46172055242436d41058d94121908a70702caaddbbe704433bceda00e7cee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:50:34.570367 kubelet[2699]: E0416 00:50:34.569919 2699 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23b46172055242436d41058d94121908a70702caaddbbe704433bceda00e7cee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-tzjcc" Apr 16 00:50:34.570367 kubelet[2699]: E0416 00:50:34.570008 2699 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23b46172055242436d41058d94121908a70702caaddbbe704433bceda00e7cee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-tzjcc" Apr 16 00:50:34.570367 kubelet[2699]: E0416 00:50:34.570084 2699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-tzjcc_calico-system(bf0b134d-2aea-439f-880e-27db3db5afb6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-tzjcc_calico-system(bf0b134d-2aea-439f-880e-27db3db5afb6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23b46172055242436d41058d94121908a70702caaddbbe704433bceda00e7cee\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-tzjcc" podUID="bf0b134d-2aea-439f-880e-27db3db5afb6" Apr 16 00:50:34.577438 containerd[1500]: 2026-04-16 00:50:34.077 [INFO][3754] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3c5e797fd91d8e6e8100ef01d56821469b2ec13b8a52896798e3287ab0880304" Apr 16 00:50:34.577438 containerd[1500]: 2026-04-16 00:50:34.077 [INFO][3754] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3c5e797fd91d8e6e8100ef01d56821469b2ec13b8a52896798e3287ab0880304" iface="eth0" netns="/var/run/netns/cni-2caf030c-c2fb-77dc-695d-97323908424d" Apr 16 00:50:34.577438 containerd[1500]: 2026-04-16 00:50:34.078 [INFO][3754] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3c5e797fd91d8e6e8100ef01d56821469b2ec13b8a52896798e3287ab0880304" iface="eth0" netns="/var/run/netns/cni-2caf030c-c2fb-77dc-695d-97323908424d" Apr 16 00:50:34.577438 containerd[1500]: 2026-04-16 00:50:34.089 [INFO][3754] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3c5e797fd91d8e6e8100ef01d56821469b2ec13b8a52896798e3287ab0880304" iface="eth0" netns="/var/run/netns/cni-2caf030c-c2fb-77dc-695d-97323908424d" Apr 16 00:50:34.577438 containerd[1500]: 2026-04-16 00:50:34.089 [INFO][3754] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3c5e797fd91d8e6e8100ef01d56821469b2ec13b8a52896798e3287ab0880304" Apr 16 00:50:34.577438 containerd[1500]: 2026-04-16 00:50:34.089 [INFO][3754] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3c5e797fd91d8e6e8100ef01d56821469b2ec13b8a52896798e3287ab0880304" Apr 16 00:50:34.577438 containerd[1500]: 2026-04-16 00:50:34.366 [INFO][3876] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3c5e797fd91d8e6e8100ef01d56821469b2ec13b8a52896798e3287ab0880304" HandleID="k8s-pod-network.3c5e797fd91d8e6e8100ef01d56821469b2ec13b8a52896798e3287ab0880304" Workload="srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--qbpkz-eth0" Apr 16 00:50:34.577438 containerd[1500]: 2026-04-16 00:50:34.366 [INFO][3876] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 00:50:34.577438 containerd[1500]: 2026-04-16 00:50:34.509 [INFO][3876] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 00:50:34.577438 containerd[1500]: 2026-04-16 00:50:34.555 [WARNING][3876] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c5e797fd91d8e6e8100ef01d56821469b2ec13b8a52896798e3287ab0880304" HandleID="k8s-pod-network.3c5e797fd91d8e6e8100ef01d56821469b2ec13b8a52896798e3287ab0880304" Workload="srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--qbpkz-eth0" Apr 16 00:50:34.577438 containerd[1500]: 2026-04-16 00:50:34.555 [INFO][3876] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3c5e797fd91d8e6e8100ef01d56821469b2ec13b8a52896798e3287ab0880304" HandleID="k8s-pod-network.3c5e797fd91d8e6e8100ef01d56821469b2ec13b8a52896798e3287ab0880304" Workload="srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--qbpkz-eth0" Apr 16 00:50:34.577438 containerd[1500]: 2026-04-16 00:50:34.558 [INFO][3876] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 00:50:34.577438 containerd[1500]: 2026-04-16 00:50:34.572 [INFO][3754] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3c5e797fd91d8e6e8100ef01d56821469b2ec13b8a52896798e3287ab0880304" Apr 16 00:50:34.605958 kubelet[2699]: I0416 00:50:34.603780 2699 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 00:50:34.616134 containerd[1500]: time="2026-04-16T00:50:34.613689844Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qbpkz,Uid:2042a77a-7eb0-4e4b-b526-42c43f2e5758,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3c5e797fd91d8e6e8100ef01d56821469b2ec13b8a52896798e3287ab0880304\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:50:34.616266 kubelet[2699]: E0416 00:50:34.614157 2699 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c5e797fd91d8e6e8100ef01d56821469b2ec13b8a52896798e3287ab0880304\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:50:34.616266 kubelet[2699]: E0416 00:50:34.614257 2699 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c5e797fd91d8e6e8100ef01d56821469b2ec13b8a52896798e3287ab0880304\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-qbpkz" Apr 16 00:50:34.616266 kubelet[2699]: E0416 00:50:34.614307 2699 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c5e797fd91d8e6e8100ef01d56821469b2ec13b8a52896798e3287ab0880304\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-qbpkz" Apr 16 00:50:34.637242 containerd[1500]: 2026-04-16 00:50:34.149 [INFO][3813] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e4a0024d8984ae69fc9b639afbf7c528fd243b7bc3697dc0d4c7203c7bd45fdc" Apr 16 00:50:34.637242 containerd[1500]: 2026-04-16 00:50:34.150 [INFO][3813] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e4a0024d8984ae69fc9b639afbf7c528fd243b7bc3697dc0d4c7203c7bd45fdc" iface="eth0" netns="/var/run/netns/cni-6923135e-c8f2-6e67-2efc-55f4d57752fd" Apr 16 00:50:34.637242 containerd[1500]: 2026-04-16 00:50:34.152 [INFO][3813] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="e4a0024d8984ae69fc9b639afbf7c528fd243b7bc3697dc0d4c7203c7bd45fdc" iface="eth0" netns="/var/run/netns/cni-6923135e-c8f2-6e67-2efc-55f4d57752fd" Apr 16 00:50:34.637242 containerd[1500]: 2026-04-16 00:50:34.152 [INFO][3813] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e4a0024d8984ae69fc9b639afbf7c528fd243b7bc3697dc0d4c7203c7bd45fdc" iface="eth0" netns="/var/run/netns/cni-6923135e-c8f2-6e67-2efc-55f4d57752fd" Apr 16 00:50:34.637242 containerd[1500]: 2026-04-16 00:50:34.152 [INFO][3813] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e4a0024d8984ae69fc9b639afbf7c528fd243b7bc3697dc0d4c7203c7bd45fdc" Apr 16 00:50:34.637242 containerd[1500]: 2026-04-16 00:50:34.152 [INFO][3813] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e4a0024d8984ae69fc9b639afbf7c528fd243b7bc3697dc0d4c7203c7bd45fdc" Apr 16 00:50:34.637242 containerd[1500]: 2026-04-16 00:50:34.465 [INFO][3892] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e4a0024d8984ae69fc9b639afbf7c528fd243b7bc3697dc0d4c7203c7bd45fdc" HandleID="k8s-pod-network.e4a0024d8984ae69fc9b639afbf7c528fd243b7bc3697dc0d4c7203c7bd45fdc" Workload="srv--57yav.gb1.brightbox.com-k8s-calico--kube--controllers--7bbddd6b8d--wmztx-eth0" Apr 16 00:50:34.637242 containerd[1500]: 2026-04-16 00:50:34.466 [INFO][3892] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 00:50:34.637242 containerd[1500]: 2026-04-16 00:50:34.558 [INFO][3892] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 00:50:34.637242 containerd[1500]: 2026-04-16 00:50:34.603 [WARNING][3892] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4a0024d8984ae69fc9b639afbf7c528fd243b7bc3697dc0d4c7203c7bd45fdc" HandleID="k8s-pod-network.e4a0024d8984ae69fc9b639afbf7c528fd243b7bc3697dc0d4c7203c7bd45fdc" Workload="srv--57yav.gb1.brightbox.com-k8s-calico--kube--controllers--7bbddd6b8d--wmztx-eth0" Apr 16 00:50:34.637242 containerd[1500]: 2026-04-16 00:50:34.603 [INFO][3892] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e4a0024d8984ae69fc9b639afbf7c528fd243b7bc3697dc0d4c7203c7bd45fdc" HandleID="k8s-pod-network.e4a0024d8984ae69fc9b639afbf7c528fd243b7bc3697dc0d4c7203c7bd45fdc" Workload="srv--57yav.gb1.brightbox.com-k8s-calico--kube--controllers--7bbddd6b8d--wmztx-eth0" Apr 16 00:50:34.637242 containerd[1500]: 2026-04-16 00:50:34.616 [INFO][3892] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 00:50:34.637242 containerd[1500]: 2026-04-16 00:50:34.627 [INFO][3813] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e4a0024d8984ae69fc9b639afbf7c528fd243b7bc3697dc0d4c7203c7bd45fdc" Apr 16 00:50:34.643783 kubelet[2699]: E0416 00:50:34.642217 2699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-qbpkz_kube-system(2042a77a-7eb0-4e4b-b526-42c43f2e5758)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-qbpkz_kube-system(2042a77a-7eb0-4e4b-b526-42c43f2e5758)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c5e797fd91d8e6e8100ef01d56821469b2ec13b8a52896798e3287ab0880304\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-qbpkz" podUID="2042a77a-7eb0-4e4b-b526-42c43f2e5758" Apr 16 00:50:34.648800 containerd[1500]: time="2026-04-16T00:50:34.648748859Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-7bbddd6b8d-wmztx,Uid:9e73dc73-5c52-427a-852c-44daf423421f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e4a0024d8984ae69fc9b639afbf7c528fd243b7bc3697dc0d4c7203c7bd45fdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:50:34.650939 kubelet[2699]: E0416 00:50:34.649357 2699 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4a0024d8984ae69fc9b639afbf7c528fd243b7bc3697dc0d4c7203c7bd45fdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:50:34.650939 kubelet[2699]: E0416 00:50:34.649430 2699 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4a0024d8984ae69fc9b639afbf7c528fd243b7bc3697dc0d4c7203c7bd45fdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bbddd6b8d-wmztx" Apr 16 00:50:34.650939 kubelet[2699]: E0416 00:50:34.649480 2699 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4a0024d8984ae69fc9b639afbf7c528fd243b7bc3697dc0d4c7203c7bd45fdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bbddd6b8d-wmztx" Apr 16 00:50:34.651137 kubelet[2699]: E0416 00:50:34.649597 2699 pod_workers.go:1324] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7bbddd6b8d-wmztx_calico-system(9e73dc73-5c52-427a-852c-44daf423421f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7bbddd6b8d-wmztx_calico-system(9e73dc73-5c52-427a-852c-44daf423421f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4a0024d8984ae69fc9b639afbf7c528fd243b7bc3697dc0d4c7203c7bd45fdc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bbddd6b8d-wmztx" podUID="9e73dc73-5c52-427a-852c-44daf423421f" Apr 16 00:50:34.689537 containerd[1500]: 2026-04-16 00:50:34.016 [INFO][3753] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="666297f66422a61396958b75c040d768d1c1e1972e378cf710b06ca2d35b283c" Apr 16 00:50:34.689537 containerd[1500]: 2026-04-16 00:50:34.025 [INFO][3753] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="666297f66422a61396958b75c040d768d1c1e1972e378cf710b06ca2d35b283c" iface="eth0" netns="/var/run/netns/cni-d157fb83-76eb-1bc1-0e69-1b9553e7af28" Apr 16 00:50:34.689537 containerd[1500]: 2026-04-16 00:50:34.027 [INFO][3753] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="666297f66422a61396958b75c040d768d1c1e1972e378cf710b06ca2d35b283c" iface="eth0" netns="/var/run/netns/cni-d157fb83-76eb-1bc1-0e69-1b9553e7af28" Apr 16 00:50:34.689537 containerd[1500]: 2026-04-16 00:50:34.053 [INFO][3753] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="666297f66422a61396958b75c040d768d1c1e1972e378cf710b06ca2d35b283c" iface="eth0" netns="/var/run/netns/cni-d157fb83-76eb-1bc1-0e69-1b9553e7af28" Apr 16 00:50:34.689537 containerd[1500]: 2026-04-16 00:50:34.053 [INFO][3753] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="666297f66422a61396958b75c040d768d1c1e1972e378cf710b06ca2d35b283c" Apr 16 00:50:34.689537 containerd[1500]: 2026-04-16 00:50:34.053 [INFO][3753] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="666297f66422a61396958b75c040d768d1c1e1972e378cf710b06ca2d35b283c" Apr 16 00:50:34.689537 containerd[1500]: 2026-04-16 00:50:34.496 [INFO][3868] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="666297f66422a61396958b75c040d768d1c1e1972e378cf710b06ca2d35b283c" HandleID="k8s-pod-network.666297f66422a61396958b75c040d768d1c1e1972e378cf710b06ca2d35b283c" Workload="srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--92s4s-eth0" Apr 16 00:50:34.689537 containerd[1500]: 2026-04-16 00:50:34.518 [INFO][3868] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 00:50:34.689537 containerd[1500]: 2026-04-16 00:50:34.617 [INFO][3868] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 00:50:34.689537 containerd[1500]: 2026-04-16 00:50:34.660 [WARNING][3868] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="666297f66422a61396958b75c040d768d1c1e1972e378cf710b06ca2d35b283c" HandleID="k8s-pod-network.666297f66422a61396958b75c040d768d1c1e1972e378cf710b06ca2d35b283c" Workload="srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--92s4s-eth0" Apr 16 00:50:34.689537 containerd[1500]: 2026-04-16 00:50:34.660 [INFO][3868] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="666297f66422a61396958b75c040d768d1c1e1972e378cf710b06ca2d35b283c" HandleID="k8s-pod-network.666297f66422a61396958b75c040d768d1c1e1972e378cf710b06ca2d35b283c" Workload="srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--92s4s-eth0" Apr 16 00:50:34.689537 containerd[1500]: 2026-04-16 00:50:34.669 [INFO][3868] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 00:50:34.689537 containerd[1500]: 2026-04-16 00:50:34.683 [INFO][3753] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="666297f66422a61396958b75c040d768d1c1e1972e378cf710b06ca2d35b283c" Apr 16 00:50:34.699549 containerd[1500]: time="2026-04-16T00:50:34.699283010Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-92s4s,Uid:fdf4c7a4-7fde-4e7a-a0f4-b19bb5b42f73,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"666297f66422a61396958b75c040d768d1c1e1972e378cf710b06ca2d35b283c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:50:34.708526 containerd[1500]: 2026-04-16 00:50:34.107 [INFO][3785] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cd7291476b673130f753fe0e82b184cf2467b1f68bbd02cd9e857c36e8398eae" Apr 16 00:50:34.708526 containerd[1500]: 2026-04-16 00:50:34.108 [INFO][3785] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="cd7291476b673130f753fe0e82b184cf2467b1f68bbd02cd9e857c36e8398eae" iface="eth0" netns="/var/run/netns/cni-08f4f2c1-ee57-ba27-d8fa-2c0d9d3c7c43" Apr 16 00:50:34.708526 containerd[1500]: 2026-04-16 00:50:34.109 [INFO][3785] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cd7291476b673130f753fe0e82b184cf2467b1f68bbd02cd9e857c36e8398eae" iface="eth0" netns="/var/run/netns/cni-08f4f2c1-ee57-ba27-d8fa-2c0d9d3c7c43" Apr 16 00:50:34.708526 containerd[1500]: 2026-04-16 00:50:34.114 [INFO][3785] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cd7291476b673130f753fe0e82b184cf2467b1f68bbd02cd9e857c36e8398eae" iface="eth0" netns="/var/run/netns/cni-08f4f2c1-ee57-ba27-d8fa-2c0d9d3c7c43" Apr 16 00:50:34.708526 containerd[1500]: 2026-04-16 00:50:34.115 [INFO][3785] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cd7291476b673130f753fe0e82b184cf2467b1f68bbd02cd9e857c36e8398eae" Apr 16 00:50:34.708526 containerd[1500]: 2026-04-16 00:50:34.115 [INFO][3785] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cd7291476b673130f753fe0e82b184cf2467b1f68bbd02cd9e857c36e8398eae" Apr 16 00:50:34.708526 containerd[1500]: 2026-04-16 00:50:34.566 [INFO][3879] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cd7291476b673130f753fe0e82b184cf2467b1f68bbd02cd9e857c36e8398eae" HandleID="k8s-pod-network.cd7291476b673130f753fe0e82b184cf2467b1f68bbd02cd9e857c36e8398eae" Workload="srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--qvl6m-eth0" Apr 16 00:50:34.708526 containerd[1500]: 2026-04-16 00:50:34.566 [INFO][3879] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 00:50:34.708526 containerd[1500]: 2026-04-16 00:50:34.668 [INFO][3879] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 00:50:34.708526 containerd[1500]: 2026-04-16 00:50:34.694 [WARNING][3879] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="cd7291476b673130f753fe0e82b184cf2467b1f68bbd02cd9e857c36e8398eae" HandleID="k8s-pod-network.cd7291476b673130f753fe0e82b184cf2467b1f68bbd02cd9e857c36e8398eae" Workload="srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--qvl6m-eth0" Apr 16 00:50:34.708526 containerd[1500]: 2026-04-16 00:50:34.694 [INFO][3879] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cd7291476b673130f753fe0e82b184cf2467b1f68bbd02cd9e857c36e8398eae" HandleID="k8s-pod-network.cd7291476b673130f753fe0e82b184cf2467b1f68bbd02cd9e857c36e8398eae" Workload="srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--qvl6m-eth0" Apr 16 00:50:34.708526 containerd[1500]: 2026-04-16 00:50:34.700 [INFO][3879] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 00:50:34.708526 containerd[1500]: 2026-04-16 00:50:34.704 [INFO][3785] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="cd7291476b673130f753fe0e82b184cf2467b1f68bbd02cd9e857c36e8398eae" Apr 16 00:50:34.719368 kubelet[2699]: E0416 00:50:34.717975 2699 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"666297f66422a61396958b75c040d768d1c1e1972e378cf710b06ca2d35b283c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:50:34.719368 kubelet[2699]: E0416 00:50:34.718070 2699 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"666297f66422a61396958b75c040d768d1c1e1972e378cf710b06ca2d35b283c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-92s4s" Apr 16 00:50:34.721031 kubelet[2699]: E0416 00:50:34.718222 2699 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"666297f66422a61396958b75c040d768d1c1e1972e378cf710b06ca2d35b283c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-92s4s" Apr 16 00:50:34.721407 kubelet[2699]: E0416 00:50:34.721233 2699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-92s4s_kube-system(fdf4c7a4-7fde-4e7a-a0f4-b19bb5b42f73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-92s4s_kube-system(fdf4c7a4-7fde-4e7a-a0f4-b19bb5b42f73)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"666297f66422a61396958b75c040d768d1c1e1972e378cf710b06ca2d35b283c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-92s4s" podUID="fdf4c7a4-7fde-4e7a-a0f4-b19bb5b42f73" Apr 16 00:50:34.745913 containerd[1500]: time="2026-04-16T00:50:34.745620517Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77f78848ff-qvl6m,Uid:485d8d0d-2cc8-413a-863b-a0c37bab4a01,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cd7291476b673130f753fe0e82b184cf2467b1f68bbd02cd9e857c36e8398eae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:50:34.754823 kubelet[2699]: E0416 00:50:34.754737 2699 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd7291476b673130f753fe0e82b184cf2467b1f68bbd02cd9e857c36e8398eae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 00:50:34.755079 kubelet[2699]: E0416 00:50:34.754998 2699 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd7291476b673130f753fe0e82b184cf2467b1f68bbd02cd9e857c36e8398eae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-77f78848ff-qvl6m" Apr 16 00:50:34.755382 kubelet[2699]: E0416 00:50:34.755345 2699 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"cd7291476b673130f753fe0e82b184cf2467b1f68bbd02cd9e857c36e8398eae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-77f78848ff-qvl6m" Apr 16 00:50:34.756339 kubelet[2699]: E0416 00:50:34.755725 2699 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77f78848ff-qvl6m_calico-system(485d8d0d-2cc8-413a-863b-a0c37bab4a01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77f78848ff-qvl6m_calico-system(485d8d0d-2cc8-413a-863b-a0c37bab4a01)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd7291476b673130f753fe0e82b184cf2467b1f68bbd02cd9e857c36e8398eae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-77f78848ff-qvl6m" podUID="485d8d0d-2cc8-413a-863b-a0c37bab4a01" Apr 16 00:50:35.396884 systemd[1]: run-netns-cni\x2d08f4f2c1\x2dee57\x2dba27\x2dd8fa\x2d2c0d9d3c7c43.mount: Deactivated successfully. Apr 16 00:50:35.397072 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cd7291476b673130f753fe0e82b184cf2467b1f68bbd02cd9e857c36e8398eae-shm.mount: Deactivated successfully. Apr 16 00:50:35.397180 systemd[1]: run-netns-cni\x2d6923135e\x2dc8f2\x2d6e67\x2d2efc\x2d55f4d57752fd.mount: Deactivated successfully. Apr 16 00:50:35.397280 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e4a0024d8984ae69fc9b639afbf7c528fd243b7bc3697dc0d4c7203c7bd45fdc-shm.mount: Deactivated successfully. Apr 16 00:50:35.397629 systemd[1]: run-netns-cni\x2dd157fb83\x2d76eb\x2d1bc1\x2d0e69\x2d1b9553e7af28.mount: Deactivated successfully. 
Apr 16 00:50:35.397754 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-666297f66422a61396958b75c040d768d1c1e1972e378cf710b06ca2d35b283c-shm.mount: Deactivated successfully. Apr 16 00:50:35.397871 systemd[1]: run-netns-cni\x2d2caf030c\x2dc2fb\x2d77dc\x2d695d\x2d97323908424d.mount: Deactivated successfully. Apr 16 00:50:35.398005 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3c5e797fd91d8e6e8100ef01d56821469b2ec13b8a52896798e3287ab0880304-shm.mount: Deactivated successfully. Apr 16 00:50:35.593666 containerd[1500]: time="2026-04-16T00:50:35.593212596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77f78848ff-qvl6m,Uid:485d8d0d-2cc8-413a-863b-a0c37bab4a01,Namespace:calico-system,Attempt:0,}" Apr 16 00:50:35.606244 containerd[1500]: time="2026-04-16T00:50:35.606203501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-tzjcc,Uid:bf0b134d-2aea-439f-880e-27db3db5afb6,Namespace:calico-system,Attempt:0,}" Apr 16 00:50:35.611948 containerd[1500]: time="2026-04-16T00:50:35.608993870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4f5m5,Uid:2c74cad1-a379-4d46-b91e-dca9240bc056,Namespace:calico-system,Attempt:0,}" Apr 16 00:50:35.612697 containerd[1500]: time="2026-04-16T00:50:35.612656854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bbddd6b8d-wmztx,Uid:9e73dc73-5c52-427a-852c-44daf423421f,Namespace:calico-system,Attempt:0,}" Apr 16 00:50:35.622195 containerd[1500]: time="2026-04-16T00:50:35.619756133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77f78848ff-bqnrn,Uid:b0b987cc-2500-4310-a6ca-9cb5017fbcca,Namespace:calico-system,Attempt:0,}" Apr 16 00:50:35.622691 containerd[1500]: time="2026-04-16T00:50:35.622659712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qbpkz,Uid:2042a77a-7eb0-4e4b-b526-42c43f2e5758,Namespace:kube-system,Attempt:0,}" Apr 16 
00:50:35.626386 containerd[1500]: time="2026-04-16T00:50:35.626110648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-92s4s,Uid:fdf4c7a4-7fde-4e7a-a0f4-b19bb5b42f73,Namespace:kube-system,Attempt:0,}" Apr 16 00:50:35.706991 kubelet[2699]: I0416 00:50:35.703895 2699 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp5ls\" (UniqueName: \"kubernetes.io/projected/a75fef46-43fd-43b1-8d18-836eda8a805f-kube-api-access-vp5ls\") pod \"a75fef46-43fd-43b1-8d18-836eda8a805f\" (UID: \"a75fef46-43fd-43b1-8d18-836eda8a805f\") " Apr 16 00:50:35.779700 kubelet[2699]: I0416 00:50:35.775741 2699 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a75fef46-43fd-43b1-8d18-836eda8a805f-kube-api-access-vp5ls" (OuterVolumeSpecName: "kube-api-access-vp5ls") pod "a75fef46-43fd-43b1-8d18-836eda8a805f" (UID: "a75fef46-43fd-43b1-8d18-836eda8a805f"). InnerVolumeSpecName "kube-api-access-vp5ls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 00:50:35.805466 kubelet[2699]: I0416 00:50:35.805404 2699 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a75fef46-43fd-43b1-8d18-836eda8a805f-whisker-ca-bundle\") pod \"a75fef46-43fd-43b1-8d18-836eda8a805f\" (UID: \"a75fef46-43fd-43b1-8d18-836eda8a805f\") " Apr 16 00:50:35.806522 kubelet[2699]: I0416 00:50:35.805497 2699 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a75fef46-43fd-43b1-8d18-836eda8a805f-whisker-backend-key-pair\") pod \"a75fef46-43fd-43b1-8d18-836eda8a805f\" (UID: \"a75fef46-43fd-43b1-8d18-836eda8a805f\") " Apr 16 00:50:35.806522 kubelet[2699]: I0416 00:50:35.805534 2699 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: 
\"kubernetes.io/configmap/a75fef46-43fd-43b1-8d18-836eda8a805f-nginx-config\") pod \"a75fef46-43fd-43b1-8d18-836eda8a805f\" (UID: \"a75fef46-43fd-43b1-8d18-836eda8a805f\") " Apr 16 00:50:35.806522 kubelet[2699]: I0416 00:50:35.805603 2699 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vp5ls\" (UniqueName: \"kubernetes.io/projected/a75fef46-43fd-43b1-8d18-836eda8a805f-kube-api-access-vp5ls\") on node \"srv-57yav.gb1.brightbox.com\" DevicePath \"\"" Apr 16 00:50:35.809285 kubelet[2699]: I0416 00:50:35.807361 2699 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a75fef46-43fd-43b1-8d18-836eda8a805f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a75fef46-43fd-43b1-8d18-836eda8a805f" (UID: "a75fef46-43fd-43b1-8d18-836eda8a805f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 00:50:35.809285 kubelet[2699]: I0416 00:50:35.808148 2699 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a75fef46-43fd-43b1-8d18-836eda8a805f-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "a75fef46-43fd-43b1-8d18-836eda8a805f" (UID: "a75fef46-43fd-43b1-8d18-836eda8a805f"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 00:50:35.814114 kubelet[2699]: I0416 00:50:35.814042 2699 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a75fef46-43fd-43b1-8d18-836eda8a805f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a75fef46-43fd-43b1-8d18-836eda8a805f" (UID: "a75fef46-43fd-43b1-8d18-836eda8a805f"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 00:50:35.907422 kubelet[2699]: I0416 00:50:35.907296 2699 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a75fef46-43fd-43b1-8d18-836eda8a805f-whisker-ca-bundle\") on node \"srv-57yav.gb1.brightbox.com\" DevicePath \"\"" Apr 16 00:50:35.907422 kubelet[2699]: I0416 00:50:35.907368 2699 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a75fef46-43fd-43b1-8d18-836eda8a805f-whisker-backend-key-pair\") on node \"srv-57yav.gb1.brightbox.com\" DevicePath \"\"" Apr 16 00:50:35.907422 kubelet[2699]: I0416 00:50:35.907388 2699 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a75fef46-43fd-43b1-8d18-836eda8a805f-nginx-config\") on node \"srv-57yav.gb1.brightbox.com\" DevicePath \"\"" Apr 16 00:50:36.153985 systemd[1]: Removed slice kubepods-besteffort-poda75fef46_43fd_43b1_8d18_836eda8a805f.slice - libcontainer container kubepods-besteffort-poda75fef46_43fd_43b1_8d18_836eda8a805f.slice. Apr 16 00:50:36.413836 systemd[1]: var-lib-kubelet-pods-a75fef46\x2d43fd\x2d43b1\x2d8d18\x2d836eda8a805f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvp5ls.mount: Deactivated successfully. Apr 16 00:50:36.414022 systemd[1]: var-lib-kubelet-pods-a75fef46\x2d43fd\x2d43b1\x2d8d18\x2d836eda8a805f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 16 00:50:36.638274 systemd-networkd[1433]: cali015187adcb2: Link UP Apr 16 00:50:36.649321 systemd-networkd[1433]: cali015187adcb2: Gained carrier Apr 16 00:50:36.714529 containerd[1500]: 2026-04-16 00:50:36.175 [ERROR][4022] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 00:50:36.714529 containerd[1500]: 2026-04-16 00:50:36.238 [INFO][4022] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--92s4s-eth0 coredns-66bc5c9577- kube-system fdf4c7a4-7fde-4e7a-a0f4-b19bb5b42f73 876 0 2026-04-16 00:49:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-57yav.gb1.brightbox.com coredns-66bc5c9577-92s4s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali015187adcb2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="54c48d5be54d8106d82f722db20e3d9ded30214fded48bf100a5a57a60f82910" Namespace="kube-system" Pod="coredns-66bc5c9577-92s4s" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--92s4s-" Apr 16 00:50:36.714529 containerd[1500]: 2026-04-16 00:50:36.238 [INFO][4022] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="54c48d5be54d8106d82f722db20e3d9ded30214fded48bf100a5a57a60f82910" Namespace="kube-system" Pod="coredns-66bc5c9577-92s4s" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--92s4s-eth0" Apr 16 00:50:36.714529 containerd[1500]: 2026-04-16 00:50:36.421 [INFO][4114] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="54c48d5be54d8106d82f722db20e3d9ded30214fded48bf100a5a57a60f82910" HandleID="k8s-pod-network.54c48d5be54d8106d82f722db20e3d9ded30214fded48bf100a5a57a60f82910" Workload="srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--92s4s-eth0" Apr 16 00:50:36.714529 containerd[1500]: 2026-04-16 00:50:36.459 [INFO][4114] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="54c48d5be54d8106d82f722db20e3d9ded30214fded48bf100a5a57a60f82910" HandleID="k8s-pod-network.54c48d5be54d8106d82f722db20e3d9ded30214fded48bf100a5a57a60f82910" Workload="srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--92s4s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039c640), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-57yav.gb1.brightbox.com", "pod":"coredns-66bc5c9577-92s4s", "timestamp":"2026-04-16 00:50:36.421805123 +0000 UTC"}, Hostname:"srv-57yav.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002c0dc0)} Apr 16 00:50:36.714529 containerd[1500]: 2026-04-16 00:50:36.470 [INFO][4114] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 00:50:36.714529 containerd[1500]: 2026-04-16 00:50:36.471 [INFO][4114] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 00:50:36.714529 containerd[1500]: 2026-04-16 00:50:36.471 [INFO][4114] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-57yav.gb1.brightbox.com' Apr 16 00:50:36.714529 containerd[1500]: 2026-04-16 00:50:36.484 [INFO][4114] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.54c48d5be54d8106d82f722db20e3d9ded30214fded48bf100a5a57a60f82910" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:36.714529 containerd[1500]: 2026-04-16 00:50:36.505 [INFO][4114] ipam/ipam.go 409: Looking up existing affinities for host host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:36.714529 containerd[1500]: 2026-04-16 00:50:36.530 [INFO][4114] ipam/ipam.go 526: Trying affinity for 192.168.99.192/26 host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:36.714529 containerd[1500]: 2026-04-16 00:50:36.535 [INFO][4114] ipam/ipam.go 160: Attempting to load block cidr=192.168.99.192/26 host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:36.714529 containerd[1500]: 2026-04-16 00:50:36.538 [INFO][4114] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:36.714529 containerd[1500]: 2026-04-16 00:50:36.538 [INFO][4114] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.54c48d5be54d8106d82f722db20e3d9ded30214fded48bf100a5a57a60f82910" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:36.714529 containerd[1500]: 2026-04-16 00:50:36.546 [INFO][4114] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.54c48d5be54d8106d82f722db20e3d9ded30214fded48bf100a5a57a60f82910 Apr 16 00:50:36.714529 containerd[1500]: 2026-04-16 00:50:36.560 [INFO][4114] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.54c48d5be54d8106d82f722db20e3d9ded30214fded48bf100a5a57a60f82910" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:36.714529 containerd[1500]: 2026-04-16 00:50:36.580 [INFO][4114] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.99.193/26] block=192.168.99.192/26 handle="k8s-pod-network.54c48d5be54d8106d82f722db20e3d9ded30214fded48bf100a5a57a60f82910" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:36.714529 containerd[1500]: 2026-04-16 00:50:36.580 [INFO][4114] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.99.193/26] handle="k8s-pod-network.54c48d5be54d8106d82f722db20e3d9ded30214fded48bf100a5a57a60f82910" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:36.714529 containerd[1500]: 2026-04-16 00:50:36.580 [INFO][4114] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 00:50:36.714529 containerd[1500]: 2026-04-16 00:50:36.580 [INFO][4114] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.99.193/26] IPv6=[] ContainerID="54c48d5be54d8106d82f722db20e3d9ded30214fded48bf100a5a57a60f82910" HandleID="k8s-pod-network.54c48d5be54d8106d82f722db20e3d9ded30214fded48bf100a5a57a60f82910" Workload="srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--92s4s-eth0" Apr 16 00:50:36.723362 containerd[1500]: 2026-04-16 00:50:36.593 [INFO][4022] cni-plugin/k8s.go 418: Populated endpoint ContainerID="54c48d5be54d8106d82f722db20e3d9ded30214fded48bf100a5a57a60f82910" Namespace="kube-system" Pod="coredns-66bc5c9577-92s4s" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--92s4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--92s4s-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fdf4c7a4-7fde-4e7a-a0f4-b19bb5b42f73", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 49, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-57yav.gb1.brightbox.com", ContainerID:"", Pod:"coredns-66bc5c9577-92s4s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali015187adcb2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 00:50:36.723362 containerd[1500]: 2026-04-16 00:50:36.599 [INFO][4022] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.193/32] ContainerID="54c48d5be54d8106d82f722db20e3d9ded30214fded48bf100a5a57a60f82910" Namespace="kube-system" Pod="coredns-66bc5c9577-92s4s" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--92s4s-eth0" Apr 16 00:50:36.723362 containerd[1500]: 2026-04-16 00:50:36.600 [INFO][4022] cni-plugin/dataplane_linux.go 69: Setting 
the host side veth name to cali015187adcb2 ContainerID="54c48d5be54d8106d82f722db20e3d9ded30214fded48bf100a5a57a60f82910" Namespace="kube-system" Pod="coredns-66bc5c9577-92s4s" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--92s4s-eth0" Apr 16 00:50:36.723362 containerd[1500]: 2026-04-16 00:50:36.673 [INFO][4022] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="54c48d5be54d8106d82f722db20e3d9ded30214fded48bf100a5a57a60f82910" Namespace="kube-system" Pod="coredns-66bc5c9577-92s4s" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--92s4s-eth0" Apr 16 00:50:36.723362 containerd[1500]: 2026-04-16 00:50:36.678 [INFO][4022] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="54c48d5be54d8106d82f722db20e3d9ded30214fded48bf100a5a57a60f82910" Namespace="kube-system" Pod="coredns-66bc5c9577-92s4s" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--92s4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--92s4s-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"fdf4c7a4-7fde-4e7a-a0f4-b19bb5b42f73", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 49, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-57yav.gb1.brightbox.com", 
ContainerID:"54c48d5be54d8106d82f722db20e3d9ded30214fded48bf100a5a57a60f82910", Pod:"coredns-66bc5c9577-92s4s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali015187adcb2", MAC:"f6:42:6d:2c:0f:77", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 00:50:36.723745 containerd[1500]: 2026-04-16 00:50:36.703 [INFO][4022] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="54c48d5be54d8106d82f722db20e3d9ded30214fded48bf100a5a57a60f82910" Namespace="kube-system" Pod="coredns-66bc5c9577-92s4s" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--92s4s-eth0" Apr 16 00:50:36.766505 systemd-networkd[1433]: cali88b92b6cc45: Link UP Apr 16 00:50:36.770198 systemd-networkd[1433]: cali88b92b6cc45: Gained carrier Apr 16 00:50:36.839562 systemd[1]: Created slice kubepods-besteffort-podd9796998_11a4_4044_ac5e_25ca08f8c089.slice - libcontainer container kubepods-besteffort-podd9796998_11a4_4044_ac5e_25ca08f8c089.slice. 
Apr 16 00:50:36.847575 containerd[1500]: 2026-04-16 00:50:36.162 [ERROR][4030] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 00:50:36.847575 containerd[1500]: 2026-04-16 00:50:36.253 [INFO][4030] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--57yav.gb1.brightbox.com-k8s-calico--kube--controllers--7bbddd6b8d--wmztx-eth0 calico-kube-controllers-7bbddd6b8d- calico-system 9e73dc73-5c52-427a-852c-44daf423421f 880 0 2026-04-16 00:50:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7bbddd6b8d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-57yav.gb1.brightbox.com calico-kube-controllers-7bbddd6b8d-wmztx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali88b92b6cc45 [] [] }} ContainerID="73761b4e0ded29acfbd8bfd4dbb26f45ec48c382d0b1de609e485d190e90c455" Namespace="calico-system" Pod="calico-kube-controllers-7bbddd6b8d-wmztx" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-calico--kube--controllers--7bbddd6b8d--wmztx-" Apr 16 00:50:36.847575 containerd[1500]: 2026-04-16 00:50:36.253 [INFO][4030] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="73761b4e0ded29acfbd8bfd4dbb26f45ec48c382d0b1de609e485d190e90c455" Namespace="calico-system" Pod="calico-kube-controllers-7bbddd6b8d-wmztx" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-calico--kube--controllers--7bbddd6b8d--wmztx-eth0" Apr 16 00:50:36.847575 containerd[1500]: 2026-04-16 00:50:36.474 [INFO][4127] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="73761b4e0ded29acfbd8bfd4dbb26f45ec48c382d0b1de609e485d190e90c455" 
HandleID="k8s-pod-network.73761b4e0ded29acfbd8bfd4dbb26f45ec48c382d0b1de609e485d190e90c455" Workload="srv--57yav.gb1.brightbox.com-k8s-calico--kube--controllers--7bbddd6b8d--wmztx-eth0" Apr 16 00:50:36.847575 containerd[1500]: 2026-04-16 00:50:36.493 [INFO][4127] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="73761b4e0ded29acfbd8bfd4dbb26f45ec48c382d0b1de609e485d190e90c455" HandleID="k8s-pod-network.73761b4e0ded29acfbd8bfd4dbb26f45ec48c382d0b1de609e485d190e90c455" Workload="srv--57yav.gb1.brightbox.com-k8s-calico--kube--controllers--7bbddd6b8d--wmztx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000263f30), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-57yav.gb1.brightbox.com", "pod":"calico-kube-controllers-7bbddd6b8d-wmztx", "timestamp":"2026-04-16 00:50:36.474695634 +0000 UTC"}, Hostname:"srv-57yav.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002e9760)} Apr 16 00:50:36.847575 containerd[1500]: 2026-04-16 00:50:36.493 [INFO][4127] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 00:50:36.847575 containerd[1500]: 2026-04-16 00:50:36.581 [INFO][4127] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 00:50:36.847575 containerd[1500]: 2026-04-16 00:50:36.581 [INFO][4127] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-57yav.gb1.brightbox.com' Apr 16 00:50:36.847575 containerd[1500]: 2026-04-16 00:50:36.590 [INFO][4127] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.73761b4e0ded29acfbd8bfd4dbb26f45ec48c382d0b1de609e485d190e90c455" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:36.847575 containerd[1500]: 2026-04-16 00:50:36.610 [INFO][4127] ipam/ipam.go 409: Looking up existing affinities for host host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:36.847575 containerd[1500]: 2026-04-16 00:50:36.655 [INFO][4127] ipam/ipam.go 526: Trying affinity for 192.168.99.192/26 host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:36.847575 containerd[1500]: 2026-04-16 00:50:36.665 [INFO][4127] ipam/ipam.go 160: Attempting to load block cidr=192.168.99.192/26 host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:36.847575 containerd[1500]: 2026-04-16 00:50:36.682 [INFO][4127] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:36.847575 containerd[1500]: 2026-04-16 00:50:36.682 [INFO][4127] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.73761b4e0ded29acfbd8bfd4dbb26f45ec48c382d0b1de609e485d190e90c455" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:36.847575 containerd[1500]: 2026-04-16 00:50:36.692 [INFO][4127] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.73761b4e0ded29acfbd8bfd4dbb26f45ec48c382d0b1de609e485d190e90c455 Apr 16 00:50:36.847575 containerd[1500]: 2026-04-16 00:50:36.709 [INFO][4127] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.73761b4e0ded29acfbd8bfd4dbb26f45ec48c382d0b1de609e485d190e90c455" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:36.847575 containerd[1500]: 2026-04-16 00:50:36.748 [INFO][4127] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.99.194/26] block=192.168.99.192/26 handle="k8s-pod-network.73761b4e0ded29acfbd8bfd4dbb26f45ec48c382d0b1de609e485d190e90c455" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:36.847575 containerd[1500]: 2026-04-16 00:50:36.748 [INFO][4127] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.99.194/26] handle="k8s-pod-network.73761b4e0ded29acfbd8bfd4dbb26f45ec48c382d0b1de609e485d190e90c455" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:36.847575 containerd[1500]: 2026-04-16 00:50:36.748 [INFO][4127] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 00:50:36.847575 containerd[1500]: 2026-04-16 00:50:36.748 [INFO][4127] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.99.194/26] IPv6=[] ContainerID="73761b4e0ded29acfbd8bfd4dbb26f45ec48c382d0b1de609e485d190e90c455" HandleID="k8s-pod-network.73761b4e0ded29acfbd8bfd4dbb26f45ec48c382d0b1de609e485d190e90c455" Workload="srv--57yav.gb1.brightbox.com-k8s-calico--kube--controllers--7bbddd6b8d--wmztx-eth0" Apr 16 00:50:36.848626 containerd[1500]: 2026-04-16 00:50:36.755 [INFO][4030] cni-plugin/k8s.go 418: Populated endpoint ContainerID="73761b4e0ded29acfbd8bfd4dbb26f45ec48c382d0b1de609e485d190e90c455" Namespace="calico-system" Pod="calico-kube-controllers-7bbddd6b8d-wmztx" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-calico--kube--controllers--7bbddd6b8d--wmztx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--57yav.gb1.brightbox.com-k8s-calico--kube--controllers--7bbddd6b8d--wmztx-eth0", GenerateName:"calico-kube-controllers-7bbddd6b8d-", Namespace:"calico-system", SelfLink:"", UID:"9e73dc73-5c52-427a-852c-44daf423421f", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 50, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bbddd6b8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-57yav.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-7bbddd6b8d-wmztx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali88b92b6cc45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 00:50:36.848626 containerd[1500]: 2026-04-16 00:50:36.756 [INFO][4030] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.194/32] ContainerID="73761b4e0ded29acfbd8bfd4dbb26f45ec48c382d0b1de609e485d190e90c455" Namespace="calico-system" Pod="calico-kube-controllers-7bbddd6b8d-wmztx" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-calico--kube--controllers--7bbddd6b8d--wmztx-eth0" Apr 16 00:50:36.848626 containerd[1500]: 2026-04-16 00:50:36.757 [INFO][4030] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali88b92b6cc45 ContainerID="73761b4e0ded29acfbd8bfd4dbb26f45ec48c382d0b1de609e485d190e90c455" Namespace="calico-system" Pod="calico-kube-controllers-7bbddd6b8d-wmztx" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-calico--kube--controllers--7bbddd6b8d--wmztx-eth0" Apr 16 00:50:36.848626 containerd[1500]: 2026-04-16 00:50:36.769 [INFO][4030] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="73761b4e0ded29acfbd8bfd4dbb26f45ec48c382d0b1de609e485d190e90c455" Namespace="calico-system" Pod="calico-kube-controllers-7bbddd6b8d-wmztx" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-calico--kube--controllers--7bbddd6b8d--wmztx-eth0" Apr 16 00:50:36.848626 containerd[1500]: 2026-04-16 00:50:36.771 [INFO][4030] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="73761b4e0ded29acfbd8bfd4dbb26f45ec48c382d0b1de609e485d190e90c455" Namespace="calico-system" Pod="calico-kube-controllers-7bbddd6b8d-wmztx" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-calico--kube--controllers--7bbddd6b8d--wmztx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--57yav.gb1.brightbox.com-k8s-calico--kube--controllers--7bbddd6b8d--wmztx-eth0", GenerateName:"calico-kube-controllers-7bbddd6b8d-", Namespace:"calico-system", SelfLink:"", UID:"9e73dc73-5c52-427a-852c-44daf423421f", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 50, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bbddd6b8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-57yav.gb1.brightbox.com", ContainerID:"73761b4e0ded29acfbd8bfd4dbb26f45ec48c382d0b1de609e485d190e90c455", Pod:"calico-kube-controllers-7bbddd6b8d-wmztx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.194/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali88b92b6cc45", MAC:"f2:93:3a:98:e3:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 00:50:36.848626 containerd[1500]: 2026-04-16 00:50:36.825 [INFO][4030] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="73761b4e0ded29acfbd8bfd4dbb26f45ec48c382d0b1de609e485d190e90c455" Namespace="calico-system" Pod="calico-kube-controllers-7bbddd6b8d-wmztx" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-calico--kube--controllers--7bbddd6b8d--wmztx-eth0" Apr 16 00:50:36.885495 containerd[1500]: time="2026-04-16T00:50:36.885350161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 00:50:36.885495 containerd[1500]: time="2026-04-16T00:50:36.885435984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 00:50:36.885982 containerd[1500]: time="2026-04-16T00:50:36.885739920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:50:36.885982 containerd[1500]: time="2026-04-16T00:50:36.885903547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:50:36.927378 kubelet[2699]: I0416 00:50:36.927327 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjg4d\" (UniqueName: \"kubernetes.io/projected/d9796998-11a4-4044-ac5e-25ca08f8c089-kube-api-access-hjg4d\") pod \"whisker-8587994885-rs2d6\" (UID: \"d9796998-11a4-4044-ac5e-25ca08f8c089\") " pod="calico-system/whisker-8587994885-rs2d6" Apr 16 00:50:36.928553 kubelet[2699]: I0416 00:50:36.928219 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/d9796998-11a4-4044-ac5e-25ca08f8c089-nginx-config\") pod \"whisker-8587994885-rs2d6\" (UID: \"d9796998-11a4-4044-ac5e-25ca08f8c089\") " pod="calico-system/whisker-8587994885-rs2d6" Apr 16 00:50:36.928553 kubelet[2699]: I0416 00:50:36.928308 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9796998-11a4-4044-ac5e-25ca08f8c089-whisker-ca-bundle\") pod \"whisker-8587994885-rs2d6\" (UID: \"d9796998-11a4-4044-ac5e-25ca08f8c089\") " pod="calico-system/whisker-8587994885-rs2d6" Apr 16 00:50:36.928718 kubelet[2699]: I0416 00:50:36.928410 2699 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d9796998-11a4-4044-ac5e-25ca08f8c089-whisker-backend-key-pair\") pod \"whisker-8587994885-rs2d6\" (UID: \"d9796998-11a4-4044-ac5e-25ca08f8c089\") " pod="calico-system/whisker-8587994885-rs2d6" Apr 16 00:50:36.954917 containerd[1500]: time="2026-04-16T00:50:36.949902634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 00:50:36.954917 containerd[1500]: time="2026-04-16T00:50:36.950137724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 00:50:36.954917 containerd[1500]: time="2026-04-16T00:50:36.950170809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:50:36.954917 containerd[1500]: time="2026-04-16T00:50:36.952283922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:50:37.006870 systemd[1]: run-containerd-runc-k8s.io-73761b4e0ded29acfbd8bfd4dbb26f45ec48c382d0b1de609e485d190e90c455-runc.4FOTp5.mount: Deactivated successfully. Apr 16 00:50:37.029663 systemd[1]: Started cri-containerd-54c48d5be54d8106d82f722db20e3d9ded30214fded48bf100a5a57a60f82910.scope - libcontainer container 54c48d5be54d8106d82f722db20e3d9ded30214fded48bf100a5a57a60f82910. Apr 16 00:50:37.037136 systemd[1]: Started cri-containerd-73761b4e0ded29acfbd8bfd4dbb26f45ec48c382d0b1de609e485d190e90c455.scope - libcontainer container 73761b4e0ded29acfbd8bfd4dbb26f45ec48c382d0b1de609e485d190e90c455. 
Apr 16 00:50:37.075584 systemd-networkd[1433]: calif334cb0b1a0: Link UP Apr 16 00:50:37.085797 systemd-networkd[1433]: calif334cb0b1a0: Gained carrier Apr 16 00:50:37.136147 containerd[1500]: 2026-04-16 00:50:35.968 [ERROR][3960] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 00:50:37.136147 containerd[1500]: 2026-04-16 00:50:36.055 [INFO][3960] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--qvl6m-eth0 calico-apiserver-77f78848ff- calico-system 485d8d0d-2cc8-413a-863b-a0c37bab4a01 878 0 2026-04-16 00:50:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77f78848ff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-57yav.gb1.brightbox.com calico-apiserver-77f78848ff-qvl6m eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calif334cb0b1a0 [] [] }} ContainerID="becdb12ff02f225f146061e28abbdb47d3b8c7b9e29c9a40c6bc9fab133eae9b" Namespace="calico-system" Pod="calico-apiserver-77f78848ff-qvl6m" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--qvl6m-" Apr 16 00:50:37.136147 containerd[1500]: 2026-04-16 00:50:36.059 [INFO][3960] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="becdb12ff02f225f146061e28abbdb47d3b8c7b9e29c9a40c6bc9fab133eae9b" Namespace="calico-system" Pod="calico-apiserver-77f78848ff-qvl6m" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--qvl6m-eth0" Apr 16 00:50:37.136147 containerd[1500]: 2026-04-16 00:50:36.515 [INFO][4058] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="becdb12ff02f225f146061e28abbdb47d3b8c7b9e29c9a40c6bc9fab133eae9b" HandleID="k8s-pod-network.becdb12ff02f225f146061e28abbdb47d3b8c7b9e29c9a40c6bc9fab133eae9b" Workload="srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--qvl6m-eth0" Apr 16 00:50:37.136147 containerd[1500]: 2026-04-16 00:50:36.543 [INFO][4058] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="becdb12ff02f225f146061e28abbdb47d3b8c7b9e29c9a40c6bc9fab133eae9b" HandleID="k8s-pod-network.becdb12ff02f225f146061e28abbdb47d3b8c7b9e29c9a40c6bc9fab133eae9b" Workload="srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--qvl6m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122a50), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-57yav.gb1.brightbox.com", "pod":"calico-apiserver-77f78848ff-qvl6m", "timestamp":"2026-04-16 00:50:36.515734158 +0000 UTC"}, Hostname:"srv-57yav.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000170580)} Apr 16 00:50:37.136147 containerd[1500]: 2026-04-16 00:50:36.543 [INFO][4058] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 00:50:37.136147 containerd[1500]: 2026-04-16 00:50:36.748 [INFO][4058] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 00:50:37.136147 containerd[1500]: 2026-04-16 00:50:36.748 [INFO][4058] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-57yav.gb1.brightbox.com' Apr 16 00:50:37.136147 containerd[1500]: 2026-04-16 00:50:36.763 [INFO][4058] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.becdb12ff02f225f146061e28abbdb47d3b8c7b9e29c9a40c6bc9fab133eae9b" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.136147 containerd[1500]: 2026-04-16 00:50:36.887 [INFO][4058] ipam/ipam.go 409: Looking up existing affinities for host host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.136147 containerd[1500]: 2026-04-16 00:50:36.962 [INFO][4058] ipam/ipam.go 526: Trying affinity for 192.168.99.192/26 host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.136147 containerd[1500]: 2026-04-16 00:50:36.980 [INFO][4058] ipam/ipam.go 160: Attempting to load block cidr=192.168.99.192/26 host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.136147 containerd[1500]: 2026-04-16 00:50:36.988 [INFO][4058] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.136147 containerd[1500]: 2026-04-16 00:50:36.989 [INFO][4058] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.becdb12ff02f225f146061e28abbdb47d3b8c7b9e29c9a40c6bc9fab133eae9b" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.136147 containerd[1500]: 2026-04-16 00:50:36.994 [INFO][4058] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.becdb12ff02f225f146061e28abbdb47d3b8c7b9e29c9a40c6bc9fab133eae9b Apr 16 00:50:37.136147 containerd[1500]: 2026-04-16 00:50:37.024 [INFO][4058] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.becdb12ff02f225f146061e28abbdb47d3b8c7b9e29c9a40c6bc9fab133eae9b" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.136147 containerd[1500]: 2026-04-16 00:50:37.054 [INFO][4058] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.99.195/26] block=192.168.99.192/26 handle="k8s-pod-network.becdb12ff02f225f146061e28abbdb47d3b8c7b9e29c9a40c6bc9fab133eae9b" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.136147 containerd[1500]: 2026-04-16 00:50:37.054 [INFO][4058] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.99.195/26] handle="k8s-pod-network.becdb12ff02f225f146061e28abbdb47d3b8c7b9e29c9a40c6bc9fab133eae9b" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.136147 containerd[1500]: 2026-04-16 00:50:37.061 [INFO][4058] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 00:50:37.136147 containerd[1500]: 2026-04-16 00:50:37.061 [INFO][4058] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.99.195/26] IPv6=[] ContainerID="becdb12ff02f225f146061e28abbdb47d3b8c7b9e29c9a40c6bc9fab133eae9b" HandleID="k8s-pod-network.becdb12ff02f225f146061e28abbdb47d3b8c7b9e29c9a40c6bc9fab133eae9b" Workload="srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--qvl6m-eth0" Apr 16 00:50:37.139642 containerd[1500]: 2026-04-16 00:50:37.068 [INFO][3960] cni-plugin/k8s.go 418: Populated endpoint ContainerID="becdb12ff02f225f146061e28abbdb47d3b8c7b9e29c9a40c6bc9fab133eae9b" Namespace="calico-system" Pod="calico-apiserver-77f78848ff-qvl6m" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--qvl6m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--qvl6m-eth0", GenerateName:"calico-apiserver-77f78848ff-", Namespace:"calico-system", SelfLink:"", UID:"485d8d0d-2cc8-413a-863b-a0c37bab4a01", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 50, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77f78848ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-57yav.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-77f78848ff-qvl6m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif334cb0b1a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 00:50:37.139642 containerd[1500]: 2026-04-16 00:50:37.068 [INFO][3960] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.195/32] ContainerID="becdb12ff02f225f146061e28abbdb47d3b8c7b9e29c9a40c6bc9fab133eae9b" Namespace="calico-system" Pod="calico-apiserver-77f78848ff-qvl6m" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--qvl6m-eth0" Apr 16 00:50:37.139642 containerd[1500]: 2026-04-16 00:50:37.069 [INFO][3960] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif334cb0b1a0 ContainerID="becdb12ff02f225f146061e28abbdb47d3b8c7b9e29c9a40c6bc9fab133eae9b" Namespace="calico-system" Pod="calico-apiserver-77f78848ff-qvl6m" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--qvl6m-eth0" Apr 16 00:50:37.139642 containerd[1500]: 2026-04-16 00:50:37.095 [INFO][3960] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="becdb12ff02f225f146061e28abbdb47d3b8c7b9e29c9a40c6bc9fab133eae9b" Namespace="calico-system" 
Pod="calico-apiserver-77f78848ff-qvl6m" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--qvl6m-eth0" Apr 16 00:50:37.139642 containerd[1500]: 2026-04-16 00:50:37.103 [INFO][3960] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="becdb12ff02f225f146061e28abbdb47d3b8c7b9e29c9a40c6bc9fab133eae9b" Namespace="calico-system" Pod="calico-apiserver-77f78848ff-qvl6m" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--qvl6m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--qvl6m-eth0", GenerateName:"calico-apiserver-77f78848ff-", Namespace:"calico-system", SelfLink:"", UID:"485d8d0d-2cc8-413a-863b-a0c37bab4a01", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 50, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77f78848ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-57yav.gb1.brightbox.com", ContainerID:"becdb12ff02f225f146061e28abbdb47d3b8c7b9e29c9a40c6bc9fab133eae9b", Pod:"calico-apiserver-77f78848ff-qvl6m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif334cb0b1a0", 
MAC:"26:a7:ea:58:39:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 00:50:37.139642 containerd[1500]: 2026-04-16 00:50:37.130 [INFO][3960] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="becdb12ff02f225f146061e28abbdb47d3b8c7b9e29c9a40c6bc9fab133eae9b" Namespace="calico-system" Pod="calico-apiserver-77f78848ff-qvl6m" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--qvl6m-eth0" Apr 16 00:50:37.155077 containerd[1500]: time="2026-04-16T00:50:37.154534050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8587994885-rs2d6,Uid:d9796998-11a4-4044-ac5e-25ca08f8c089,Namespace:calico-system,Attempt:0,}" Apr 16 00:50:37.163368 systemd-networkd[1433]: cali391de30d6de: Link UP Apr 16 00:50:37.170383 systemd-networkd[1433]: cali391de30d6de: Gained carrier Apr 16 00:50:37.317878 containerd[1500]: 2026-04-16 00:50:36.108 [ERROR][4011] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 00:50:37.317878 containerd[1500]: 2026-04-16 00:50:36.184 [INFO][4011] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--qbpkz-eth0 coredns-66bc5c9577- kube-system 2042a77a-7eb0-4e4b-b526-42c43f2e5758 877 0 2026-04-16 00:49:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-57yav.gb1.brightbox.com coredns-66bc5c9577-qbpkz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali391de30d6de [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} 
ContainerID="456054bd4f59d103cbbb60d3dc4ac6f56a79c4f4505e03dc00862ff1d0f99a11" Namespace="kube-system" Pod="coredns-66bc5c9577-qbpkz" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--qbpkz-" Apr 16 00:50:37.317878 containerd[1500]: 2026-04-16 00:50:36.184 [INFO][4011] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="456054bd4f59d103cbbb60d3dc4ac6f56a79c4f4505e03dc00862ff1d0f99a11" Namespace="kube-system" Pod="coredns-66bc5c9577-qbpkz" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--qbpkz-eth0" Apr 16 00:50:37.317878 containerd[1500]: 2026-04-16 00:50:36.546 [INFO][4103] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="456054bd4f59d103cbbb60d3dc4ac6f56a79c4f4505e03dc00862ff1d0f99a11" HandleID="k8s-pod-network.456054bd4f59d103cbbb60d3dc4ac6f56a79c4f4505e03dc00862ff1d0f99a11" Workload="srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--qbpkz-eth0" Apr 16 00:50:37.317878 containerd[1500]: 2026-04-16 00:50:36.577 [INFO][4103] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="456054bd4f59d103cbbb60d3dc4ac6f56a79c4f4505e03dc00862ff1d0f99a11" HandleID="k8s-pod-network.456054bd4f59d103cbbb60d3dc4ac6f56a79c4f4505e03dc00862ff1d0f99a11" Workload="srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--qbpkz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000556c30), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-57yav.gb1.brightbox.com", "pod":"coredns-66bc5c9577-qbpkz", "timestamp":"2026-04-16 00:50:36.546952607 +0000 UTC"}, Hostname:"srv-57yav.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00069e000)} Apr 16 00:50:37.317878 containerd[1500]: 2026-04-16 00:50:36.578 [INFO][4103] ipam/ipam_plugin.go 438: About to acquire 
host-wide IPAM lock. Apr 16 00:50:37.317878 containerd[1500]: 2026-04-16 00:50:37.058 [INFO][4103] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 00:50:37.317878 containerd[1500]: 2026-04-16 00:50:37.059 [INFO][4103] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-57yav.gb1.brightbox.com' Apr 16 00:50:37.317878 containerd[1500]: 2026-04-16 00:50:37.071 [INFO][4103] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.456054bd4f59d103cbbb60d3dc4ac6f56a79c4f4505e03dc00862ff1d0f99a11" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.317878 containerd[1500]: 2026-04-16 00:50:37.097 [INFO][4103] ipam/ipam.go 409: Looking up existing affinities for host host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.317878 containerd[1500]: 2026-04-16 00:50:37.111 [INFO][4103] ipam/ipam.go 526: Trying affinity for 192.168.99.192/26 host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.317878 containerd[1500]: 2026-04-16 00:50:37.114 [INFO][4103] ipam/ipam.go 160: Attempting to load block cidr=192.168.99.192/26 host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.317878 containerd[1500]: 2026-04-16 00:50:37.117 [INFO][4103] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.317878 containerd[1500]: 2026-04-16 00:50:37.117 [INFO][4103] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.456054bd4f59d103cbbb60d3dc4ac6f56a79c4f4505e03dc00862ff1d0f99a11" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.317878 containerd[1500]: 2026-04-16 00:50:37.120 [INFO][4103] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.456054bd4f59d103cbbb60d3dc4ac6f56a79c4f4505e03dc00862ff1d0f99a11 Apr 16 00:50:37.317878 containerd[1500]: 2026-04-16 00:50:37.126 [INFO][4103] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.99.192/26 
handle="k8s-pod-network.456054bd4f59d103cbbb60d3dc4ac6f56a79c4f4505e03dc00862ff1d0f99a11" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.317878 containerd[1500]: 2026-04-16 00:50:37.147 [INFO][4103] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.99.196/26] block=192.168.99.192/26 handle="k8s-pod-network.456054bd4f59d103cbbb60d3dc4ac6f56a79c4f4505e03dc00862ff1d0f99a11" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.317878 containerd[1500]: 2026-04-16 00:50:37.147 [INFO][4103] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.99.196/26] handle="k8s-pod-network.456054bd4f59d103cbbb60d3dc4ac6f56a79c4f4505e03dc00862ff1d0f99a11" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.317878 containerd[1500]: 2026-04-16 00:50:37.147 [INFO][4103] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 00:50:37.317878 containerd[1500]: 2026-04-16 00:50:37.147 [INFO][4103] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.99.196/26] IPv6=[] ContainerID="456054bd4f59d103cbbb60d3dc4ac6f56a79c4f4505e03dc00862ff1d0f99a11" HandleID="k8s-pod-network.456054bd4f59d103cbbb60d3dc4ac6f56a79c4f4505e03dc00862ff1d0f99a11" Workload="srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--qbpkz-eth0" Apr 16 00:50:37.320194 containerd[1500]: 2026-04-16 00:50:37.154 [INFO][4011] cni-plugin/k8s.go 418: Populated endpoint ContainerID="456054bd4f59d103cbbb60d3dc4ac6f56a79c4f4505e03dc00862ff1d0f99a11" Namespace="kube-system" Pod="coredns-66bc5c9577-qbpkz" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--qbpkz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--qbpkz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"2042a77a-7eb0-4e4b-b526-42c43f2e5758", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 
49, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-57yav.gb1.brightbox.com", ContainerID:"", Pod:"coredns-66bc5c9577-qbpkz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali391de30d6de", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 00:50:37.320194 containerd[1500]: 2026-04-16 00:50:37.155 [INFO][4011] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.196/32] ContainerID="456054bd4f59d103cbbb60d3dc4ac6f56a79c4f4505e03dc00862ff1d0f99a11" Namespace="kube-system" Pod="coredns-66bc5c9577-qbpkz" 
WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--qbpkz-eth0" Apr 16 00:50:37.320194 containerd[1500]: 2026-04-16 00:50:37.155 [INFO][4011] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali391de30d6de ContainerID="456054bd4f59d103cbbb60d3dc4ac6f56a79c4f4505e03dc00862ff1d0f99a11" Namespace="kube-system" Pod="coredns-66bc5c9577-qbpkz" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--qbpkz-eth0" Apr 16 00:50:37.320194 containerd[1500]: 2026-04-16 00:50:37.206 [INFO][4011] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="456054bd4f59d103cbbb60d3dc4ac6f56a79c4f4505e03dc00862ff1d0f99a11" Namespace="kube-system" Pod="coredns-66bc5c9577-qbpkz" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--qbpkz-eth0" Apr 16 00:50:37.320194 containerd[1500]: 2026-04-16 00:50:37.234 [INFO][4011] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="456054bd4f59d103cbbb60d3dc4ac6f56a79c4f4505e03dc00862ff1d0f99a11" Namespace="kube-system" Pod="coredns-66bc5c9577-qbpkz" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--qbpkz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--qbpkz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"2042a77a-7eb0-4e4b-b526-42c43f2e5758", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 49, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-57yav.gb1.brightbox.com", ContainerID:"456054bd4f59d103cbbb60d3dc4ac6f56a79c4f4505e03dc00862ff1d0f99a11", Pod:"coredns-66bc5c9577-qbpkz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali391de30d6de", MAC:"2a:1a:94:b8:45:d5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 00:50:37.320806 containerd[1500]: 2026-04-16 00:50:37.264 [INFO][4011] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="456054bd4f59d103cbbb60d3dc4ac6f56a79c4f4505e03dc00862ff1d0f99a11" Namespace="kube-system" Pod="coredns-66bc5c9577-qbpkz" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-coredns--66bc5c9577--qbpkz-eth0" Apr 16 00:50:37.380202 systemd-networkd[1433]: cali6f2a9f51a42: Link UP Apr 16 00:50:37.386474 systemd-networkd[1433]: cali6f2a9f51a42: Gained carrier Apr 16 00:50:37.444309 containerd[1500]: time="2026-04-16T00:50:37.443864844Z" level=info msg="loading 
plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 00:50:37.444309 containerd[1500]: time="2026-04-16T00:50:37.444054342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 00:50:37.444309 containerd[1500]: time="2026-04-16T00:50:37.444078942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:50:37.445433 containerd[1500]: time="2026-04-16T00:50:37.444764609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:50:37.463102 containerd[1500]: time="2026-04-16T00:50:37.460401564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bbddd6b8d-wmztx,Uid:9e73dc73-5c52-427a-852c-44daf423421f,Namespace:calico-system,Attempt:0,} returns sandbox id \"73761b4e0ded29acfbd8bfd4dbb26f45ec48c382d0b1de609e485d190e90c455\"" Apr 16 00:50:37.499464 containerd[1500]: 2026-04-16 00:50:36.016 [ERROR][3973] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 00:50:37.499464 containerd[1500]: 2026-04-16 00:50:36.105 [INFO][3973] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--57yav.gb1.brightbox.com-k8s-csi--node--driver--4f5m5-eth0 csi-node-driver- calico-system 2c74cad1-a379-4d46-b91e-dca9240bc056 874 0 2026-04-16 00:50:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 
srv-57yav.gb1.brightbox.com csi-node-driver-4f5m5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6f2a9f51a42 [] [] }} ContainerID="dcb1615527bec82d7f9abbc1851334f4d974e27c84afc07cbf8b5053897ec9d4" Namespace="calico-system" Pod="csi-node-driver-4f5m5" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-csi--node--driver--4f5m5-" Apr 16 00:50:37.499464 containerd[1500]: 2026-04-16 00:50:36.108 [INFO][3973] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dcb1615527bec82d7f9abbc1851334f4d974e27c84afc07cbf8b5053897ec9d4" Namespace="calico-system" Pod="csi-node-driver-4f5m5" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-csi--node--driver--4f5m5-eth0" Apr 16 00:50:37.499464 containerd[1500]: 2026-04-16 00:50:36.565 [INFO][4075] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dcb1615527bec82d7f9abbc1851334f4d974e27c84afc07cbf8b5053897ec9d4" HandleID="k8s-pod-network.dcb1615527bec82d7f9abbc1851334f4d974e27c84afc07cbf8b5053897ec9d4" Workload="srv--57yav.gb1.brightbox.com-k8s-csi--node--driver--4f5m5-eth0" Apr 16 00:50:37.499464 containerd[1500]: 2026-04-16 00:50:36.597 [INFO][4075] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="dcb1615527bec82d7f9abbc1851334f4d974e27c84afc07cbf8b5053897ec9d4" HandleID="k8s-pod-network.dcb1615527bec82d7f9abbc1851334f4d974e27c84afc07cbf8b5053897ec9d4" Workload="srv--57yav.gb1.brightbox.com-k8s-csi--node--driver--4f5m5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ff50), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-57yav.gb1.brightbox.com", "pod":"csi-node-driver-4f5m5", "timestamp":"2026-04-16 00:50:36.565820521 +0000 UTC"}, Hostname:"srv-57yav.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", 
Namespace:(*v1.Namespace)(0xc000208160)} Apr 16 00:50:37.499464 containerd[1500]: 2026-04-16 00:50:36.598 [INFO][4075] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 00:50:37.499464 containerd[1500]: 2026-04-16 00:50:37.147 [INFO][4075] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 00:50:37.499464 containerd[1500]: 2026-04-16 00:50:37.148 [INFO][4075] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-57yav.gb1.brightbox.com' Apr 16 00:50:37.499464 containerd[1500]: 2026-04-16 00:50:37.179 [INFO][4075] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.dcb1615527bec82d7f9abbc1851334f4d974e27c84afc07cbf8b5053897ec9d4" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.499464 containerd[1500]: 2026-04-16 00:50:37.216 [INFO][4075] ipam/ipam.go 409: Looking up existing affinities for host host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.499464 containerd[1500]: 2026-04-16 00:50:37.247 [INFO][4075] ipam/ipam.go 526: Trying affinity for 192.168.99.192/26 host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.499464 containerd[1500]: 2026-04-16 00:50:37.254 [INFO][4075] ipam/ipam.go 160: Attempting to load block cidr=192.168.99.192/26 host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.499464 containerd[1500]: 2026-04-16 00:50:37.268 [INFO][4075] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.499464 containerd[1500]: 2026-04-16 00:50:37.270 [INFO][4075] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.dcb1615527bec82d7f9abbc1851334f4d974e27c84afc07cbf8b5053897ec9d4" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.499464 containerd[1500]: 2026-04-16 00:50:37.287 [INFO][4075] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.dcb1615527bec82d7f9abbc1851334f4d974e27c84afc07cbf8b5053897ec9d4 Apr 16 00:50:37.499464 containerd[1500]: 
2026-04-16 00:50:37.320 [INFO][4075] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.dcb1615527bec82d7f9abbc1851334f4d974e27c84afc07cbf8b5053897ec9d4" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.499464 containerd[1500]: 2026-04-16 00:50:37.344 [INFO][4075] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.99.197/26] block=192.168.99.192/26 handle="k8s-pod-network.dcb1615527bec82d7f9abbc1851334f4d974e27c84afc07cbf8b5053897ec9d4" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.499464 containerd[1500]: 2026-04-16 00:50:37.347 [INFO][4075] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.99.197/26] handle="k8s-pod-network.dcb1615527bec82d7f9abbc1851334f4d974e27c84afc07cbf8b5053897ec9d4" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.499464 containerd[1500]: 2026-04-16 00:50:37.350 [INFO][4075] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 00:50:37.499464 containerd[1500]: 2026-04-16 00:50:37.351 [INFO][4075] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.99.197/26] IPv6=[] ContainerID="dcb1615527bec82d7f9abbc1851334f4d974e27c84afc07cbf8b5053897ec9d4" HandleID="k8s-pod-network.dcb1615527bec82d7f9abbc1851334f4d974e27c84afc07cbf8b5053897ec9d4" Workload="srv--57yav.gb1.brightbox.com-k8s-csi--node--driver--4f5m5-eth0" Apr 16 00:50:37.502936 containerd[1500]: 2026-04-16 00:50:37.370 [INFO][3973] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dcb1615527bec82d7f9abbc1851334f4d974e27c84afc07cbf8b5053897ec9d4" Namespace="calico-system" Pod="csi-node-driver-4f5m5" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-csi--node--driver--4f5m5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--57yav.gb1.brightbox.com-k8s-csi--node--driver--4f5m5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", 
UID:"2c74cad1-a379-4d46-b91e-dca9240bc056", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 50, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-57yav.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-4f5m5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6f2a9f51a42", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 00:50:37.502936 containerd[1500]: 2026-04-16 00:50:37.370 [INFO][3973] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.197/32] ContainerID="dcb1615527bec82d7f9abbc1851334f4d974e27c84afc07cbf8b5053897ec9d4" Namespace="calico-system" Pod="csi-node-driver-4f5m5" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-csi--node--driver--4f5m5-eth0" Apr 16 00:50:37.502936 containerd[1500]: 2026-04-16 00:50:37.370 [INFO][3973] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f2a9f51a42 ContainerID="dcb1615527bec82d7f9abbc1851334f4d974e27c84afc07cbf8b5053897ec9d4" Namespace="calico-system" Pod="csi-node-driver-4f5m5" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-csi--node--driver--4f5m5-eth0" Apr 16 00:50:37.502936 
containerd[1500]: 2026-04-16 00:50:37.392 [INFO][3973] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dcb1615527bec82d7f9abbc1851334f4d974e27c84afc07cbf8b5053897ec9d4" Namespace="calico-system" Pod="csi-node-driver-4f5m5" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-csi--node--driver--4f5m5-eth0" Apr 16 00:50:37.502936 containerd[1500]: 2026-04-16 00:50:37.429 [INFO][3973] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dcb1615527bec82d7f9abbc1851334f4d974e27c84afc07cbf8b5053897ec9d4" Namespace="calico-system" Pod="csi-node-driver-4f5m5" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-csi--node--driver--4f5m5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--57yav.gb1.brightbox.com-k8s-csi--node--driver--4f5m5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2c74cad1-a379-4d46-b91e-dca9240bc056", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 50, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-57yav.gb1.brightbox.com", ContainerID:"dcb1615527bec82d7f9abbc1851334f4d974e27c84afc07cbf8b5053897ec9d4", Pod:"csi-node-driver-4f5m5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.197/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6f2a9f51a42", MAC:"5a:31:93:18:0e:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 00:50:37.502936 containerd[1500]: 2026-04-16 00:50:37.477 [INFO][3973] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dcb1615527bec82d7f9abbc1851334f4d974e27c84afc07cbf8b5053897ec9d4" Namespace="calico-system" Pod="csi-node-driver-4f5m5" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-csi--node--driver--4f5m5-eth0" Apr 16 00:50:37.511681 containerd[1500]: time="2026-04-16T00:50:37.511417140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-92s4s,Uid:fdf4c7a4-7fde-4e7a-a0f4-b19bb5b42f73,Namespace:kube-system,Attempt:0,} returns sandbox id \"54c48d5be54d8106d82f722db20e3d9ded30214fded48bf100a5a57a60f82910\"" Apr 16 00:50:37.520266 containerd[1500]: time="2026-04-16T00:50:37.520039558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 16 00:50:37.533579 containerd[1500]: time="2026-04-16T00:50:37.532994850Z" level=info msg="CreateContainer within sandbox \"54c48d5be54d8106d82f722db20e3d9ded30214fded48bf100a5a57a60f82910\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 00:50:37.566564 systemd-networkd[1433]: cali9529a8a0b86: Link UP Apr 16 00:50:37.571685 systemd-networkd[1433]: cali9529a8a0b86: Gained carrier Apr 16 00:50:37.632733 systemd[1]: Started cri-containerd-becdb12ff02f225f146061e28abbdb47d3b8c7b9e29c9a40c6bc9fab133eae9b.scope - libcontainer container becdb12ff02f225f146061e28abbdb47d3b8c7b9e29c9a40c6bc9fab133eae9b. 
Apr 16 00:50:37.634469 containerd[1500]: 2026-04-16 00:50:36.047 [ERROR][3980] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 00:50:37.634469 containerd[1500]: 2026-04-16 00:50:36.171 [INFO][3980] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--bqnrn-eth0 calico-apiserver-77f78848ff- calico-system b0b987cc-2500-4310-a6ca-9cb5017fbcca 875 0 2026-04-16 00:50:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77f78848ff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-57yav.gb1.brightbox.com calico-apiserver-77f78848ff-bqnrn eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali9529a8a0b86 [] [] }} ContainerID="c51f4bfbd60113fdb8065a1e1ce01306fe4257dd6b048cec39c9a32bfddbaeff" Namespace="calico-system" Pod="calico-apiserver-77f78848ff-bqnrn" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--bqnrn-" Apr 16 00:50:37.634469 containerd[1500]: 2026-04-16 00:50:36.171 [INFO][3980] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c51f4bfbd60113fdb8065a1e1ce01306fe4257dd6b048cec39c9a32bfddbaeff" Namespace="calico-system" Pod="calico-apiserver-77f78848ff-bqnrn" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--bqnrn-eth0" Apr 16 00:50:37.634469 containerd[1500]: 2026-04-16 00:50:36.604 [INFO][4094] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c51f4bfbd60113fdb8065a1e1ce01306fe4257dd6b048cec39c9a32bfddbaeff" HandleID="k8s-pod-network.c51f4bfbd60113fdb8065a1e1ce01306fe4257dd6b048cec39c9a32bfddbaeff" 
Workload="srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--bqnrn-eth0" Apr 16 00:50:37.634469 containerd[1500]: 2026-04-16 00:50:36.661 [INFO][4094] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c51f4bfbd60113fdb8065a1e1ce01306fe4257dd6b048cec39c9a32bfddbaeff" HandleID="k8s-pod-network.c51f4bfbd60113fdb8065a1e1ce01306fe4257dd6b048cec39c9a32bfddbaeff" Workload="srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--bqnrn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000366140), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-57yav.gb1.brightbox.com", "pod":"calico-apiserver-77f78848ff-bqnrn", "timestamp":"2026-04-16 00:50:36.604309198 +0000 UTC"}, Hostname:"srv-57yav.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000188000)} Apr 16 00:50:37.634469 containerd[1500]: 2026-04-16 00:50:36.661 [INFO][4094] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 00:50:37.634469 containerd[1500]: 2026-04-16 00:50:37.350 [INFO][4094] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 00:50:37.634469 containerd[1500]: 2026-04-16 00:50:37.355 [INFO][4094] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-57yav.gb1.brightbox.com' Apr 16 00:50:37.634469 containerd[1500]: 2026-04-16 00:50:37.367 [INFO][4094] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c51f4bfbd60113fdb8065a1e1ce01306fe4257dd6b048cec39c9a32bfddbaeff" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.634469 containerd[1500]: 2026-04-16 00:50:37.388 [INFO][4094] ipam/ipam.go 409: Looking up existing affinities for host host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.634469 containerd[1500]: 2026-04-16 00:50:37.429 [INFO][4094] ipam/ipam.go 526: Trying affinity for 192.168.99.192/26 host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.634469 containerd[1500]: 2026-04-16 00:50:37.443 [INFO][4094] ipam/ipam.go 160: Attempting to load block cidr=192.168.99.192/26 host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.634469 containerd[1500]: 2026-04-16 00:50:37.466 [INFO][4094] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.634469 containerd[1500]: 2026-04-16 00:50:37.466 [INFO][4094] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.c51f4bfbd60113fdb8065a1e1ce01306fe4257dd6b048cec39c9a32bfddbaeff" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.634469 containerd[1500]: 2026-04-16 00:50:37.477 [INFO][4094] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c51f4bfbd60113fdb8065a1e1ce01306fe4257dd6b048cec39c9a32bfddbaeff Apr 16 00:50:37.634469 containerd[1500]: 2026-04-16 00:50:37.505 [INFO][4094] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.c51f4bfbd60113fdb8065a1e1ce01306fe4257dd6b048cec39c9a32bfddbaeff" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.634469 containerd[1500]: 2026-04-16 00:50:37.531 [INFO][4094] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.99.198/26] block=192.168.99.192/26 handle="k8s-pod-network.c51f4bfbd60113fdb8065a1e1ce01306fe4257dd6b048cec39c9a32bfddbaeff" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.634469 containerd[1500]: 2026-04-16 00:50:37.532 [INFO][4094] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.99.198/26] handle="k8s-pod-network.c51f4bfbd60113fdb8065a1e1ce01306fe4257dd6b048cec39c9a32bfddbaeff" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.634469 containerd[1500]: 2026-04-16 00:50:37.535 [INFO][4094] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 00:50:37.634469 containerd[1500]: 2026-04-16 00:50:37.535 [INFO][4094] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.99.198/26] IPv6=[] ContainerID="c51f4bfbd60113fdb8065a1e1ce01306fe4257dd6b048cec39c9a32bfddbaeff" HandleID="k8s-pod-network.c51f4bfbd60113fdb8065a1e1ce01306fe4257dd6b048cec39c9a32bfddbaeff" Workload="srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--bqnrn-eth0" Apr 16 00:50:37.638475 containerd[1500]: 2026-04-16 00:50:37.547 [INFO][3980] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c51f4bfbd60113fdb8065a1e1ce01306fe4257dd6b048cec39c9a32bfddbaeff" Namespace="calico-system" Pod="calico-apiserver-77f78848ff-bqnrn" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--bqnrn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--bqnrn-eth0", GenerateName:"calico-apiserver-77f78848ff-", Namespace:"calico-system", SelfLink:"", UID:"b0b987cc-2500-4310-a6ca-9cb5017fbcca", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 50, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77f78848ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-57yav.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-77f78848ff-bqnrn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9529a8a0b86", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 00:50:37.638475 containerd[1500]: 2026-04-16 00:50:37.547 [INFO][3980] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.198/32] ContainerID="c51f4bfbd60113fdb8065a1e1ce01306fe4257dd6b048cec39c9a32bfddbaeff" Namespace="calico-system" Pod="calico-apiserver-77f78848ff-bqnrn" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--bqnrn-eth0" Apr 16 00:50:37.638475 containerd[1500]: 2026-04-16 00:50:37.547 [INFO][3980] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9529a8a0b86 ContainerID="c51f4bfbd60113fdb8065a1e1ce01306fe4257dd6b048cec39c9a32bfddbaeff" Namespace="calico-system" Pod="calico-apiserver-77f78848ff-bqnrn" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--bqnrn-eth0" Apr 16 00:50:37.638475 containerd[1500]: 2026-04-16 00:50:37.585 [INFO][3980] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c51f4bfbd60113fdb8065a1e1ce01306fe4257dd6b048cec39c9a32bfddbaeff" Namespace="calico-system" 
Pod="calico-apiserver-77f78848ff-bqnrn" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--bqnrn-eth0" Apr 16 00:50:37.638475 containerd[1500]: 2026-04-16 00:50:37.592 [INFO][3980] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c51f4bfbd60113fdb8065a1e1ce01306fe4257dd6b048cec39c9a32bfddbaeff" Namespace="calico-system" Pod="calico-apiserver-77f78848ff-bqnrn" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--bqnrn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--bqnrn-eth0", GenerateName:"calico-apiserver-77f78848ff-", Namespace:"calico-system", SelfLink:"", UID:"b0b987cc-2500-4310-a6ca-9cb5017fbcca", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 50, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77f78848ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-57yav.gb1.brightbox.com", ContainerID:"c51f4bfbd60113fdb8065a1e1ce01306fe4257dd6b048cec39c9a32bfddbaeff", Pod:"calico-apiserver-77f78848ff-bqnrn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9529a8a0b86", 
MAC:"8e:90:e9:57:a7:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 00:50:37.638475 containerd[1500]: 2026-04-16 00:50:37.620 [INFO][3980] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c51f4bfbd60113fdb8065a1e1ce01306fe4257dd6b048cec39c9a32bfddbaeff" Namespace="calico-system" Pod="calico-apiserver-77f78848ff-bqnrn" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-calico--apiserver--77f78848ff--bqnrn-eth0" Apr 16 00:50:37.726200 containerd[1500]: time="2026-04-16T00:50:37.724037345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 00:50:37.726200 containerd[1500]: time="2026-04-16T00:50:37.724194217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 00:50:37.726200 containerd[1500]: time="2026-04-16T00:50:37.725253221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:50:37.730698 containerd[1500]: time="2026-04-16T00:50:37.729230835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:50:37.783372 systemd-networkd[1433]: cali0e95c2e52a1: Link UP Apr 16 00:50:37.791421 systemd-networkd[1433]: cali0e95c2e52a1: Gained carrier Apr 16 00:50:37.844635 containerd[1500]: 2026-04-16 00:50:36.120 [ERROR][3995] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 00:50:37.844635 containerd[1500]: 2026-04-16 00:50:36.217 [INFO][3995] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--57yav.gb1.brightbox.com-k8s-goldmane--cccfbd5cf--tzjcc-eth0 goldmane-cccfbd5cf- calico-system bf0b134d-2aea-439f-880e-27db3db5afb6 873 0 2026-04-16 00:50:04 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-57yav.gb1.brightbox.com goldmane-cccfbd5cf-tzjcc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0e95c2e52a1 [] [] }} ContainerID="b8be898d32f3a156124a8728d73dc353cf0164c7f3a550de306e3b5e46ed5a5a" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tzjcc" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-goldmane--cccfbd5cf--tzjcc-" Apr 16 00:50:37.844635 containerd[1500]: 2026-04-16 00:50:36.218 [INFO][3995] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b8be898d32f3a156124a8728d73dc353cf0164c7f3a550de306e3b5e46ed5a5a" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tzjcc" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-goldmane--cccfbd5cf--tzjcc-eth0" Apr 16 00:50:37.844635 containerd[1500]: 2026-04-16 00:50:36.616 [INFO][4112] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b8be898d32f3a156124a8728d73dc353cf0164c7f3a550de306e3b5e46ed5a5a" 
HandleID="k8s-pod-network.b8be898d32f3a156124a8728d73dc353cf0164c7f3a550de306e3b5e46ed5a5a" Workload="srv--57yav.gb1.brightbox.com-k8s-goldmane--cccfbd5cf--tzjcc-eth0" Apr 16 00:50:37.844635 containerd[1500]: 2026-04-16 00:50:36.687 [INFO][4112] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b8be898d32f3a156124a8728d73dc353cf0164c7f3a550de306e3b5e46ed5a5a" HandleID="k8s-pod-network.b8be898d32f3a156124a8728d73dc353cf0164c7f3a550de306e3b5e46ed5a5a" Workload="srv--57yav.gb1.brightbox.com-k8s-goldmane--cccfbd5cf--tzjcc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005e0350), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-57yav.gb1.brightbox.com", "pod":"goldmane-cccfbd5cf-tzjcc", "timestamp":"2026-04-16 00:50:36.61657028 +0000 UTC"}, Hostname:"srv-57yav.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003e3760)} Apr 16 00:50:37.844635 containerd[1500]: 2026-04-16 00:50:36.687 [INFO][4112] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 00:50:37.844635 containerd[1500]: 2026-04-16 00:50:37.535 [INFO][4112] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 00:50:37.844635 containerd[1500]: 2026-04-16 00:50:37.535 [INFO][4112] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-57yav.gb1.brightbox.com' Apr 16 00:50:37.844635 containerd[1500]: 2026-04-16 00:50:37.552 [INFO][4112] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b8be898d32f3a156124a8728d73dc353cf0164c7f3a550de306e3b5e46ed5a5a" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.844635 containerd[1500]: 2026-04-16 00:50:37.605 [INFO][4112] ipam/ipam.go 409: Looking up existing affinities for host host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.844635 containerd[1500]: 2026-04-16 00:50:37.650 [INFO][4112] ipam/ipam.go 526: Trying affinity for 192.168.99.192/26 host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.844635 containerd[1500]: 2026-04-16 00:50:37.670 [INFO][4112] ipam/ipam.go 160: Attempting to load block cidr=192.168.99.192/26 host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.844635 containerd[1500]: 2026-04-16 00:50:37.684 [INFO][4112] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.844635 containerd[1500]: 2026-04-16 00:50:37.684 [INFO][4112] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.b8be898d32f3a156124a8728d73dc353cf0164c7f3a550de306e3b5e46ed5a5a" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.844635 containerd[1500]: 2026-04-16 00:50:37.690 [INFO][4112] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b8be898d32f3a156124a8728d73dc353cf0164c7f3a550de306e3b5e46ed5a5a Apr 16 00:50:37.844635 containerd[1500]: 2026-04-16 00:50:37.715 [INFO][4112] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.b8be898d32f3a156124a8728d73dc353cf0164c7f3a550de306e3b5e46ed5a5a" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.844635 containerd[1500]: 2026-04-16 00:50:37.729 [INFO][4112] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.99.199/26] block=192.168.99.192/26 handle="k8s-pod-network.b8be898d32f3a156124a8728d73dc353cf0164c7f3a550de306e3b5e46ed5a5a" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.844635 containerd[1500]: 2026-04-16 00:50:37.729 [INFO][4112] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.99.199/26] handle="k8s-pod-network.b8be898d32f3a156124a8728d73dc353cf0164c7f3a550de306e3b5e46ed5a5a" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:37.844635 containerd[1500]: 2026-04-16 00:50:37.729 [INFO][4112] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 00:50:37.844635 containerd[1500]: 2026-04-16 00:50:37.729 [INFO][4112] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.99.199/26] IPv6=[] ContainerID="b8be898d32f3a156124a8728d73dc353cf0164c7f3a550de306e3b5e46ed5a5a" HandleID="k8s-pod-network.b8be898d32f3a156124a8728d73dc353cf0164c7f3a550de306e3b5e46ed5a5a" Workload="srv--57yav.gb1.brightbox.com-k8s-goldmane--cccfbd5cf--tzjcc-eth0" Apr 16 00:50:37.847278 containerd[1500]: 2026-04-16 00:50:37.749 [INFO][3995] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b8be898d32f3a156124a8728d73dc353cf0164c7f3a550de306e3b5e46ed5a5a" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tzjcc" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-goldmane--cccfbd5cf--tzjcc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--57yav.gb1.brightbox.com-k8s-goldmane--cccfbd5cf--tzjcc-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"bf0b134d-2aea-439f-880e-27db3db5afb6", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 50, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-57yav.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-cccfbd5cf-tzjcc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.99.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0e95c2e52a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 00:50:37.847278 containerd[1500]: 2026-04-16 00:50:37.749 [INFO][3995] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.199/32] ContainerID="b8be898d32f3a156124a8728d73dc353cf0164c7f3a550de306e3b5e46ed5a5a" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tzjcc" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-goldmane--cccfbd5cf--tzjcc-eth0" Apr 16 00:50:37.847278 containerd[1500]: 2026-04-16 00:50:37.749 [INFO][3995] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e95c2e52a1 ContainerID="b8be898d32f3a156124a8728d73dc353cf0164c7f3a550de306e3b5e46ed5a5a" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tzjcc" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-goldmane--cccfbd5cf--tzjcc-eth0" Apr 16 00:50:37.847278 containerd[1500]: 2026-04-16 00:50:37.790 [INFO][3995] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b8be898d32f3a156124a8728d73dc353cf0164c7f3a550de306e3b5e46ed5a5a" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tzjcc" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-goldmane--cccfbd5cf--tzjcc-eth0" Apr 16 00:50:37.847278 containerd[1500]: 2026-04-16 00:50:37.792 [INFO][3995] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b8be898d32f3a156124a8728d73dc353cf0164c7f3a550de306e3b5e46ed5a5a" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tzjcc" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-goldmane--cccfbd5cf--tzjcc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--57yav.gb1.brightbox.com-k8s-goldmane--cccfbd5cf--tzjcc-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"bf0b134d-2aea-439f-880e-27db3db5afb6", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 50, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-57yav.gb1.brightbox.com", ContainerID:"b8be898d32f3a156124a8728d73dc353cf0164c7f3a550de306e3b5e46ed5a5a", Pod:"goldmane-cccfbd5cf-tzjcc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.99.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0e95c2e52a1", MAC:"06:9c:aa:dd:9f:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 00:50:37.847278 containerd[1500]: 2026-04-16 00:50:37.815 [INFO][3995] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="b8be898d32f3a156124a8728d73dc353cf0164c7f3a550de306e3b5e46ed5a5a" Namespace="calico-system" Pod="goldmane-cccfbd5cf-tzjcc" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-goldmane--cccfbd5cf--tzjcc-eth0" Apr 16 00:50:37.913268 containerd[1500]: time="2026-04-16T00:50:37.911409856Z" level=info msg="CreateContainer within sandbox \"54c48d5be54d8106d82f722db20e3d9ded30214fded48bf100a5a57a60f82910\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0e3ef2a59a6a398d21b18b31998cfbd1d0f5ed2e6a784c499ba7bdc468e04492\"" Apr 16 00:50:37.913737 containerd[1500]: time="2026-04-16T00:50:37.913617900Z" level=info msg="StartContainer for \"0e3ef2a59a6a398d21b18b31998cfbd1d0f5ed2e6a784c499ba7bdc468e04492\"" Apr 16 00:50:37.927782 containerd[1500]: time="2026-04-16T00:50:37.920099599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 00:50:37.927782 containerd[1500]: time="2026-04-16T00:50:37.920223301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 00:50:37.927782 containerd[1500]: time="2026-04-16T00:50:37.920249600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:50:37.927782 containerd[1500]: time="2026-04-16T00:50:37.920409507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:50:37.957277 containerd[1500]: time="2026-04-16T00:50:37.956203509Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 00:50:37.957277 containerd[1500]: time="2026-04-16T00:50:37.956312346Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 00:50:37.957277 containerd[1500]: time="2026-04-16T00:50:37.956335712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:50:37.957277 containerd[1500]: time="2026-04-16T00:50:37.956468923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:50:37.963060 containerd[1500]: time="2026-04-16T00:50:37.961075258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77f78848ff-qvl6m,Uid:485d8d0d-2cc8-413a-863b-a0c37bab4a01,Namespace:calico-system,Attempt:0,} returns sandbox id \"becdb12ff02f225f146061e28abbdb47d3b8c7b9e29c9a40c6bc9fab133eae9b\"" Apr 16 00:50:37.994969 systemd[1]: Started cri-containerd-456054bd4f59d103cbbb60d3dc4ac6f56a79c4f4505e03dc00862ff1d0f99a11.scope - libcontainer container 456054bd4f59d103cbbb60d3dc4ac6f56a79c4f4505e03dc00862ff1d0f99a11. Apr 16 00:50:38.038140 systemd[1]: Started cri-containerd-dcb1615527bec82d7f9abbc1851334f4d974e27c84afc07cbf8b5053897ec9d4.scope - libcontainer container dcb1615527bec82d7f9abbc1851334f4d974e27c84afc07cbf8b5053897ec9d4. 
Apr 16 00:50:38.044341 systemd-networkd[1433]: cali015187adcb2: Gained IPv6LL Apr 16 00:50:38.091307 kubelet[2699]: I0416 00:50:38.091260 2699 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a75fef46-43fd-43b1-8d18-836eda8a805f" path="/var/lib/kubelet/pods/a75fef46-43fd-43b1-8d18-836eda8a805f/volumes" Apr 16 00:50:38.187500 containerd[1500]: time="2026-04-16T00:50:38.187420574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-qbpkz,Uid:2042a77a-7eb0-4e4b-b526-42c43f2e5758,Namespace:kube-system,Attempt:0,} returns sandbox id \"456054bd4f59d103cbbb60d3dc4ac6f56a79c4f4505e03dc00862ff1d0f99a11\"" Apr 16 00:50:38.199197 systemd[1]: Started cri-containerd-c51f4bfbd60113fdb8065a1e1ce01306fe4257dd6b048cec39c9a32bfddbaeff.scope - libcontainer container c51f4bfbd60113fdb8065a1e1ce01306fe4257dd6b048cec39c9a32bfddbaeff. Apr 16 00:50:38.233239 containerd[1500]: time="2026-04-16T00:50:38.218798752Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 16 00:50:38.233239 containerd[1500]: time="2026-04-16T00:50:38.219755298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 16 00:50:38.233239 containerd[1500]: time="2026-04-16T00:50:38.219781506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:50:38.233239 containerd[1500]: time="2026-04-16T00:50:38.221964057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 16 00:50:38.260954 containerd[1500]: time="2026-04-16T00:50:38.260032333Z" level=info msg="CreateContainer within sandbox \"456054bd4f59d103cbbb60d3dc4ac6f56a79c4f4505e03dc00862ff1d0f99a11\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 00:50:38.279231 systemd[1]: Started cri-containerd-0e3ef2a59a6a398d21b18b31998cfbd1d0f5ed2e6a784c499ba7bdc468e04492.scope - libcontainer container 0e3ef2a59a6a398d21b18b31998cfbd1d0f5ed2e6a784c499ba7bdc468e04492. Apr 16 00:50:38.336277 containerd[1500]: time="2026-04-16T00:50:38.336226289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4f5m5,Uid:2c74cad1-a379-4d46-b91e-dca9240bc056,Namespace:calico-system,Attempt:0,} returns sandbox id \"dcb1615527bec82d7f9abbc1851334f4d974e27c84afc07cbf8b5053897ec9d4\"" Apr 16 00:50:38.370773 containerd[1500]: time="2026-04-16T00:50:38.370002348Z" level=info msg="CreateContainer within sandbox \"456054bd4f59d103cbbb60d3dc4ac6f56a79c4f4505e03dc00862ff1d0f99a11\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"64ef12294efd58652d5d66d250de7560499a30bb372b8d5c6bd2feb8131d9a60\"" Apr 16 00:50:38.374146 systemd[1]: Started cri-containerd-b8be898d32f3a156124a8728d73dc353cf0164c7f3a550de306e3b5e46ed5a5a.scope - libcontainer container b8be898d32f3a156124a8728d73dc353cf0164c7f3a550de306e3b5e46ed5a5a. Apr 16 00:50:38.397756 containerd[1500]: time="2026-04-16T00:50:38.397099326Z" level=info msg="StartContainer for \"64ef12294efd58652d5d66d250de7560499a30bb372b8d5c6bd2feb8131d9a60\"" Apr 16 00:50:38.410918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1190181825.mount: Deactivated successfully. 
Apr 16 00:50:38.425118 systemd-networkd[1433]: cali88b92b6cc45: Gained IPv6LL Apr 16 00:50:38.493756 containerd[1500]: time="2026-04-16T00:50:38.493195400Z" level=info msg="StartContainer for \"0e3ef2a59a6a398d21b18b31998cfbd1d0f5ed2e6a784c499ba7bdc468e04492\" returns successfully" Apr 16 00:50:38.494856 systemd-networkd[1433]: calic263d8ded77: Link UP Apr 16 00:50:38.499187 systemd-networkd[1433]: calic263d8ded77: Gained carrier Apr 16 00:50:38.542504 containerd[1500]: 2026-04-16 00:50:37.734 [ERROR][4300] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 00:50:38.542504 containerd[1500]: 2026-04-16 00:50:37.874 [INFO][4300] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--57yav.gb1.brightbox.com-k8s-whisker--8587994885--rs2d6-eth0 whisker-8587994885- calico-system d9796998-11a4-4044-ac5e-25ca08f8c089 931 0 2026-04-16 00:50:36 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8587994885 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-57yav.gb1.brightbox.com whisker-8587994885-rs2d6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic263d8ded77 [] [] }} ContainerID="c90af883c4fe07127bbecc51cc3ee7972a48397760e3b7009b6dab1c8c0e8d21" Namespace="calico-system" Pod="whisker-8587994885-rs2d6" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-whisker--8587994885--rs2d6-" Apr 16 00:50:38.542504 containerd[1500]: 2026-04-16 00:50:37.874 [INFO][4300] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c90af883c4fe07127bbecc51cc3ee7972a48397760e3b7009b6dab1c8c0e8d21" Namespace="calico-system" Pod="whisker-8587994885-rs2d6" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-whisker--8587994885--rs2d6-eth0" Apr 16 
00:50:38.542504 containerd[1500]: 2026-04-16 00:50:38.263 [INFO][4438] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c90af883c4fe07127bbecc51cc3ee7972a48397760e3b7009b6dab1c8c0e8d21" HandleID="k8s-pod-network.c90af883c4fe07127bbecc51cc3ee7972a48397760e3b7009b6dab1c8c0e8d21" Workload="srv--57yav.gb1.brightbox.com-k8s-whisker--8587994885--rs2d6-eth0" Apr 16 00:50:38.542504 containerd[1500]: 2026-04-16 00:50:38.319 [INFO][4438] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c90af883c4fe07127bbecc51cc3ee7972a48397760e3b7009b6dab1c8c0e8d21" HandleID="k8s-pod-network.c90af883c4fe07127bbecc51cc3ee7972a48397760e3b7009b6dab1c8c0e8d21" Workload="srv--57yav.gb1.brightbox.com-k8s-whisker--8587994885--rs2d6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fe60), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-57yav.gb1.brightbox.com", "pod":"whisker-8587994885-rs2d6", "timestamp":"2026-04-16 00:50:38.263301048 +0000 UTC"}, Hostname:"srv-57yav.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00030a000)} Apr 16 00:50:38.542504 containerd[1500]: 2026-04-16 00:50:38.319 [INFO][4438] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 00:50:38.542504 containerd[1500]: 2026-04-16 00:50:38.320 [INFO][4438] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 00:50:38.542504 containerd[1500]: 2026-04-16 00:50:38.320 [INFO][4438] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-57yav.gb1.brightbox.com' Apr 16 00:50:38.542504 containerd[1500]: 2026-04-16 00:50:38.330 [INFO][4438] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c90af883c4fe07127bbecc51cc3ee7972a48397760e3b7009b6dab1c8c0e8d21" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:38.542504 containerd[1500]: 2026-04-16 00:50:38.352 [INFO][4438] ipam/ipam.go 409: Looking up existing affinities for host host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:38.542504 containerd[1500]: 2026-04-16 00:50:38.382 [INFO][4438] ipam/ipam.go 526: Trying affinity for 192.168.99.192/26 host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:38.542504 containerd[1500]: 2026-04-16 00:50:38.389 [INFO][4438] ipam/ipam.go 160: Attempting to load block cidr=192.168.99.192/26 host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:38.542504 containerd[1500]: 2026-04-16 00:50:38.396 [INFO][4438] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:38.542504 containerd[1500]: 2026-04-16 00:50:38.397 [INFO][4438] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.c90af883c4fe07127bbecc51cc3ee7972a48397760e3b7009b6dab1c8c0e8d21" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:38.542504 containerd[1500]: 2026-04-16 00:50:38.408 [INFO][4438] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c90af883c4fe07127bbecc51cc3ee7972a48397760e3b7009b6dab1c8c0e8d21 Apr 16 00:50:38.542504 containerd[1500]: 2026-04-16 00:50:38.422 [INFO][4438] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.c90af883c4fe07127bbecc51cc3ee7972a48397760e3b7009b6dab1c8c0e8d21" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:38.542504 containerd[1500]: 2026-04-16 00:50:38.444 [INFO][4438] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.99.200/26] block=192.168.99.192/26 handle="k8s-pod-network.c90af883c4fe07127bbecc51cc3ee7972a48397760e3b7009b6dab1c8c0e8d21" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:38.542504 containerd[1500]: 2026-04-16 00:50:38.444 [INFO][4438] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.99.200/26] handle="k8s-pod-network.c90af883c4fe07127bbecc51cc3ee7972a48397760e3b7009b6dab1c8c0e8d21" host="srv-57yav.gb1.brightbox.com" Apr 16 00:50:38.542504 containerd[1500]: 2026-04-16 00:50:38.446 [INFO][4438] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 00:50:38.542504 containerd[1500]: 2026-04-16 00:50:38.446 [INFO][4438] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.99.200/26] IPv6=[] ContainerID="c90af883c4fe07127bbecc51cc3ee7972a48397760e3b7009b6dab1c8c0e8d21" HandleID="k8s-pod-network.c90af883c4fe07127bbecc51cc3ee7972a48397760e3b7009b6dab1c8c0e8d21" Workload="srv--57yav.gb1.brightbox.com-k8s-whisker--8587994885--rs2d6-eth0" Apr 16 00:50:38.546551 containerd[1500]: 2026-04-16 00:50:38.468 [INFO][4300] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c90af883c4fe07127bbecc51cc3ee7972a48397760e3b7009b6dab1c8c0e8d21" Namespace="calico-system" Pod="whisker-8587994885-rs2d6" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-whisker--8587994885--rs2d6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--57yav.gb1.brightbox.com-k8s-whisker--8587994885--rs2d6-eth0", GenerateName:"whisker-8587994885-", Namespace:"calico-system", SelfLink:"", UID:"d9796998-11a4-4044-ac5e-25ca08f8c089", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 50, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8587994885", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-57yav.gb1.brightbox.com", ContainerID:"", Pod:"whisker-8587994885-rs2d6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.99.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic263d8ded77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 00:50:38.546551 containerd[1500]: 2026-04-16 00:50:38.468 [INFO][4300] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.200/32] ContainerID="c90af883c4fe07127bbecc51cc3ee7972a48397760e3b7009b6dab1c8c0e8d21" Namespace="calico-system" Pod="whisker-8587994885-rs2d6" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-whisker--8587994885--rs2d6-eth0" Apr 16 00:50:38.546551 containerd[1500]: 2026-04-16 00:50:38.468 [INFO][4300] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic263d8ded77 ContainerID="c90af883c4fe07127bbecc51cc3ee7972a48397760e3b7009b6dab1c8c0e8d21" Namespace="calico-system" Pod="whisker-8587994885-rs2d6" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-whisker--8587994885--rs2d6-eth0" Apr 16 00:50:38.546551 containerd[1500]: 2026-04-16 00:50:38.502 [INFO][4300] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c90af883c4fe07127bbecc51cc3ee7972a48397760e3b7009b6dab1c8c0e8d21" Namespace="calico-system" Pod="whisker-8587994885-rs2d6" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-whisker--8587994885--rs2d6-eth0" Apr 16 00:50:38.546551 containerd[1500]: 2026-04-16 00:50:38.506 [INFO][4300] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c90af883c4fe07127bbecc51cc3ee7972a48397760e3b7009b6dab1c8c0e8d21" Namespace="calico-system" Pod="whisker-8587994885-rs2d6" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-whisker--8587994885--rs2d6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--57yav.gb1.brightbox.com-k8s-whisker--8587994885--rs2d6-eth0", GenerateName:"whisker-8587994885-", Namespace:"calico-system", SelfLink:"", UID:"d9796998-11a4-4044-ac5e-25ca08f8c089", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 0, 50, 36, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8587994885", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-57yav.gb1.brightbox.com", ContainerID:"c90af883c4fe07127bbecc51cc3ee7972a48397760e3b7009b6dab1c8c0e8d21", Pod:"whisker-8587994885-rs2d6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.99.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic263d8ded77", MAC:"d6:7b:55:34:06:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 16 00:50:38.546551 containerd[1500]: 2026-04-16 00:50:38.532 [INFO][4300] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c90af883c4fe07127bbecc51cc3ee7972a48397760e3b7009b6dab1c8c0e8d21" Namespace="calico-system" Pod="whisker-8587994885-rs2d6" WorkloadEndpoint="srv--57yav.gb1.brightbox.com-k8s-whisker--8587994885--rs2d6-eth0"
Apr 16 00:50:38.552152 systemd-networkd[1433]: calif334cb0b1a0: Gained IPv6LL
Apr 16 00:50:38.611135 systemd[1]: Started cri-containerd-64ef12294efd58652d5d66d250de7560499a30bb372b8d5c6bd2feb8131d9a60.scope - libcontainer container 64ef12294efd58652d5d66d250de7560499a30bb372b8d5c6bd2feb8131d9a60.
Apr 16 00:50:38.617181 systemd-networkd[1433]: cali6f2a9f51a42: Gained IPv6LL
Apr 16 00:50:38.680295 systemd-networkd[1433]: cali391de30d6de: Gained IPv6LL
Apr 16 00:50:38.681302 containerd[1500]: time="2026-04-16T00:50:38.680428305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 16 00:50:38.681302 containerd[1500]: time="2026-04-16T00:50:38.680526565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 16 00:50:38.681302 containerd[1500]: time="2026-04-16T00:50:38.680634565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 00:50:38.681302 containerd[1500]: time="2026-04-16T00:50:38.680899543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 16 00:50:38.739277 containerd[1500]: time="2026-04-16T00:50:38.739121475Z" level=info msg="StartContainer for \"64ef12294efd58652d5d66d250de7560499a30bb372b8d5c6bd2feb8131d9a60\" returns successfully"
Apr 16 00:50:38.802682 containerd[1500]: time="2026-04-16T00:50:38.802499010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77f78848ff-bqnrn,Uid:b0b987cc-2500-4310-a6ca-9cb5017fbcca,Namespace:calico-system,Attempt:0,} returns sandbox id \"c51f4bfbd60113fdb8065a1e1ce01306fe4257dd6b048cec39c9a32bfddbaeff\""
Apr 16 00:50:38.803194 systemd[1]: Started cri-containerd-c90af883c4fe07127bbecc51cc3ee7972a48397760e3b7009b6dab1c8c0e8d21.scope - libcontainer container c90af883c4fe07127bbecc51cc3ee7972a48397760e3b7009b6dab1c8c0e8d21.
Apr 16 00:50:38.819535 kubelet[2699]: I0416 00:50:38.818240 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-92s4s" podStartSLOduration=48.818200474 podStartE2EDuration="48.818200474s" podCreationTimestamp="2026-04-16 00:49:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 00:50:38.816674434 +0000 UTC m=+54.930530704" watchObservedRunningTime="2026-04-16 00:50:38.818200474 +0000 UTC m=+54.932056733"
Apr 16 00:50:38.848037 containerd[1500]: time="2026-04-16T00:50:38.847530059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-tzjcc,Uid:bf0b134d-2aea-439f-880e-27db3db5afb6,Namespace:calico-system,Attempt:0,} returns sandbox id \"b8be898d32f3a156124a8728d73dc353cf0164c7f3a550de306e3b5e46ed5a5a\""
Apr 16 00:50:38.996882 containerd[1500]: time="2026-04-16T00:50:38.995763334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8587994885-rs2d6,Uid:d9796998-11a4-4044-ac5e-25ca08f8c089,Namespace:calico-system,Attempt:0,} returns sandbox id \"c90af883c4fe07127bbecc51cc3ee7972a48397760e3b7009b6dab1c8c0e8d21\""
Apr 16 00:50:39.116015 kernel: calico-node[4176]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Apr 16 00:50:39.321179 systemd-networkd[1433]: cali9529a8a0b86: Gained IPv6LL
Apr 16 00:50:39.705502 systemd-networkd[1433]: calic263d8ded77: Gained IPv6LL
Apr 16 00:50:39.706674 systemd-networkd[1433]: cali0e95c2e52a1: Gained IPv6LL
Apr 16 00:50:40.152148 kubelet[2699]: I0416 00:50:40.151585 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-qbpkz" podStartSLOduration=50.126547314 podStartE2EDuration="50.126547314s" podCreationTimestamp="2026-04-16 00:49:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 00:50:40.038253512 +0000 UTC m=+56.152109769" watchObservedRunningTime="2026-04-16 00:50:40.126547314 +0000 UTC m=+56.240403569"
Apr 16 00:50:40.446191 systemd-networkd[1433]: vxlan.calico: Link UP
Apr 16 00:50:40.446202 systemd-networkd[1433]: vxlan.calico: Gained carrier
Apr 16 00:50:42.072255 systemd-networkd[1433]: vxlan.calico: Gained IPv6LL
Apr 16 00:50:42.846052 containerd[1500]: time="2026-04-16T00:50:42.825492402Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348"
Apr 16 00:50:42.855559 containerd[1500]: time="2026-04-16T00:50:42.855468562Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:50:42.870679 containerd[1500]: time="2026-04-16T00:50:42.870617169Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:50:42.874478 containerd[1500]: time="2026-04-16T00:50:42.874398325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:50:42.876119 containerd[1500]: time="2026-04-16T00:50:42.876066614Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 5.35585105s"
Apr 16 00:50:42.876237 containerd[1500]: time="2026-04-16T00:50:42.876133001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\""
Apr 16 00:50:42.881600 containerd[1500]: time="2026-04-16T00:50:42.881567599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Apr 16 00:50:42.936968 containerd[1500]: time="2026-04-16T00:50:42.936871490Z" level=info msg="CreateContainer within sandbox \"73761b4e0ded29acfbd8bfd4dbb26f45ec48c382d0b1de609e485d190e90c455\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Apr 16 00:50:42.980681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1834483224.mount: Deactivated successfully.
Apr 16 00:50:43.022105 containerd[1500]: time="2026-04-16T00:50:43.022055902Z" level=info msg="CreateContainer within sandbox \"73761b4e0ded29acfbd8bfd4dbb26f45ec48c382d0b1de609e485d190e90c455\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0580ded9fd5b4ff73de547566af4e2969c5b461fce8ea41137ec803e5a1b79e7\""
Apr 16 00:50:43.023332 containerd[1500]: time="2026-04-16T00:50:43.023264731Z" level=info msg="StartContainer for \"0580ded9fd5b4ff73de547566af4e2969c5b461fce8ea41137ec803e5a1b79e7\""
Apr 16 00:50:43.135144 systemd[1]: Started cri-containerd-0580ded9fd5b4ff73de547566af4e2969c5b461fce8ea41137ec803e5a1b79e7.scope - libcontainer container 0580ded9fd5b4ff73de547566af4e2969c5b461fce8ea41137ec803e5a1b79e7.
Apr 16 00:50:43.309393 containerd[1500]: time="2026-04-16T00:50:43.308764276Z" level=info msg="StartContainer for \"0580ded9fd5b4ff73de547566af4e2969c5b461fce8ea41137ec803e5a1b79e7\" returns successfully"
Apr 16 00:50:44.121563 kubelet[2699]: I0416 00:50:44.121490 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7bbddd6b8d-wmztx" podStartSLOduration=33.728406919 podStartE2EDuration="39.121472865s" podCreationTimestamp="2026-04-16 00:50:05 +0000 UTC" firstStartedPulling="2026-04-16 00:50:37.488152415 +0000 UTC m=+53.602008665" lastFinishedPulling="2026-04-16 00:50:42.881218365 +0000 UTC m=+58.995074611" observedRunningTime="2026-04-16 00:50:44.11712364 +0000 UTC m=+60.230979900" watchObservedRunningTime="2026-04-16 00:50:44.121472865 +0000 UTC m=+60.235329123"
Apr 16 00:50:44.146130 systemd[1]: run-containerd-runc-k8s.io-0580ded9fd5b4ff73de547566af4e2969c5b461fce8ea41137ec803e5a1b79e7-runc.P2cMrx.mount: Deactivated successfully.
Apr 16 00:50:47.143632 containerd[1500]: time="2026-04-16T00:50:47.143543824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:50:47.145509 containerd[1500]: time="2026-04-16T00:50:47.145334032Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780"
Apr 16 00:50:47.147029 containerd[1500]: time="2026-04-16T00:50:47.146732709Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:50:47.151640 containerd[1500]: time="2026-04-16T00:50:47.151573254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:50:47.154063 containerd[1500]: time="2026-04-16T00:50:47.153240553Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 4.271631176s"
Apr 16 00:50:47.154063 containerd[1500]: time="2026-04-16T00:50:47.153317224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Apr 16 00:50:47.156446 containerd[1500]: time="2026-04-16T00:50:47.156020447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\""
Apr 16 00:50:47.161839 containerd[1500]: time="2026-04-16T00:50:47.161792040Z" level=info msg="CreateContainer within sandbox \"becdb12ff02f225f146061e28abbdb47d3b8c7b9e29c9a40c6bc9fab133eae9b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Apr 16 00:50:47.185914 containerd[1500]: time="2026-04-16T00:50:47.185800266Z" level=info msg="CreateContainer within sandbox \"becdb12ff02f225f146061e28abbdb47d3b8c7b9e29c9a40c6bc9fab133eae9b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3b6fa4a24c246d7e7d42be361185664f428fb7fe3abc9beece1c57e7fbb63735\""
Apr 16 00:50:47.187766 containerd[1500]: time="2026-04-16T00:50:47.187643164Z" level=info msg="StartContainer for \"3b6fa4a24c246d7e7d42be361185664f428fb7fe3abc9beece1c57e7fbb63735\""
Apr 16 00:50:47.280983 systemd[1]: run-containerd-runc-k8s.io-3b6fa4a24c246d7e7d42be361185664f428fb7fe3abc9beece1c57e7fbb63735-runc.I8YFVS.mount: Deactivated successfully.
Apr 16 00:50:47.289366 systemd[1]: Started cri-containerd-3b6fa4a24c246d7e7d42be361185664f428fb7fe3abc9beece1c57e7fbb63735.scope - libcontainer container 3b6fa4a24c246d7e7d42be361185664f428fb7fe3abc9beece1c57e7fbb63735.
Apr 16 00:50:47.366706 containerd[1500]: time="2026-04-16T00:50:47.366590319Z" level=info msg="StartContainer for \"3b6fa4a24c246d7e7d42be361185664f428fb7fe3abc9beece1c57e7fbb63735\" returns successfully"
Apr 16 00:50:49.097327 kubelet[2699]: I0416 00:50:49.093716 2699 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 16 00:50:49.124866 containerd[1500]: time="2026-04-16T00:50:49.124806410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:50:49.127798 containerd[1500]: time="2026-04-16T00:50:49.127739914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502"
Apr 16 00:50:49.128959 containerd[1500]: time="2026-04-16T00:50:49.128869398Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:50:49.134998 containerd[1500]: time="2026-04-16T00:50:49.134880066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:50:49.146038 containerd[1500]: time="2026-04-16T00:50:49.145678691Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.989614829s"
Apr 16 00:50:49.146038 containerd[1500]: time="2026-04-16T00:50:49.145735809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\""
Apr 16 00:50:49.164079 containerd[1500]: time="2026-04-16T00:50:49.163525356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Apr 16 00:50:49.192572 containerd[1500]: time="2026-04-16T00:50:49.192199519Z" level=info msg="CreateContainer within sandbox \"dcb1615527bec82d7f9abbc1851334f4d974e27c84afc07cbf8b5053897ec9d4\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Apr 16 00:50:49.248229 containerd[1500]: time="2026-04-16T00:50:49.247476011Z" level=info msg="CreateContainer within sandbox \"dcb1615527bec82d7f9abbc1851334f4d974e27c84afc07cbf8b5053897ec9d4\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3e025943b5cd9e172c5ebb85c83db7e467cdd4b0d0e14693645ed2d637c369d0\""
Apr 16 00:50:49.250583 containerd[1500]: time="2026-04-16T00:50:49.250288673Z" level=info msg="StartContainer for \"3e025943b5cd9e172c5ebb85c83db7e467cdd4b0d0e14693645ed2d637c369d0\""
Apr 16 00:50:49.331382 systemd[1]: Started cri-containerd-3e025943b5cd9e172c5ebb85c83db7e467cdd4b0d0e14693645ed2d637c369d0.scope - libcontainer container 3e025943b5cd9e172c5ebb85c83db7e467cdd4b0d0e14693645ed2d637c369d0.
Apr 16 00:50:49.406799 containerd[1500]: time="2026-04-16T00:50:49.405809252Z" level=info msg="StartContainer for \"3e025943b5cd9e172c5ebb85c83db7e467cdd4b0d0e14693645ed2d637c369d0\" returns successfully"
Apr 16 00:50:49.542212 containerd[1500]: time="2026-04-16T00:50:49.542139951Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:50:49.543642 containerd[1500]: time="2026-04-16T00:50:49.543592802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Apr 16 00:50:49.553913 containerd[1500]: time="2026-04-16T00:50:49.552922883Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 389.342237ms"
Apr 16 00:50:49.553913 containerd[1500]: time="2026-04-16T00:50:49.553897123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Apr 16 00:50:49.557103 containerd[1500]: time="2026-04-16T00:50:49.555894565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\""
Apr 16 00:50:49.566978 containerd[1500]: time="2026-04-16T00:50:49.566893143Z" level=info msg="CreateContainer within sandbox \"c51f4bfbd60113fdb8065a1e1ce01306fe4257dd6b048cec39c9a32bfddbaeff\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Apr 16 00:50:49.589165 containerd[1500]: time="2026-04-16T00:50:49.589091864Z" level=info msg="CreateContainer within sandbox \"c51f4bfbd60113fdb8065a1e1ce01306fe4257dd6b048cec39c9a32bfddbaeff\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"970d2540d5380c1ab82e6f089303e046b4ec5e1ab8080c30a38abed191bc8e45\""
Apr 16 00:50:49.591770 containerd[1500]: time="2026-04-16T00:50:49.591721209Z" level=info msg="StartContainer for \"970d2540d5380c1ab82e6f089303e046b4ec5e1ab8080c30a38abed191bc8e45\""
Apr 16 00:50:49.684732 systemd[1]: Started cri-containerd-970d2540d5380c1ab82e6f089303e046b4ec5e1ab8080c30a38abed191bc8e45.scope - libcontainer container 970d2540d5380c1ab82e6f089303e046b4ec5e1ab8080c30a38abed191bc8e45.
Apr 16 00:50:49.812642 containerd[1500]: time="2026-04-16T00:50:49.811743633Z" level=info msg="StartContainer for \"970d2540d5380c1ab82e6f089303e046b4ec5e1ab8080c30a38abed191bc8e45\" returns successfully"
Apr 16 00:50:50.158637 kubelet[2699]: I0416 00:50:50.151217 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-77f78848ff-qvl6m" podStartSLOduration=36.968343165 podStartE2EDuration="46.151173389s" podCreationTimestamp="2026-04-16 00:50:04 +0000 UTC" firstStartedPulling="2026-04-16 00:50:37.972855954 +0000 UTC m=+54.086712198" lastFinishedPulling="2026-04-16 00:50:47.155686172 +0000 UTC m=+63.269542422" observedRunningTime="2026-04-16 00:50:48.105752098 +0000 UTC m=+64.219608367" watchObservedRunningTime="2026-04-16 00:50:50.151173389 +0000 UTC m=+66.265029679"
Apr 16 00:50:50.158637 kubelet[2699]: I0416 00:50:50.158256 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-77f78848ff-bqnrn" podStartSLOduration=35.414433141 podStartE2EDuration="46.157343719s" podCreationTimestamp="2026-04-16 00:50:04 +0000 UTC" firstStartedPulling="2026-04-16 00:50:38.812583721 +0000 UTC m=+54.926439964" lastFinishedPulling="2026-04-16 00:50:49.555494284 +0000 UTC m=+65.669350542" observedRunningTime="2026-04-16 00:50:50.148269124 +0000 UTC m=+66.262125390" watchObservedRunningTime="2026-04-16 00:50:50.157343719 +0000 UTC m=+66.271199975"
Apr 16 00:50:53.825779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2679442764.mount: Deactivated successfully.
Apr 16 00:50:54.713781 containerd[1500]: time="2026-04-16T00:50:54.713578422Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:50:54.716145 containerd[1500]: time="2026-04-16T00:50:54.715859048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386"
Apr 16 00:50:54.718046 containerd[1500]: time="2026-04-16T00:50:54.716791827Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:50:54.722067 containerd[1500]: time="2026-04-16T00:50:54.722029317Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:50:54.725301 containerd[1500]: time="2026-04-16T00:50:54.725243489Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 5.169306642s"
Apr 16 00:50:54.725396 containerd[1500]: time="2026-04-16T00:50:54.725308077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\""
Apr 16 00:50:54.782518 containerd[1500]: time="2026-04-16T00:50:54.780830268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\""
Apr 16 00:50:54.912142 containerd[1500]: time="2026-04-16T00:50:54.912069321Z" level=info msg="CreateContainer within sandbox \"b8be898d32f3a156124a8728d73dc353cf0164c7f3a550de306e3b5e46ed5a5a\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Apr 16 00:50:55.016203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1243685460.mount: Deactivated successfully.
Apr 16 00:50:55.075964 containerd[1500]: time="2026-04-16T00:50:55.075724241Z" level=info msg="CreateContainer within sandbox \"b8be898d32f3a156124a8728d73dc353cf0164c7f3a550de306e3b5e46ed5a5a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"abfcdbd5107d2850d5eb14a41256e6eb53d22e858930bcfe7e1a83df581484bf\""
Apr 16 00:50:55.080300 containerd[1500]: time="2026-04-16T00:50:55.079449479Z" level=info msg="StartContainer for \"abfcdbd5107d2850d5eb14a41256e6eb53d22e858930bcfe7e1a83df581484bf\""
Apr 16 00:50:55.232173 systemd[1]: run-containerd-runc-k8s.io-0580ded9fd5b4ff73de547566af4e2969c5b461fce8ea41137ec803e5a1b79e7-runc.5mVw3J.mount: Deactivated successfully.
Apr 16 00:50:55.245162 systemd[1]: Started cri-containerd-abfcdbd5107d2850d5eb14a41256e6eb53d22e858930bcfe7e1a83df581484bf.scope - libcontainer container abfcdbd5107d2850d5eb14a41256e6eb53d22e858930bcfe7e1a83df581484bf.
Apr 16 00:50:55.386711 containerd[1500]: time="2026-04-16T00:50:55.386532128Z" level=info msg="StartContainer for \"abfcdbd5107d2850d5eb14a41256e6eb53d22e858930bcfe7e1a83df581484bf\" returns successfully"
Apr 16 00:50:55.773415 kubelet[2699]: I0416 00:50:55.772753 2699 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 16 00:50:56.238557 kubelet[2699]: I0416 00:50:56.231071 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-tzjcc" podStartSLOduration=36.301564318 podStartE2EDuration="52.23089261s" podCreationTimestamp="2026-04-16 00:50:04 +0000 UTC" firstStartedPulling="2026-04-16 00:50:38.851225698 +0000 UTC m=+54.965081941" lastFinishedPulling="2026-04-16 00:50:54.780553972 +0000 UTC m=+70.894410233" observedRunningTime="2026-04-16 00:50:56.228758047 +0000 UTC m=+72.342614306" watchObservedRunningTime="2026-04-16 00:50:56.23089261 +0000 UTC m=+72.344748886"
Apr 16 00:50:56.258124 systemd[1]: run-containerd-runc-k8s.io-abfcdbd5107d2850d5eb14a41256e6eb53d22e858930bcfe7e1a83df581484bf-runc.vB0b5A.mount: Deactivated successfully.
Apr 16 00:50:57.171275 containerd[1500]: time="2026-04-16T00:50:57.170112256Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:50:57.171989 containerd[1500]: time="2026-04-16T00:50:57.171904354Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889"
Apr 16 00:50:57.173047 containerd[1500]: time="2026-04-16T00:50:57.173005198Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:50:57.176386 containerd[1500]: time="2026-04-16T00:50:57.176355597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:50:57.177760 containerd[1500]: time="2026-04-16T00:50:57.177718103Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.39629994s"
Apr 16 00:50:57.178356 containerd[1500]: time="2026-04-16T00:50:57.177762178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\""
Apr 16 00:50:57.197637 containerd[1500]: time="2026-04-16T00:50:57.197578113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Apr 16 00:50:57.212679 containerd[1500]: time="2026-04-16T00:50:57.212498312Z" level=info msg="CreateContainer within sandbox \"c90af883c4fe07127bbecc51cc3ee7972a48397760e3b7009b6dab1c8c0e8d21\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Apr 16 00:50:57.384015 containerd[1500]: time="2026-04-16T00:50:57.383919809Z" level=info msg="CreateContainer within sandbox \"c90af883c4fe07127bbecc51cc3ee7972a48397760e3b7009b6dab1c8c0e8d21\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"bed16988737e6a5e6aabd8ca7dbe6b059b9b79817b5129c1b412f20f18ef4af0\""
Apr 16 00:50:57.394546 containerd[1500]: time="2026-04-16T00:50:57.393331363Z" level=info msg="StartContainer for \"bed16988737e6a5e6aabd8ca7dbe6b059b9b79817b5129c1b412f20f18ef4af0\""
Apr 16 00:50:57.452122 systemd[1]: Started cri-containerd-bed16988737e6a5e6aabd8ca7dbe6b059b9b79817b5129c1b412f20f18ef4af0.scope - libcontainer container bed16988737e6a5e6aabd8ca7dbe6b059b9b79817b5129c1b412f20f18ef4af0.
Apr 16 00:50:57.575290 containerd[1500]: time="2026-04-16T00:50:57.575080639Z" level=info msg="StartContainer for \"bed16988737e6a5e6aabd8ca7dbe6b059b9b79817b5129c1b412f20f18ef4af0\" returns successfully"
Apr 16 00:50:59.451612 containerd[1500]: time="2026-04-16T00:50:59.451500352Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:50:59.453417 containerd[1500]: time="2026-04-16T00:50:59.453350796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317"
Apr 16 00:50:59.454681 containerd[1500]: time="2026-04-16T00:50:59.454622295Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:50:59.459291 containerd[1500]: time="2026-04-16T00:50:59.459187169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:50:59.460784 containerd[1500]: time="2026-04-16T00:50:59.460642154Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.262991145s"
Apr 16 00:50:59.460784 containerd[1500]: time="2026-04-16T00:50:59.460693901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\""
Apr 16 00:50:59.464364 containerd[1500]: time="2026-04-16T00:50:59.463883749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\""
Apr 16 00:50:59.471687 containerd[1500]: time="2026-04-16T00:50:59.470864356Z" level=info msg="CreateContainer within sandbox \"dcb1615527bec82d7f9abbc1851334f4d974e27c84afc07cbf8b5053897ec9d4\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Apr 16 00:50:59.494347 containerd[1500]: time="2026-04-16T00:50:59.494291571Z" level=info msg="CreateContainer within sandbox \"dcb1615527bec82d7f9abbc1851334f4d974e27c84afc07cbf8b5053897ec9d4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"47bd3be05d427ba085f7158a4c943bbd16e6829e4aa359976b97c155683afd9a\""
Apr 16 00:50:59.496862 containerd[1500]: time="2026-04-16T00:50:59.496829756Z" level=info msg="StartContainer for \"47bd3be05d427ba085f7158a4c943bbd16e6829e4aa359976b97c155683afd9a\""
Apr 16 00:50:59.580158 systemd[1]: Started cri-containerd-47bd3be05d427ba085f7158a4c943bbd16e6829e4aa359976b97c155683afd9a.scope - libcontainer container 47bd3be05d427ba085f7158a4c943bbd16e6829e4aa359976b97c155683afd9a.
Apr 16 00:50:59.625121 containerd[1500]: time="2026-04-16T00:50:59.625068631Z" level=info msg="StartContainer for \"47bd3be05d427ba085f7158a4c943bbd16e6829e4aa359976b97c155683afd9a\" returns successfully"
Apr 16 00:51:00.397124 kubelet[2699]: I0416 00:51:00.394743 2699 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Apr 16 00:51:00.398531 kubelet[2699]: I0416 00:51:00.398474 2699 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Apr 16 00:51:01.989054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount968590525.mount: Deactivated successfully.
Apr 16 00:51:02.008858 containerd[1500]: time="2026-04-16T00:51:02.008776408Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:51:02.010983 containerd[1500]: time="2026-04-16T00:51:02.010917443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475"
Apr 16 00:51:02.012627 containerd[1500]: time="2026-04-16T00:51:02.012585265Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:51:02.015689 containerd[1500]: time="2026-04-16T00:51:02.015653361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 00:51:02.017234 containerd[1500]: time="2026-04-16T00:51:02.017162764Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.553236886s"
Apr 16 00:51:02.017315 containerd[1500]: time="2026-04-16T00:51:02.017240137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\""
Apr 16 00:51:02.047563 containerd[1500]: time="2026-04-16T00:51:02.047500668Z" level=info msg="CreateContainer within sandbox \"c90af883c4fe07127bbecc51cc3ee7972a48397760e3b7009b6dab1c8c0e8d21\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Apr 16 00:51:02.068256 containerd[1500]: time="2026-04-16T00:51:02.068173842Z" level=info msg="CreateContainer within sandbox \"c90af883c4fe07127bbecc51cc3ee7972a48397760e3b7009b6dab1c8c0e8d21\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"85fd693c5bef89f68d16f374efac73427410efc8eb328151fcae8e1fe4752c60\""
Apr 16 00:51:02.071369 containerd[1500]: time="2026-04-16T00:51:02.071120717Z" level=info msg="StartContainer for \"85fd693c5bef89f68d16f374efac73427410efc8eb328151fcae8e1fe4752c60\""
Apr 16 00:51:02.147711 systemd[1]: run-containerd-runc-k8s.io-85fd693c5bef89f68d16f374efac73427410efc8eb328151fcae8e1fe4752c60-runc.rbXYn2.mount: Deactivated successfully.
Apr 16 00:51:02.157107 systemd[1]: Started cri-containerd-85fd693c5bef89f68d16f374efac73427410efc8eb328151fcae8e1fe4752c60.scope - libcontainer container 85fd693c5bef89f68d16f374efac73427410efc8eb328151fcae8e1fe4752c60.
Apr 16 00:51:02.232498 containerd[1500]: time="2026-04-16T00:51:02.232277841Z" level=info msg="StartContainer for \"85fd693c5bef89f68d16f374efac73427410efc8eb328151fcae8e1fe4752c60\" returns successfully"
Apr 16 00:51:03.284685 kubelet[2699]: I0416 00:51:03.279811 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4f5m5" podStartSLOduration=37.157164427 podStartE2EDuration="58.279781577s" podCreationTimestamp="2026-04-16 00:50:05 +0000 UTC" firstStartedPulling="2026-04-16 00:50:38.340106824 +0000 UTC m=+54.453963075" lastFinishedPulling="2026-04-16 00:50:59.462723969 +0000 UTC m=+75.576580225" observedRunningTime="2026-04-16 00:51:00.284321303 +0000 UTC m=+76.398177575" watchObservedRunningTime="2026-04-16 00:51:03.279781577 +0000 UTC m=+79.393637831"
Apr 16 00:51:03.284685 kubelet[2699]: I0416 00:51:03.283493 2699 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-8587994885-rs2d6" podStartSLOduration=4.270203141 podStartE2EDuration="27.283474391s" podCreationTimestamp="2026-04-16 00:50:36 +0000 UTC" firstStartedPulling="2026-04-16 00:50:39.005382836 +0000 UTC m=+55.119239080" lastFinishedPulling="2026-04-16 00:51:02.01865408 +0000 UTC m=+78.132510330" observedRunningTime="2026-04-16 00:51:03.27870393 +0000 UTC m=+79.392560206" watchObservedRunningTime="2026-04-16 00:51:03.283474391 +0000 UTC m=+79.397330649"
Apr 16 00:51:11.393287 systemd[1]: Started sshd@9-10.230.47.154:22-20.229.252.112:49958.service - OpenSSH per-connection server daemon (20.229.252.112:49958).
Apr 16 00:51:11.645845 sshd[5417]: Accepted publickey for core from 20.229.252.112 port 49958 ssh2: RSA SHA256:3N/pu9osWgWh2yi+ae9FF0gog3nLKKRqJHJq7GNPHLE
Apr 16 00:51:11.649103 sshd[5417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:51:11.661155 systemd-logind[1485]: New session 12 of user core.
Apr 16 00:51:11.668426 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 16 00:51:12.492262 sshd[5417]: pam_unix(sshd:session): session closed for user core
Apr 16 00:51:12.521689 systemd[1]: sshd@9-10.230.47.154:22-20.229.252.112:49958.service: Deactivated successfully.
Apr 16 00:51:12.527654 systemd[1]: session-12.scope: Deactivated successfully.
Apr 16 00:51:12.529308 systemd-logind[1485]: Session 12 logged out. Waiting for processes to exit.
Apr 16 00:51:12.535265 systemd-logind[1485]: Removed session 12.
Apr 16 00:51:17.526334 systemd[1]: Started sshd@10-10.230.47.154:22-20.229.252.112:45574.service - OpenSSH per-connection server daemon (20.229.252.112:45574).
Apr 16 00:51:17.692861 sshd[5455]: Accepted publickey for core from 20.229.252.112 port 45574 ssh2: RSA SHA256:3N/pu9osWgWh2yi+ae9FF0gog3nLKKRqJHJq7GNPHLE
Apr 16 00:51:17.693712 sshd[5455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:51:17.705603 systemd-logind[1485]: New session 13 of user core.
Apr 16 00:51:17.713282 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 16 00:51:17.947710 sshd[5455]: pam_unix(sshd:session): session closed for user core
Apr 16 00:51:17.955678 systemd-logind[1485]: Session 13 logged out. Waiting for processes to exit.
Apr 16 00:51:17.957142 systemd[1]: sshd@10-10.230.47.154:22-20.229.252.112:45574.service: Deactivated successfully.
Apr 16 00:51:17.961746 systemd[1]: session-13.scope: Deactivated successfully.
Apr 16 00:51:17.964427 systemd-logind[1485]: Removed session 13.
Apr 16 00:51:22.986275 systemd[1]: Started sshd@11-10.230.47.154:22-20.229.252.112:45576.service - OpenSSH per-connection server daemon (20.229.252.112:45576).
Apr 16 00:51:23.162996 sshd[5483]: Accepted publickey for core from 20.229.252.112 port 45576 ssh2: RSA SHA256:3N/pu9osWgWh2yi+ae9FF0gog3nLKKRqJHJq7GNPHLE
Apr 16 00:51:23.165926 sshd[5483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:51:23.173688 systemd-logind[1485]: New session 14 of user core.
Apr 16 00:51:23.180449 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 16 00:51:23.397637 sshd[5483]: pam_unix(sshd:session): session closed for user core
Apr 16 00:51:23.404566 systemd[1]: sshd@11-10.230.47.154:22-20.229.252.112:45576.service: Deactivated successfully.
Apr 16 00:51:23.408264 systemd[1]: session-14.scope: Deactivated successfully.
Apr 16 00:51:23.410050 systemd-logind[1485]: Session 14 logged out. Waiting for processes to exit.
Apr 16 00:51:23.411396 systemd-logind[1485]: Removed session 14.
Apr 16 00:51:28.439344 systemd[1]: Started sshd@12-10.230.47.154:22-20.229.252.112:34868.service - OpenSSH per-connection server daemon (20.229.252.112:34868).
Apr 16 00:51:28.612610 sshd[5518]: Accepted publickey for core from 20.229.252.112 port 34868 ssh2: RSA SHA256:3N/pu9osWgWh2yi+ae9FF0gog3nLKKRqJHJq7GNPHLE
Apr 16 00:51:28.614955 sshd[5518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:51:28.622652 systemd-logind[1485]: New session 15 of user core.
Apr 16 00:51:28.632160 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 16 00:51:28.892125 sshd[5518]: pam_unix(sshd:session): session closed for user core
Apr 16 00:51:28.897809 systemd[1]: sshd@12-10.230.47.154:22-20.229.252.112:34868.service: Deactivated successfully.
Apr 16 00:51:28.902864 systemd[1]: session-15.scope: Deactivated successfully.
Apr 16 00:51:28.905458 systemd-logind[1485]: Session 15 logged out. Waiting for processes to exit.
Apr 16 00:51:28.907540 systemd-logind[1485]: Removed session 15.
Apr 16 00:51:33.924293 systemd[1]: Started sshd@13-10.230.47.154:22-20.229.252.112:34878.service - OpenSSH per-connection server daemon (20.229.252.112:34878).
Apr 16 00:51:34.077430 sshd[5532]: Accepted publickey for core from 20.229.252.112 port 34878 ssh2: RSA SHA256:3N/pu9osWgWh2yi+ae9FF0gog3nLKKRqJHJq7GNPHLE
Apr 16 00:51:34.078755 sshd[5532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:51:34.085955 systemd-logind[1485]: New session 16 of user core.
Apr 16 00:51:34.094176 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 16 00:51:34.350177 sshd[5532]: pam_unix(sshd:session): session closed for user core
Apr 16 00:51:34.358630 systemd[1]: sshd@13-10.230.47.154:22-20.229.252.112:34878.service: Deactivated successfully.
Apr 16 00:51:34.362525 systemd[1]: session-16.scope: Deactivated successfully.
Apr 16 00:51:34.364058 systemd-logind[1485]: Session 16 logged out. Waiting for processes to exit.
Apr 16 00:51:34.365628 systemd-logind[1485]: Removed session 16.
Apr 16 00:51:39.377722 systemd[1]: Started sshd@14-10.230.47.154:22-20.229.252.112:54510.service - OpenSSH per-connection server daemon (20.229.252.112:54510).
Apr 16 00:51:39.597982 sshd[5584]: Accepted publickey for core from 20.229.252.112 port 54510 ssh2: RSA SHA256:3N/pu9osWgWh2yi+ae9FF0gog3nLKKRqJHJq7GNPHLE
Apr 16 00:51:39.600610 sshd[5584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:51:39.607572 systemd-logind[1485]: New session 17 of user core.
Apr 16 00:51:39.618232 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 16 00:51:39.865580 sshd[5584]: pam_unix(sshd:session): session closed for user core
Apr 16 00:51:39.872253 systemd[1]: sshd@14-10.230.47.154:22-20.229.252.112:54510.service: Deactivated successfully.
Apr 16 00:51:39.877279 systemd[1]: session-17.scope: Deactivated successfully.
Apr 16 00:51:39.878565 systemd-logind[1485]: Session 17 logged out. Waiting for processes to exit.
Apr 16 00:51:39.879902 systemd-logind[1485]: Removed session 17.
Apr 16 00:51:39.899274 systemd[1]: Started sshd@15-10.230.47.154:22-20.229.252.112:54518.service - OpenSSH per-connection server daemon (20.229.252.112:54518).
Apr 16 00:51:40.033851 sshd[5597]: Accepted publickey for core from 20.229.252.112 port 54518 ssh2: RSA SHA256:3N/pu9osWgWh2yi+ae9FF0gog3nLKKRqJHJq7GNPHLE
Apr 16 00:51:40.037412 sshd[5597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:51:40.046976 systemd-logind[1485]: New session 18 of user core.
Apr 16 00:51:40.051170 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 16 00:51:40.335241 sshd[5597]: pam_unix(sshd:session): session closed for user core
Apr 16 00:51:40.345643 systemd[1]: sshd@15-10.230.47.154:22-20.229.252.112:54518.service: Deactivated successfully.
Apr 16 00:51:40.352352 systemd[1]: session-18.scope: Deactivated successfully.
Apr 16 00:51:40.357592 systemd-logind[1485]: Session 18 logged out. Waiting for processes to exit.
Apr 16 00:51:40.378557 systemd[1]: Started sshd@16-10.230.47.154:22-20.229.252.112:54534.service - OpenSSH per-connection server daemon (20.229.252.112:54534).
Apr 16 00:51:40.380766 systemd-logind[1485]: Removed session 18.
Apr 16 00:51:40.523601 sshd[5607]: Accepted publickey for core from 20.229.252.112 port 54534 ssh2: RSA SHA256:3N/pu9osWgWh2yi+ae9FF0gog3nLKKRqJHJq7GNPHLE
Apr 16 00:51:40.525953 sshd[5607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:51:40.533624 systemd-logind[1485]: New session 19 of user core.
Apr 16 00:51:40.539149 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 16 00:51:40.809340 sshd[5607]: pam_unix(sshd:session): session closed for user core
Apr 16 00:51:40.817068 systemd[1]: sshd@16-10.230.47.154:22-20.229.252.112:54534.service: Deactivated successfully.
Apr 16 00:51:40.823846 systemd[1]: session-19.scope: Deactivated successfully.
Apr 16 00:51:40.826321 systemd-logind[1485]: Session 19 logged out. Waiting for processes to exit.
Apr 16 00:51:40.829112 systemd-logind[1485]: Removed session 19.
Apr 16 00:51:45.846630 systemd[1]: Started sshd@17-10.230.47.154:22-20.229.252.112:33882.service - OpenSSH per-connection server daemon (20.229.252.112:33882).
Apr 16 00:51:46.000255 sshd[5641]: Accepted publickey for core from 20.229.252.112 port 33882 ssh2: RSA SHA256:3N/pu9osWgWh2yi+ae9FF0gog3nLKKRqJHJq7GNPHLE
Apr 16 00:51:46.003217 sshd[5641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:51:46.010003 systemd-logind[1485]: New session 20 of user core.
Apr 16 00:51:46.014165 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 16 00:51:46.322676 sshd[5641]: pam_unix(sshd:session): session closed for user core
Apr 16 00:51:46.327713 systemd[1]: sshd@17-10.230.47.154:22-20.229.252.112:33882.service: Deactivated successfully.
Apr 16 00:51:46.330494 systemd[1]: session-20.scope: Deactivated successfully.
Apr 16 00:51:46.332104 systemd-logind[1485]: Session 20 logged out. Waiting for processes to exit.
Apr 16 00:51:46.333727 systemd-logind[1485]: Removed session 20.
Apr 16 00:51:51.357275 systemd[1]: Started sshd@18-10.230.47.154:22-20.229.252.112:33892.service - OpenSSH per-connection server daemon (20.229.252.112:33892).
Apr 16 00:51:51.506991 sshd[5654]: Accepted publickey for core from 20.229.252.112 port 33892 ssh2: RSA SHA256:3N/pu9osWgWh2yi+ae9FF0gog3nLKKRqJHJq7GNPHLE
Apr 16 00:51:51.509222 sshd[5654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:51:51.517637 systemd-logind[1485]: New session 21 of user core.
Apr 16 00:51:51.524151 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 16 00:51:51.751568 sshd[5654]: pam_unix(sshd:session): session closed for user core
Apr 16 00:51:51.756727 systemd[1]: sshd@18-10.230.47.154:22-20.229.252.112:33892.service: Deactivated successfully.
Apr 16 00:51:51.760359 systemd[1]: session-21.scope: Deactivated successfully.
Apr 16 00:51:51.762120 systemd-logind[1485]: Session 21 logged out. Waiting for processes to exit.
Apr 16 00:51:51.763982 systemd-logind[1485]: Removed session 21.
Apr 16 00:51:51.780360 systemd[1]: Started sshd@19-10.230.47.154:22-20.229.252.112:33902.service - OpenSSH per-connection server daemon (20.229.252.112:33902).
Apr 16 00:51:51.927508 sshd[5669]: Accepted publickey for core from 20.229.252.112 port 33902 ssh2: RSA SHA256:3N/pu9osWgWh2yi+ae9FF0gog3nLKKRqJHJq7GNPHLE
Apr 16 00:51:51.928451 sshd[5669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:51:51.935169 systemd-logind[1485]: New session 22 of user core.
Apr 16 00:51:51.942346 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 16 00:51:52.543492 sshd[5669]: pam_unix(sshd:session): session closed for user core
Apr 16 00:51:52.553563 systemd[1]: sshd@19-10.230.47.154:22-20.229.252.112:33902.service: Deactivated successfully.
Apr 16 00:51:52.556431 systemd[1]: session-22.scope: Deactivated successfully.
Apr 16 00:51:52.570179 systemd-logind[1485]: Session 22 logged out. Waiting for processes to exit.
Apr 16 00:51:52.574276 systemd[1]: Started sshd@20-10.230.47.154:22-20.229.252.112:33906.service - OpenSSH per-connection server daemon (20.229.252.112:33906).
Apr 16 00:51:52.577785 systemd-logind[1485]: Removed session 22.
Apr 16 00:51:52.733863 sshd[5680]: Accepted publickey for core from 20.229.252.112 port 33906 ssh2: RSA SHA256:3N/pu9osWgWh2yi+ae9FF0gog3nLKKRqJHJq7GNPHLE
Apr 16 00:51:52.736518 sshd[5680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:51:52.743351 systemd-logind[1485]: New session 23 of user core.
Apr 16 00:51:52.747154 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 16 00:51:53.814589 sshd[5680]: pam_unix(sshd:session): session closed for user core
Apr 16 00:51:53.822660 systemd[1]: sshd@20-10.230.47.154:22-20.229.252.112:33906.service: Deactivated successfully.
Apr 16 00:51:53.827322 systemd[1]: session-23.scope: Deactivated successfully.
Apr 16 00:51:53.832199 systemd-logind[1485]: Session 23 logged out. Waiting for processes to exit.
Apr 16 00:51:53.858148 systemd[1]: Started sshd@21-10.230.47.154:22-20.229.252.112:33922.service - OpenSSH per-connection server daemon (20.229.252.112:33922).
Apr 16 00:51:53.859077 systemd-logind[1485]: Removed session 23.
Apr 16 00:51:54.028770 sshd[5699]: Accepted publickey for core from 20.229.252.112 port 33922 ssh2: RSA SHA256:3N/pu9osWgWh2yi+ae9FF0gog3nLKKRqJHJq7GNPHLE
Apr 16 00:51:54.030053 sshd[5699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:51:54.037746 systemd-logind[1485]: New session 24 of user core.
Apr 16 00:51:54.047167 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 16 00:51:54.773444 sshd[5699]: pam_unix(sshd:session): session closed for user core
Apr 16 00:51:54.779550 systemd[1]: sshd@21-10.230.47.154:22-20.229.252.112:33922.service: Deactivated successfully.
Apr 16 00:51:54.786225 systemd[1]: session-24.scope: Deactivated successfully.
Apr 16 00:51:54.788475 systemd-logind[1485]: Session 24 logged out. Waiting for processes to exit.
Apr 16 00:51:54.808278 systemd[1]: Started sshd@22-10.230.47.154:22-20.229.252.112:33930.service - OpenSSH per-connection server daemon (20.229.252.112:33930).
Apr 16 00:51:54.809906 systemd-logind[1485]: Removed session 24.
Apr 16 00:51:54.967639 sshd[5715]: Accepted publickey for core from 20.229.252.112 port 33930 ssh2: RSA SHA256:3N/pu9osWgWh2yi+ae9FF0gog3nLKKRqJHJq7GNPHLE
Apr 16 00:51:54.969808 sshd[5715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:51:54.977016 systemd-logind[1485]: New session 25 of user core.
Apr 16 00:51:54.982119 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 16 00:51:55.211026 sshd[5715]: pam_unix(sshd:session): session closed for user core
Apr 16 00:51:55.217714 systemd-logind[1485]: Session 25 logged out. Waiting for processes to exit.
Apr 16 00:51:55.218533 systemd[1]: sshd@22-10.230.47.154:22-20.229.252.112:33930.service: Deactivated successfully.
Apr 16 00:51:55.222689 systemd[1]: session-25.scope: Deactivated successfully.
Apr 16 00:51:55.230892 systemd-logind[1485]: Removed session 25.
Apr 16 00:51:58.249769 systemd[1]: run-containerd-runc-k8s.io-abfcdbd5107d2850d5eb14a41256e6eb53d22e858930bcfe7e1a83df581484bf-runc.wRFFhg.mount: Deactivated successfully.
Apr 16 00:52:00.244335 systemd[1]: Started sshd@23-10.230.47.154:22-20.229.252.112:49162.service - OpenSSH per-connection server daemon (20.229.252.112:49162).
Apr 16 00:52:00.425089 sshd[5768]: Accepted publickey for core from 20.229.252.112 port 49162 ssh2: RSA SHA256:3N/pu9osWgWh2yi+ae9FF0gog3nLKKRqJHJq7GNPHLE
Apr 16 00:52:00.428165 sshd[5768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:52:00.434806 systemd-logind[1485]: New session 26 of user core.
Apr 16 00:52:00.443290 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 16 00:52:00.666113 sshd[5768]: pam_unix(sshd:session): session closed for user core
Apr 16 00:52:00.671477 systemd[1]: sshd@23-10.230.47.154:22-20.229.252.112:49162.service: Deactivated successfully.
Apr 16 00:52:00.674320 systemd[1]: session-26.scope: Deactivated successfully.
Apr 16 00:52:00.675526 systemd-logind[1485]: Session 26 logged out. Waiting for processes to exit.
Apr 16 00:52:00.676850 systemd-logind[1485]: Removed session 26.
Apr 16 00:52:05.621889 systemd[1]: run-containerd-runc-k8s.io-5012a8f8233c984653d10b1163832617e0f5535b57f3c1b2860bed23363e8ae4-runc.YV8jBf.mount: Deactivated successfully.
Apr 16 00:52:05.702517 systemd[1]: Started sshd@24-10.230.47.154:22-20.229.252.112:57026.service - OpenSSH per-connection server daemon (20.229.252.112:57026).
Apr 16 00:52:05.872910 sshd[5841]: Accepted publickey for core from 20.229.252.112 port 57026 ssh2: RSA SHA256:3N/pu9osWgWh2yi+ae9FF0gog3nLKKRqJHJq7GNPHLE
Apr 16 00:52:05.875397 sshd[5841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:52:05.885772 systemd-logind[1485]: New session 27 of user core.
Apr 16 00:52:05.891189 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 16 00:52:06.216257 sshd[5841]: pam_unix(sshd:session): session closed for user core
Apr 16 00:52:06.221799 systemd[1]: sshd@24-10.230.47.154:22-20.229.252.112:57026.service: Deactivated successfully.
Apr 16 00:52:06.226083 systemd[1]: session-27.scope: Deactivated successfully.
Apr 16 00:52:06.227643 systemd-logind[1485]: Session 27 logged out. Waiting for processes to exit.
Apr 16 00:52:06.229207 systemd-logind[1485]: Removed session 27.
Apr 16 00:52:11.254363 systemd[1]: Started sshd@25-10.230.47.154:22-20.229.252.112:57042.service - OpenSSH per-connection server daemon (20.229.252.112:57042).
Apr 16 00:52:11.413804 sshd[5856]: Accepted publickey for core from 20.229.252.112 port 57042 ssh2: RSA SHA256:3N/pu9osWgWh2yi+ae9FF0gog3nLKKRqJHJq7GNPHLE
Apr 16 00:52:11.417128 sshd[5856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 00:52:11.424270 systemd-logind[1485]: New session 28 of user core.
Apr 16 00:52:11.430205 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 16 00:52:11.667126 sshd[5856]: pam_unix(sshd:session): session closed for user core
Apr 16 00:52:11.674219 systemd-logind[1485]: Session 28 logged out. Waiting for processes to exit.
Apr 16 00:52:11.674970 systemd[1]: sshd@25-10.230.47.154:22-20.229.252.112:57042.service: Deactivated successfully.
Apr 16 00:52:11.677265 systemd[1]: session-28.scope: Deactivated successfully.
Apr 16 00:52:11.679383 systemd-logind[1485]: Removed session 28.