Sep 10 06:54:08.955479 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Sep 10 03:32:41 -00 2025 Sep 10 06:54:08.955531 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=cb34a525c000ff57e16870cd9f0af09c033a700c5f8ee35d58f46d8926fcf6e5 Sep 10 06:54:08.955551 kernel: BIOS-provided physical RAM map: Sep 10 06:54:08.955563 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Sep 10 06:54:08.955572 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Sep 10 06:54:08.955583 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Sep 10 06:54:08.955594 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Sep 10 06:54:08.955605 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Sep 10 06:54:08.955616 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 10 06:54:08.955626 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Sep 10 06:54:08.955641 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 10 06:54:08.955651 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Sep 10 06:54:08.955662 kernel: NX (Execute Disable) protection: active Sep 10 06:54:08.955672 kernel: APIC: Static calls initialized Sep 10 06:54:08.955684 kernel: SMBIOS 2.8 present. Sep 10 06:54:08.955696 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Sep 10 06:54:08.955712 kernel: DMI: Memory slots populated: 1/1 Sep 10 06:54:08.955723 kernel: Hypervisor detected: KVM Sep 10 06:54:08.955734 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 10 06:54:08.955745 kernel: kvm-clock: using sched offset of 5778191152 cycles Sep 10 06:54:08.955757 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 10 06:54:08.955769 kernel: tsc: Detected 2499.998 MHz processor Sep 10 06:54:08.955781 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 10 06:54:08.955793 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 10 06:54:08.955804 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Sep 10 06:54:08.955820 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Sep 10 06:54:08.955832 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 10 06:54:08.955843 kernel: Using GB pages for direct mapping Sep 10 06:54:08.955855 kernel: ACPI: Early table checksum verification disabled Sep 10 06:54:08.955867 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Sep 10 06:54:08.955878 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 06:54:08.955890 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 06:54:08.955901 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 06:54:08.955912 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Sep 10 06:54:08.955928 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 06:54:08.955940 kernel: ACPI: SRAT 
0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 06:54:08.955952 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 06:54:08.955963 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 10 06:54:08.955974 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Sep 10 06:54:08.955986 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Sep 10 06:54:08.956003 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Sep 10 06:54:08.956019 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Sep 10 06:54:08.956031 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Sep 10 06:54:08.956043 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Sep 10 06:54:08.956055 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Sep 10 06:54:08.956067 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Sep 10 06:54:08.956896 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Sep 10 06:54:08.956909 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Sep 10 06:54:08.956929 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff] Sep 10 06:54:08.956941 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff] Sep 10 06:54:08.956954 kernel: Zone ranges: Sep 10 06:54:08.956966 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 10 06:54:08.956978 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Sep 10 06:54:08.956990 kernel: Normal empty Sep 10 06:54:08.957008 kernel: Device empty Sep 10 06:54:08.957022 kernel: Movable zone start for each node Sep 10 06:54:08.957034 kernel: Early memory node ranges Sep 10 06:54:08.957050 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Sep 10 06:54:08.957062 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Sep 10 06:54:08.957087 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Sep 10 06:54:08.957100 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 10 06:54:08.957112 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Sep 10 06:54:08.957124 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Sep 10 06:54:08.957136 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 10 06:54:08.957148 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 10 06:54:08.957160 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 10 06:54:08.957172 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 10 06:54:08.957189 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 10 06:54:08.957201 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 10 06:54:08.957213 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 10 06:54:08.957225 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 10 06:54:08.957237 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 10 06:54:08.957248 kernel: TSC deadline timer available Sep 10 06:54:08.957260 kernel: CPU topo: Max. logical packages: 16 Sep 10 06:54:08.957272 kernel: CPU topo: Max. logical dies: 16 Sep 10 06:54:08.957284 kernel: CPU topo: Max. dies per package: 1 Sep 10 06:54:08.957300 kernel: CPU topo: Max. threads per core: 1 Sep 10 06:54:08.957312 kernel: CPU topo: Num. 
cores per package: 1 Sep 10 06:54:08.957324 kernel: CPU topo: Num. threads per package: 1 Sep 10 06:54:08.957336 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs Sep 10 06:54:08.957348 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 10 06:54:08.957360 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Sep 10 06:54:08.957372 kernel: Booting paravirtualized kernel on KVM Sep 10 06:54:08.957384 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 10 06:54:08.957396 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Sep 10 06:54:08.957412 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Sep 10 06:54:08.957425 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Sep 10 06:54:08.957436 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Sep 10 06:54:08.957448 kernel: kvm-guest: PV spinlocks enabled Sep 10 06:54:08.957473 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 10 06:54:08.957488 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=cb34a525c000ff57e16870cd9f0af09c033a700c5f8ee35d58f46d8926fcf6e5 Sep 10 06:54:08.957501 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 10 06:54:08.957512 kernel: random: crng init done Sep 10 06:54:08.957530 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 10 06:54:08.957542 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 10 06:54:08.957554 kernel: Fallback order for Node 0: 0 Sep 10 06:54:08.957566 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154 Sep 10 06:54:08.957577 kernel: Policy zone: DMA32 Sep 10 06:54:08.957589 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 10 06:54:08.957601 kernel: software IO TLB: area num 16. Sep 10 06:54:08.957613 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Sep 10 06:54:08.957625 kernel: Kernel/User page tables isolation: enabled Sep 10 06:54:08.957641 kernel: ftrace: allocating 40102 entries in 157 pages Sep 10 06:54:08.957653 kernel: ftrace: allocated 157 pages with 5 groups Sep 10 06:54:08.957665 kernel: Dynamic Preempt: voluntary Sep 10 06:54:08.957676 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 10 06:54:08.957689 kernel: rcu: RCU event tracing is enabled. Sep 10 06:54:08.957702 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Sep 10 06:54:08.957714 kernel: Trampoline variant of Tasks RCU enabled. Sep 10 06:54:08.957726 kernel: Rude variant of Tasks RCU enabled. Sep 10 06:54:08.957738 kernel: Tracing variant of Tasks RCU enabled. Sep 10 06:54:08.957754 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 10 06:54:08.957767 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Sep 10 06:54:08.957779 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Sep 10 06:54:08.957791 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. 
Sep 10 06:54:08.957803 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Sep 10 06:54:08.957815 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Sep 10 06:54:08.957827 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 10 06:54:08.957853 kernel: Console: colour VGA+ 80x25 Sep 10 06:54:08.957866 kernel: printk: legacy console [tty0] enabled Sep 10 06:54:08.957879 kernel: printk: legacy console [ttyS0] enabled Sep 10 06:54:08.957891 kernel: ACPI: Core revision 20240827 Sep 10 06:54:08.957903 kernel: APIC: Switch to symmetric I/O mode setup Sep 10 06:54:08.957920 kernel: x2apic enabled Sep 10 06:54:08.957932 kernel: APIC: Switched APIC routing to: physical x2apic Sep 10 06:54:08.957945 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Sep 10 06:54:08.957958 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Sep 10 06:54:08.957970 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 10 06:54:08.957987 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Sep 10 06:54:08.957999 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Sep 10 06:54:08.958012 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 10 06:54:08.958024 kernel: Spectre V2 : Mitigation: Retpolines Sep 10 06:54:08.958036 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 10 06:54:08.958049 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Sep 10 06:54:08.958062 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 10 06:54:08.960380 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 10 06:54:08.960397 kernel: MDS: Mitigation: Clear CPU buffers Sep 10 06:54:08.960409 kernel: MMIO Stale Data: Unknown: No mitigations Sep 10 06:54:08.960429 kernel: SRBDS: Unknown: Dependent on hypervisor status Sep 10 06:54:08.960441 kernel: active return thunk: its_return_thunk Sep 10 06:54:08.960464 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 10 06:54:08.960480 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 10 06:54:08.960492 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 10 06:54:08.960505 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 10 06:54:08.960517 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 10 06:54:08.960538 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Sep 10 06:54:08.960551 kernel: Freeing SMP alternatives memory: 32K Sep 10 06:54:08.960564 kernel: pid_max: default: 32768 minimum: 301 Sep 10 06:54:08.960576 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 10 06:54:08.960595 kernel: landlock: Up and running. Sep 10 06:54:08.960607 kernel: SELinux: Initializing. Sep 10 06:54:08.960620 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 10 06:54:08.960632 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 10 06:54:08.960645 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Sep 10 06:54:08.960658 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. 
Sep 10 06:54:08.960670 kernel: signal: max sigframe size: 1776 Sep 10 06:54:08.960683 kernel: rcu: Hierarchical SRCU implementation. Sep 10 06:54:08.960696 kernel: rcu: Max phase no-delay instances is 400. Sep 10 06:54:08.960709 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level Sep 10 06:54:08.960726 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 10 06:54:08.960739 kernel: smp: Bringing up secondary CPUs ... Sep 10 06:54:08.960751 kernel: smpboot: x86: Booting SMP configuration: Sep 10 06:54:08.960764 kernel: .... node #0, CPUs: #1 Sep 10 06:54:08.960776 kernel: smp: Brought up 1 node, 2 CPUs Sep 10 06:54:08.960789 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Sep 10 06:54:08.960802 kernel: Memory: 1895672K/2096616K available (14336K kernel code, 2428K rwdata, 9988K rodata, 54068K init, 2900K bss, 194936K reserved, 0K cma-reserved) Sep 10 06:54:08.960815 kernel: devtmpfs: initialized Sep 10 06:54:08.960827 kernel: x86/mm: Memory block size: 128MB Sep 10 06:54:08.960844 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 10 06:54:08.960857 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Sep 10 06:54:08.960869 kernel: pinctrl core: initialized pinctrl subsystem Sep 10 06:54:08.960882 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 10 06:54:08.960895 kernel: audit: initializing netlink subsys (disabled) Sep 10 06:54:08.960907 kernel: audit: type=2000 audit(1757487245.315:1): state=initialized audit_enabled=0 res=1 Sep 10 06:54:08.960920 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 10 06:54:08.960932 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 10 06:54:08.960945 kernel: cpuidle: using governor menu Sep 10 06:54:08.960962 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 10 06:54:08.960974 kernel: dca service started, version 1.12.1 Sep 10 06:54:08.960987 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Sep 10 06:54:08.960999 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Sep 10 06:54:08.961012 kernel: PCI: Using configuration type 1 for base access Sep 10 06:54:08.961024 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 10 06:54:08.961037 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 10 06:54:08.961049 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 10 06:54:08.961062 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 10 06:54:08.961094 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 10 06:54:08.961108 kernel: ACPI: Added _OSI(Module Device) Sep 10 06:54:08.961120 kernel: ACPI: Added _OSI(Processor Device) Sep 10 06:54:08.961133 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 10 06:54:08.961146 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 10 06:54:08.961158 kernel: ACPI: Interpreter enabled Sep 10 06:54:08.961170 kernel: ACPI: PM: (supports S0 S5) Sep 10 06:54:08.961183 kernel: ACPI: Using IOAPIC for interrupt routing Sep 10 06:54:08.961195 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 10 06:54:08.961213 kernel: PCI: Using E820 reservations for host bridge windows Sep 10 06:54:08.961225 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 10 06:54:08.961238 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 10 06:54:08.961544 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 10 06:54:08.961717 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 10 06:54:08.961884 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 10 06:54:08.961904 kernel: PCI host bridge to bus 0000:00 Sep 10 06:54:08.963150 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 10 06:54:08.963322 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 10 06:54:08.963493 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 10 06:54:08.963644 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Sep 10 06:54:08.963793 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 10 06:54:08.963941 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Sep 10 06:54:08.971235 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 10 06:54:08.971494 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Sep 10 06:54:08.971698 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint Sep 10 06:54:08.971867 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref] Sep 10 06:54:08.972031 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff] Sep 10 06:54:08.972224 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref] Sep 10 06:54:08.972389 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 10 06:54:08.972598 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 10 06:54:08.972772 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff] Sep 10 06:54:08.972937 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Sep 10 06:54:08.973197 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Sep 10 06:54:08.973366 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Sep 10 06:54:08.973568 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 10 06:54:08.973735 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff] Sep 10 06:54:08.973901 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Sep 10 
06:54:08.974090 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Sep 10 06:54:08.974261 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Sep 10 06:54:08.974446 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 10 06:54:08.974626 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff] Sep 10 06:54:08.974791 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Sep 10 06:54:08.974963 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Sep 10 06:54:08.975153 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Sep 10 06:54:08.975337 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 10 06:54:08.975516 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff] Sep 10 06:54:08.975682 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Sep 10 06:54:08.975845 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Sep 10 06:54:08.976008 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Sep 10 06:54:08.976293 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 10 06:54:08.976474 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff] Sep 10 06:54:08.976649 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Sep 10 06:54:08.976812 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Sep 10 06:54:08.976976 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Sep 10 06:54:08.977169 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 10 06:54:08.977337 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff] Sep 10 06:54:08.977516 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Sep 10 06:54:08.977681 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Sep 10 06:54:08.977854 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Sep 10 06:54:08.978029 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 10 06:54:08.978241 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff] Sep 10 06:54:08.978407 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Sep 10 06:54:08.978586 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Sep 10 06:54:08.978751 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Sep 10 06:54:08.978934 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Sep 10 06:54:08.979118 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff] Sep 10 06:54:08.979285 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Sep 10 06:54:08.979447 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Sep 10 06:54:08.979629 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Sep 10 06:54:08.979811 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Sep 10 06:54:08.979977 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df] Sep 10 06:54:08.981942 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff] Sep 10 06:54:08.982176 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref] Sep 10 06:54:08.983316 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref] Sep 10 06:54:08.983517 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Sep 10 06:54:08.983689 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f] Sep 10 06:54:08.983858 kernel: pci 0000:00:04.0: BAR 1 
[mem 0xfea5a000-0xfea5afff] Sep 10 06:54:08.984025 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref] Sep 10 06:54:08.984231 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Sep 10 06:54:08.984399 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 10 06:54:08.984597 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Sep 10 06:54:08.984765 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff] Sep 10 06:54:08.984955 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff] Sep 10 06:54:08.988174 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Sep 10 06:54:08.988351 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Sep 10 06:54:08.988579 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Sep 10 06:54:08.988753 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit] Sep 10 06:54:08.988923 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Sep 10 06:54:08.989109 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Sep 10 06:54:08.989278 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Sep 10 06:54:08.989482 kernel: pci_bus 0000:02: extended config space not accessible Sep 10 06:54:08.989684 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint Sep 10 06:54:08.989861 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f] Sep 10 06:54:08.990032 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Sep 10 06:54:08.990270 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Sep 10 06:54:08.990443 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit] Sep 10 06:54:08.990625 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Sep 10 06:54:08.990809 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Sep 10 06:54:08.991000 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref] Sep 10 06:54:08.991291 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Sep 10 06:54:08.991491 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Sep 10 06:54:08.991663 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Sep 10 06:54:08.991832 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Sep 10 06:54:08.991999 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Sep 10 06:54:08.992195 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Sep 10 06:54:08.992224 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 10 06:54:08.992238 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 10 06:54:08.992251 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 10 06:54:08.992264 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 10 06:54:08.992277 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 10 06:54:08.992290 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 10 06:54:08.992302 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 10 06:54:08.992315 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 10 06:54:08.992333 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 10 06:54:08.992346 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 10 06:54:08.992359 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 10 06:54:08.992372 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 10 06:54:08.992384 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 
10 06:54:08.992397 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 10 06:54:08.992410 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 10 06:54:08.992422 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 10 06:54:08.992435 kernel: iommu: Default domain type: Translated Sep 10 06:54:08.992452 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 10 06:54:08.992479 kernel: PCI: Using ACPI for IRQ routing Sep 10 06:54:08.992492 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 10 06:54:08.992505 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Sep 10 06:54:08.992517 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Sep 10 06:54:08.992682 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 10 06:54:08.992846 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 10 06:54:08.993007 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 10 06:54:08.993027 kernel: vgaarb: loaded Sep 10 06:54:08.993048 kernel: clocksource: Switched to clocksource kvm-clock Sep 10 06:54:08.993061 kernel: VFS: Disk quotas dquot_6.6.0 Sep 10 06:54:08.995109 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 10 06:54:08.995127 kernel: pnp: PnP ACPI init Sep 10 06:54:08.995324 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 10 06:54:08.995348 kernel: pnp: PnP ACPI: found 5 devices Sep 10 06:54:08.995361 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 10 06:54:08.995374 kernel: NET: Registered PF_INET protocol family Sep 10 06:54:08.995395 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 10 06:54:08.995409 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 10 06:54:08.995422 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 10 06:54:08.995435 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 10 06:54:08.995448 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 10 06:54:08.995475 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 10 06:54:08.995489 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 10 06:54:08.995502 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 10 06:54:08.995515 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 10 06:54:08.995534 kernel: NET: Registered PF_XDP protocol family Sep 10 06:54:08.995706 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Sep 10 06:54:08.995879 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Sep 10 06:54:08.996046 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Sep 10 06:54:08.996230 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Sep 10 06:54:08.996397 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Sep 10 06:54:08.996579 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Sep 10 06:54:08.996745 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Sep 10 06:54:08.996956 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Sep 10 06:54:08.998218 kernel: pci 0000:00:02.0: bridge window [io 
0x1000-0x1fff]: assigned Sep 10 06:54:08.998388 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned Sep 10 06:54:08.998571 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned Sep 10 06:54:08.998735 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned Sep 10 06:54:08.998900 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned Sep 10 06:54:08.999064 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned Sep 10 06:54:08.999246 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned Sep 10 06:54:08.999429 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned Sep 10 06:54:08.999616 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Sep 10 06:54:08.999816 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Sep 10 06:54:08.999981 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Sep 10 06:54:09.001193 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Sep 10 06:54:09.001369 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Sep 10 06:54:09.001551 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Sep 10 06:54:09.001718 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Sep 10 06:54:09.001893 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Sep 10 06:54:09.002059 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Sep 10 06:54:09.002248 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Sep 10 06:54:09.002414 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Sep 10 06:54:09.002601 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Sep 10 06:54:09.002766 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Sep 10 06:54:09.002929 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Sep 10 06:54:09.003670 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Sep 10 06:54:09.003843 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Sep 10 06:54:09.004008 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Sep 10 06:54:09.004190 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Sep 10 06:54:09.004367 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Sep 10 06:54:09.004548 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Sep 10 06:54:09.004713 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Sep 10 06:54:09.004878 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Sep 10 06:54:09.005047 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Sep 10 06:54:09.005231 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Sep 10 06:54:09.005396 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Sep 10 06:54:09.005574 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Sep 10 06:54:09.005742 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Sep 10 06:54:09.005906 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Sep 10 06:54:09.006100 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Sep 10 06:54:09.006268 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Sep 10 06:54:09.006436 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Sep 10 06:54:09.006613 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Sep 10 06:54:09.006778 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Sep 10 06:54:09.006942 kernel: pci 
0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Sep 10 06:54:09.007123 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 10 06:54:09.007275 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 10 06:54:09.007424 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 10 06:54:09.007596 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Sep 10 06:54:09.007746 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 10 06:54:09.007895 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Sep 10 06:54:09.008087 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Sep 10 06:54:09.008274 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Sep 10 06:54:09.008432 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Sep 10 06:54:09.008621 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Sep 10 06:54:09.008803 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Sep 10 06:54:09.008959 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Sep 10 06:54:09.009137 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Sep 10 06:54:09.009308 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Sep 10 06:54:09.009478 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Sep 10 06:54:09.009635 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Sep 10 06:54:09.009816 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Sep 10 06:54:09.009972 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Sep 10 06:54:09.010151 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Sep 10 06:54:09.010319 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Sep 10 06:54:09.010491 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Sep 10 06:54:09.010647 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Sep 10 06:54:09.010813 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Sep 10 06:54:09.010977 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Sep 10 06:54:09.011153 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Sep 10 06:54:09.011319 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Sep 10 06:54:09.011521 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Sep 10 06:54:09.011679 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Sep 10 06:54:09.011852 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Sep 10 06:54:09.012009 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Sep 10 06:54:09.012197 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Sep 10 06:54:09.012221 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 10 06:54:09.012235 kernel: PCI: CLS 0 bytes, default 64 Sep 10 06:54:09.012249 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 10 06:54:09.012263 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Sep 10 06:54:09.012277 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 10 06:54:09.012291 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Sep 10 06:54:09.012305 kernel: Initialise system trusted keyrings Sep 10 06:54:09.012326 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 10 06:54:09.012339 
kernel: Key type asymmetric registered Sep 10 06:54:09.012352 kernel: Asymmetric key parser 'x509' registered Sep 10 06:54:09.012366 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 10 06:54:09.012379 kernel: io scheduler mq-deadline registered Sep 10 06:54:09.012393 kernel: io scheduler kyber registered Sep 10 06:54:09.012410 kernel: io scheduler bfq registered Sep 10 06:54:09.012614 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Sep 10 06:54:09.012783 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Sep 10 06:54:09.012956 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 10 06:54:09.013146 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Sep 10 06:54:09.013312 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Sep 10 06:54:09.013492 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 10 06:54:09.013663 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Sep 10 06:54:09.013826 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Sep 10 06:54:09.013998 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 10 06:54:09.014189 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Sep 10 06:54:09.014355 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Sep 10 06:54:09.014534 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 10 06:54:09.014703 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Sep 10 06:54:09.014867 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Sep 10 06:54:09.015039 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 10 06:54:09.015227 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Sep 10 06:54:09.015393 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Sep 10 06:54:09.015573 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 10 06:54:09.015751 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Sep 10 06:54:09.015916 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Sep 10 06:54:09.016109 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 10 06:54:09.016280 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Sep 10 06:54:09.016444 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Sep 10 06:54:09.016628 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 10 06:54:09.016650 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 10 06:54:09.016666 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 10 06:54:09.016687 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 10 06:54:09.016700 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 10 06:54:09.016714 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 10 06:54:09.016727 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 10 
06:54:09.016741 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 10 06:54:09.016754 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 10 06:54:09.016932 kernel: rtc_cmos 00:03: RTC can wake from S4 Sep 10 06:54:09.016955 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 10 06:54:09.017124 kernel: rtc_cmos 00:03: registered as rtc0 Sep 10 06:54:09.017296 kernel: rtc_cmos 00:03: setting system clock to 2025-09-10T06:54:08 UTC (1757487248) Sep 10 06:54:09.017452 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Sep 10 06:54:09.017485 kernel: intel_pstate: CPU model not supported Sep 10 06:54:09.017499 kernel: NET: Registered PF_INET6 protocol family Sep 10 06:54:09.017512 kernel: Segment Routing with IPv6 Sep 10 06:54:09.017526 kernel: In-situ OAM (IOAM) with IPv6 Sep 10 06:54:09.017539 kernel: NET: Registered PF_PACKET protocol family Sep 10 06:54:09.017553 kernel: Key type dns_resolver registered Sep 10 06:54:09.017573 kernel: IPI shorthand broadcast: enabled Sep 10 06:54:09.017587 kernel: sched_clock: Marking stable (3552004490, 230620385)->(3920790348, -138165473) Sep 10 06:54:09.017600 kernel: registered taskstats version 1 Sep 10 06:54:09.017614 kernel: Loading compiled-in X.509 certificates Sep 10 06:54:09.017627 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: f6c45bc801b894d4dac30a723f1f683ea8f7e3ae' Sep 10 06:54:09.017640 kernel: Demotion targets for Node 0: null Sep 10 06:54:09.017654 kernel: Key type .fscrypt registered Sep 10 06:54:09.017667 kernel: Key type fscrypt-provisioning registered Sep 10 06:54:09.017680 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 10 06:54:09.017699 kernel: ima: Allocated hash algorithm: sha1 Sep 10 06:54:09.017712 kernel: ima: No architecture policies found Sep 10 06:54:09.017725 kernel: clk: Disabling unused clocks Sep 10 06:54:09.017739 kernel: Warning: unable to open an initial console. Sep 10 06:54:09.017753 kernel: Freeing unused kernel image (initmem) memory: 54068K Sep 10 06:54:09.017766 kernel: Write protecting the kernel read-only data: 24576k Sep 10 06:54:09.017779 kernel: Freeing unused kernel image (rodata/data gap) memory: 252K Sep 10 06:54:09.017793 kernel: Run /init as init process Sep 10 06:54:09.017806 kernel: with arguments: Sep 10 06:54:09.017824 kernel: /init Sep 10 06:54:09.017838 kernel: with environment: Sep 10 06:54:09.017851 kernel: HOME=/ Sep 10 06:54:09.017864 kernel: TERM=linux Sep 10 06:54:09.017877 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 10 06:54:09.017906 systemd[1]: Successfully made /usr/ read-only. Sep 10 06:54:09.017928 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 10 06:54:09.017952 systemd[1]: Detected virtualization kvm. Sep 10 06:54:09.017966 systemd[1]: Detected architecture x86-64. Sep 10 06:54:09.017980 systemd[1]: Running in initrd. Sep 10 06:54:09.017994 systemd[1]: No hostname configured, using default hostname. Sep 10 06:54:09.018008 systemd[1]: Hostname set to . Sep 10 06:54:09.018022 systemd[1]: Initializing machine ID from VM UUID. Sep 10 06:54:09.018037 systemd[1]: Queued start job for default target initrd.target. 
Sep 10 06:54:09.018051 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 10 06:54:09.018065 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 10 06:54:09.018106 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 10 06:54:09.018121 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 10 06:54:09.018136 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 10 06:54:09.018151 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 10 06:54:09.018167 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 10 06:54:09.018181 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 10 06:54:09.018200 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 10 06:54:09.018215 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 10 06:54:09.018229 systemd[1]: Reached target paths.target - Path Units. Sep 10 06:54:09.018243 systemd[1]: Reached target slices.target - Slice Units. Sep 10 06:54:09.018257 systemd[1]: Reached target swap.target - Swaps. Sep 10 06:54:09.018272 systemd[1]: Reached target timers.target - Timer Units. Sep 10 06:54:09.018286 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 10 06:54:09.018301 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 10 06:54:09.018315 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 10 06:54:09.018334 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 10 06:54:09.018349 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 10 06:54:09.018363 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 10 06:54:09.018378 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 06:54:09.018392 systemd[1]: Reached target sockets.target - Socket Units. Sep 10 06:54:09.018406 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 10 06:54:09.018420 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 10 06:54:09.018435 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 10 06:54:09.018465 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 10 06:54:09.018482 systemd[1]: Starting systemd-fsck-usr.service... Sep 10 06:54:09.018496 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 10 06:54:09.018510 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 10 06:54:09.018524 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 06:54:09.018599 systemd-journald[230]: Collecting audit messages is disabled. Sep 10 06:54:09.018642 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 10 06:54:09.018658 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 06:54:09.018672 systemd[1]: Finished systemd-fsck-usr.service. 
Sep 10 06:54:09.018692 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 10 06:54:09.018708 systemd-journald[230]: Journal started Sep 10 06:54:09.018742 systemd-journald[230]: Runtime Journal (/run/log/journal/78dc5c1b89324ebb9ebad6f4119d1964) is 4.7M, max 38.2M, 33.4M free. Sep 10 06:54:08.992152 systemd-modules-load[231]: Inserted module 'overlay' Sep 10 06:54:09.078787 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 10 06:54:09.078836 kernel: Bridge firewalling registered Sep 10 06:54:09.078856 systemd[1]: Started systemd-journald.service - Journal Service. Sep 10 06:54:09.037733 systemd-modules-load[231]: Inserted module 'br_netfilter' Sep 10 06:54:09.080640 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 10 06:54:09.083596 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 06:54:09.090358 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 10 06:54:09.092988 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 10 06:54:09.097246 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 10 06:54:09.103736 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 10 06:54:09.111949 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 10 06:54:09.117231 systemd-tmpfiles[248]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 10 06:54:09.128520 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 10 06:54:09.131131 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 10 06:54:09.135377 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 10 06:54:09.142122 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 10 06:54:09.144795 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 06:54:09.153972 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 10 06:54:09.187460 dracut-cmdline[269]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=cb34a525c000ff57e16870cd9f0af09c033a700c5f8ee35d58f46d8926fcf6e5 Sep 10 06:54:09.196922 systemd-resolved[266]: Positive Trust Anchors: Sep 10 06:54:09.196955 systemd-resolved[266]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 10 06:54:09.197000 systemd-resolved[266]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 10 06:54:09.201830 systemd-resolved[266]: Defaulting to hostname 'linux'. Sep 10 06:54:09.203826 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 10 06:54:09.205837 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 10 06:54:09.309163 kernel: SCSI subsystem initialized Sep 10 06:54:09.320157 kernel: Loading iSCSI transport class v2.0-870. Sep 10 06:54:09.334134 kernel: iscsi: registered transport (tcp) Sep 10 06:54:09.360265 kernel: iscsi: registered transport (qla4xxx) Sep 10 06:54:09.360376 kernel: QLogic iSCSI HBA Driver Sep 10 06:54:09.386362 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 10 06:54:09.405126 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 10 06:54:09.406905 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 10 06:54:09.471725 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 10 06:54:09.475347 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 10 06:54:09.538173 kernel: raid6: sse2x4 gen() 7819 MB/s Sep 10 06:54:09.556119 kernel: raid6: sse2x2 gen() 5567 MB/s Sep 10 06:54:09.574744 kernel: raid6: sse2x1 gen() 5578 MB/s Sep 10 06:54:09.574867 kernel: raid6: using algorithm sse2x4 gen() 7819 MB/s Sep 10 06:54:09.593763 kernel: raid6: .... xor() 5096 MB/s, rmw enabled Sep 10 06:54:09.593883 kernel: raid6: using ssse3x2 recovery algorithm Sep 10 06:54:09.620129 kernel: xor: automatically using best checksumming function avx Sep 10 06:54:09.814117 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 10 06:54:09.823916 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 10 06:54:09.827345 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 06:54:09.859734 systemd-udevd[478]: Using default interface naming scheme 'v255'. Sep 10 06:54:09.869399 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 06:54:09.874312 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 10 06:54:09.905671 dracut-pre-trigger[485]: rd.md=0: removing MD RAID activation Sep 10 06:54:09.941915 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 10 06:54:09.946466 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 10 06:54:10.064420 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 06:54:10.069556 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 10 06:54:10.183343 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Sep 10 06:54:10.211340 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Sep 10 06:54:10.214107 kernel: cryptd: max_cpu_qlen set to 1000 Sep 10 06:54:10.241688 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 10 06:54:10.241772 kernel: GPT:17805311 != 125829119 Sep 10 06:54:10.241792 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 10 06:54:10.244259 kernel: GPT:17805311 != 125829119 Sep 10 06:54:10.244296 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 10 06:54:10.245824 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 06:54:10.253375 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Sep 10 06:54:10.253428 kernel: AES CTR mode by8 optimization enabled Sep 10 06:54:10.287208 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 10 06:54:10.288634 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 06:54:10.295290 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 06:54:10.302117 kernel: libata version 3.00 loaded. Sep 10 06:54:10.302658 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 06:54:10.305872 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 10 06:54:10.312117 kernel: ACPI: bus type USB registered Sep 10 06:54:10.317381 kernel: usbcore: registered new interface driver usbfs Sep 10 06:54:10.321096 kernel: usbcore: registered new interface driver hub Sep 10 06:54:10.329235 kernel: usbcore: registered new device driver usb Sep 10 06:54:10.351356 kernel: ahci 0000:00:1f.2: version 3.0 Sep 10 06:54:10.351684 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 10 06:54:10.367110 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 10 06:54:10.367399 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 10 06:54:10.367621 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 10 06:54:10.389120 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 10 06:54:10.477126 kernel: scsi host0: ahci Sep 10 06:54:10.477499 kernel: scsi host1: ahci Sep 10 06:54:10.477720 kernel: scsi host2: ahci Sep 10 06:54:10.477926 kernel: scsi host3: ahci Sep 10 06:54:10.478960 kernel: scsi host4: ahci Sep 10 06:54:10.479192 kernel: scsi host5: ahci Sep 10 06:54:10.479392 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 lpm-pol 1 Sep 10 06:54:10.479415 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 lpm-pol 1 Sep 10 06:54:10.479447 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 lpm-pol 1 Sep 10 06:54:10.479475 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 lpm-pol 1 Sep 10 06:54:10.479493 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 lpm-pol 1 Sep 10 06:54:10.479511 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 lpm-pol 1 Sep 10 06:54:10.486627 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 06:54:10.500008 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 10 06:54:10.512759 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Sep 10 06:54:10.523043 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 10 06:54:10.523853 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 10 06:54:10.527188 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 10 06:54:10.550084 disk-uuid[632]: Primary Header is updated. Sep 10 06:54:10.550084 disk-uuid[632]: Secondary Entries is updated. Sep 10 06:54:10.550084 disk-uuid[632]: Secondary Header is updated. Sep 10 06:54:10.557128 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 06:54:10.709178 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 10 06:54:10.711513 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 10 06:54:10.711553 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 10 06:54:10.714089 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 10 06:54:10.715730 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 10 06:54:10.718094 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 10 06:54:10.724121 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Sep 10 06:54:10.728376 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Sep 10 06:54:10.728624 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Sep 10 06:54:10.735146 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Sep 10 06:54:10.735399 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Sep 10 06:54:10.735623 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Sep 10 06:54:10.737154 kernel: hub 1-0:1.0: USB hub found Sep 10 06:54:10.739434 kernel: hub 1-0:1.0: 4 ports detected Sep 10 06:54:10.740483 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Sep 10 06:54:10.742132 kernel: hub 2-0:1.0: USB hub found Sep 10 06:54:10.744401 kernel: hub 2-0:1.0: 4 ports detected Sep 10 06:54:10.761103 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 10 06:54:10.763531 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 10 06:54:10.764351 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 06:54:10.766158 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 10 06:54:10.769020 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 10 06:54:10.799544 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 10 06:54:10.982160 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Sep 10 06:54:11.123114 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 10 06:54:11.129687 kernel: usbcore: registered new interface driver usbhid Sep 10 06:54:11.129780 kernel: usbhid: USB HID core driver Sep 10 06:54:11.137369 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Sep 10 06:54:11.137460 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Sep 10 06:54:11.571105 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 10 06:54:11.572300 disk-uuid[633]: The operation has completed successfully. Sep 10 06:54:11.637729 systemd[1]: disk-uuid.service: Deactivated successfully. 
Sep 10 06:54:11.637925 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 10 06:54:11.682760 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 10 06:54:11.699514 sh[659]: Success Sep 10 06:54:11.724394 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 10 06:54:11.724518 kernel: device-mapper: uevent: version 1.0.3 Sep 10 06:54:11.727695 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 10 06:54:11.742148 kernel: device-mapper: verity: sha256 using shash "sha256-avx" Sep 10 06:54:11.797723 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 10 06:54:11.799842 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 10 06:54:11.813723 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 10 06:54:11.828110 kernel: BTRFS: device fsid d8201365-420d-4e6d-a9af-b12a81c8fc98 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (671) Sep 10 06:54:11.832362 kernel: BTRFS info (device dm-0): first mount of filesystem d8201365-420d-4e6d-a9af-b12a81c8fc98 Sep 10 06:54:11.832458 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 10 06:54:11.845001 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 10 06:54:11.845171 kernel: BTRFS info (device dm-0): enabling free space tree Sep 10 06:54:11.847784 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 10 06:54:11.849864 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 10 06:54:11.850740 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 10 06:54:11.852942 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 10 06:54:11.855288 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 10 06:54:11.890687 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (705) Sep 10 06:54:11.894694 kernel: BTRFS info (device vda6): first mount of filesystem 44235b0d-89ef-44b4-a2ec-00ee2c04a5f6 Sep 10 06:54:11.894740 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 10 06:54:11.905172 kernel: BTRFS info (device vda6): turning on async discard Sep 10 06:54:11.905271 kernel: BTRFS info (device vda6): enabling free space tree Sep 10 06:54:11.913221 kernel: BTRFS info (device vda6): last unmount of filesystem 44235b0d-89ef-44b4-a2ec-00ee2c04a5f6 Sep 10 06:54:11.914304 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 10 06:54:11.917896 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 10 06:54:11.996347 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 10 06:54:12.005282 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 10 06:54:12.063435 systemd-networkd[840]: lo: Link UP Sep 10 06:54:12.064601 systemd-networkd[840]: lo: Gained carrier Sep 10 06:54:12.067926 systemd-networkd[840]: Enumeration completed Sep 10 06:54:12.068247 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 10 06:54:12.069572 systemd-networkd[840]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 10 06:54:12.069579 systemd-networkd[840]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 06:54:12.070486 systemd[1]: Reached target network.target - Network. Sep 10 06:54:12.072209 systemd-networkd[840]: eth0: Link UP Sep 10 06:54:12.072669 systemd-networkd[840]: eth0: Gained carrier Sep 10 06:54:12.072711 systemd-networkd[840]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 06:54:12.140855 ignition[759]: Ignition 2.22.0 Sep 10 06:54:12.140880 ignition[759]: Stage: fetch-offline Sep 10 06:54:12.140960 ignition[759]: no configs at "/usr/lib/ignition/base.d" Sep 10 06:54:12.142190 systemd-networkd[840]: eth0: DHCPv4 address 10.244.28.170/30, gateway 10.244.28.169 acquired from 10.244.28.169 Sep 10 06:54:12.140978 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 10 06:54:12.145606 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 10 06:54:12.141134 ignition[759]: parsed url from cmdline: "" Sep 10 06:54:12.141141 ignition[759]: no config URL provided Sep 10 06:54:12.141157 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" Sep 10 06:54:12.141172 ignition[759]: no config at "/usr/lib/ignition/user.ign" Sep 10 06:54:12.150290 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 10 06:54:12.141186 ignition[759]: failed to fetch config: resource requires networking Sep 10 06:54:12.141428 ignition[759]: Ignition finished successfully Sep 10 06:54:12.190187 ignition[850]: Ignition 2.22.0 Sep 10 06:54:12.190211 ignition[850]: Stage: fetch Sep 10 06:54:12.190418 ignition[850]: no configs at "/usr/lib/ignition/base.d" Sep 10 06:54:12.190439 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 10 06:54:12.190556 ignition[850]: parsed url from cmdline: "" Sep 10 06:54:12.190563 ignition[850]: no config URL provided Sep 10 06:54:12.190573 ignition[850]: reading system config file "/usr/lib/ignition/user.ign" Sep 10 06:54:12.190589 ignition[850]: no config at "/usr/lib/ignition/user.ign" Sep 10 06:54:12.190759 ignition[850]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Sep 10 06:54:12.191804 ignition[850]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Sep 10 06:54:12.191859 ignition[850]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Sep 10 06:54:12.205906 ignition[850]: GET result: OK Sep 10 06:54:12.206052 ignition[850]: parsing config with SHA512: 844e409d2b76a502dc6c8cefc680e43ad9cbb1927f9764364cbdd81dfa7804a5343afabd0f1a52c0d23c3be7f3517328e64333af648f6b896ad6c33603129979 Sep 10 06:54:12.216018 unknown[850]: fetched base config from "system" Sep 10 06:54:12.216036 unknown[850]: fetched base config from "system" Sep 10 06:54:12.216505 ignition[850]: fetch: fetch complete Sep 10 06:54:12.216046 unknown[850]: fetched user config from "openstack" Sep 10 06:54:12.216514 ignition[850]: fetch: fetch passed Sep 10 06:54:12.219118 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 10 06:54:12.216582 ignition[850]: Ignition finished successfully Sep 10 06:54:12.222247 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Sep 10 06:54:12.265850 ignition[857]: Ignition 2.22.0 Sep 10 06:54:12.265875 ignition[857]: Stage: kargs Sep 10 06:54:12.266051 ignition[857]: no configs at "/usr/lib/ignition/base.d" Sep 10 06:54:12.266095 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 10 06:54:12.267481 ignition[857]: kargs: kargs passed Sep 10 06:54:12.271026 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 10 06:54:12.267551 ignition[857]: Ignition finished successfully Sep 10 06:54:12.275242 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 10 06:54:12.327462 ignition[863]: Ignition 2.22.0 Sep 10 06:54:12.328259 ignition[863]: Stage: disks Sep 10 06:54:12.328465 ignition[863]: no configs at "/usr/lib/ignition/base.d" Sep 10 06:54:12.328484 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 10 06:54:12.329463 ignition[863]: disks: disks passed Sep 10 06:54:12.332656 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 10 06:54:12.329533 ignition[863]: Ignition finished successfully Sep 10 06:54:12.334313 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 10 06:54:12.335672 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 10 06:54:12.337307 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 10 06:54:12.338919 systemd[1]: Reached target sysinit.target - System Initialization. Sep 10 06:54:12.340321 systemd[1]: Reached target basic.target - Basic System. Sep 10 06:54:12.343178 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 10 06:54:12.372601 systemd-fsck[872]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Sep 10 06:54:12.377183 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 10 06:54:12.379931 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 10 06:54:12.512176 kernel: EXT4-fs (vda9): mounted filesystem 8812db3a-0650-4908-b2d8-56c2f0883ee2 r/w with ordered data mode. Quota mode: none. Sep 10 06:54:12.513877 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 10 06:54:12.515351 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 10 06:54:12.518026 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 10 06:54:12.519945 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 10 06:54:12.521871 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 10 06:54:12.525023 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Sep 10 06:54:12.527492 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 10 06:54:12.527544 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 10 06:54:12.541273 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 10 06:54:12.551125 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (880) Sep 10 06:54:12.551043 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 10 06:54:12.558649 kernel: BTRFS info (device vda6): first mount of filesystem 44235b0d-89ef-44b4-a2ec-00ee2c04a5f6 Sep 10 06:54:12.559366 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 10 06:54:12.567668 kernel: BTRFS info (device vda6): turning on async discard Sep 10 06:54:12.567754 kernel: BTRFS info (device vda6): enabling free space tree Sep 10 06:54:12.574849 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 10 06:54:12.620109 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Sep 10 06:54:12.646364 initrd-setup-root[909]: cut: /sysroot/etc/passwd: No such file or directory Sep 10 06:54:12.652736 initrd-setup-root[916]: cut: /sysroot/etc/group: No such file or directory Sep 10 06:54:12.658790 initrd-setup-root[923]: cut: /sysroot/etc/shadow: No such file or directory Sep 10 06:54:12.664495 initrd-setup-root[930]: cut: /sysroot/etc/gshadow: No such file or directory Sep 10 06:54:12.787407 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 10 06:54:12.789917 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 10 06:54:12.792312 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 10 06:54:12.820342 kernel: BTRFS info (device vda6): last unmount of filesystem 44235b0d-89ef-44b4-a2ec-00ee2c04a5f6 Sep 10 06:54:12.830249 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 10 06:54:12.839253 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 10 06:54:12.866499 ignition[999]: INFO : Ignition 2.22.0 Sep 10 06:54:12.866499 ignition[999]: INFO : Stage: mount Sep 10 06:54:12.868403 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 06:54:12.868403 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 10 06:54:12.868403 ignition[999]: INFO : mount: mount passed Sep 10 06:54:12.868403 ignition[999]: INFO : Ignition finished successfully Sep 10 06:54:12.869889 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 10 06:54:13.124454 systemd-networkd[840]: eth0: Gained IPv6LL Sep 10 06:54:13.655109 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Sep 10 06:54:14.636321 systemd-networkd[840]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:72a:24:19ff:fef4:1caa/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:72a:24:19ff:fef4:1caa/64 assigned by NDisc. Sep 10 06:54:14.636339 systemd-networkd[840]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Sep 10 06:54:15.666123 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Sep 10 06:54:19.675120 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Sep 10 06:54:19.683475 coreos-metadata[882]: Sep 10 06:54:19.683 WARN failed to locate config-drive, using the metadata service API instead Sep 10 06:54:19.708514 coreos-metadata[882]: Sep 10 06:54:19.708 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Sep 10 06:54:19.721049 coreos-metadata[882]: Sep 10 06:54:19.721 INFO Fetch successful Sep 10 06:54:19.722362 coreos-metadata[882]: Sep 10 06:54:19.722 INFO wrote hostname srv-fpwqg.gb1.brightbox.com to /sysroot/etc/hostname Sep 10 06:54:19.725728 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Sep 10 06:54:19.725931 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. 
Sep 10 06:54:19.730912 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 10 06:54:19.768678 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 10 06:54:19.806159 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1015) Sep 10 06:54:19.811218 kernel: BTRFS info (device vda6): first mount of filesystem 44235b0d-89ef-44b4-a2ec-00ee2c04a5f6 Sep 10 06:54:19.811264 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 10 06:54:19.816443 kernel: BTRFS info (device vda6): turning on async discard Sep 10 06:54:19.816498 kernel: BTRFS info (device vda6): enabling free space tree Sep 10 06:54:19.820597 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 10 06:54:19.862694 ignition[1033]: INFO : Ignition 2.22.0 Sep 10 06:54:19.862694 ignition[1033]: INFO : Stage: files Sep 10 06:54:19.864612 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 06:54:19.864612 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 10 06:54:19.864612 ignition[1033]: DEBUG : files: compiled without relabeling support, skipping Sep 10 06:54:19.867528 ignition[1033]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 10 06:54:19.867528 ignition[1033]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 10 06:54:19.875443 ignition[1033]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 10 06:54:19.875443 ignition[1033]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 10 06:54:19.875443 ignition[1033]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 10 06:54:19.873366 unknown[1033]: wrote ssh authorized keys file for user: core Sep 10 06:54:19.879391 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 10 06:54:19.879391 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 10 06:54:20.407759 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 10 06:54:23.385118 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 10 06:54:23.385118 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 10 06:54:23.389238 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 10 06:54:23.389238 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 10 06:54:23.389238 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 10 06:54:23.389238 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 10 06:54:23.389238 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 10 06:54:23.389238 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 10 06:54:23.389238 
ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 10 06:54:23.398477 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 10 06:54:23.398477 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 10 06:54:23.398477 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 10 06:54:23.398477 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 10 06:54:23.398477 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 10 06:54:23.398477 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 10 06:54:23.899744 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 10 06:54:25.756010 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 10 06:54:25.760134 ignition[1033]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 10 06:54:25.760134 ignition[1033]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 10 06:54:25.767344 ignition[1033]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 10 06:54:25.767344 ignition[1033]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 10 06:54:25.771359 ignition[1033]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 10 06:54:25.771359 ignition[1033]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 10 06:54:25.771359 ignition[1033]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 10 06:54:25.771359 ignition[1033]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 10 06:54:25.771359 ignition[1033]: INFO : files: files passed Sep 10 06:54:25.771359 ignition[1033]: INFO : Ignition finished successfully Sep 10 06:54:25.772333 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 10 06:54:25.777619 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 10 06:54:25.785177 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 10 06:54:25.805534 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 10 06:54:25.806683 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Sep 10 06:54:25.818334 initrd-setup-root-after-ignition[1066]: grep: Sep 10 06:54:25.819998 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 10 06:54:25.819998 initrd-setup-root-after-ignition[1062]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 10 06:54:25.822335 initrd-setup-root-after-ignition[1066]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 10 06:54:25.821468 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 10 06:54:25.823726 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 10 06:54:25.827131 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 10 06:54:25.881184 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 10 06:54:25.881401 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 10 06:54:25.883258 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 10 06:54:25.884514 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 10 06:54:25.886100 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 10 06:54:25.887429 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 10 06:54:25.916888 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 10 06:54:25.919855 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 10 06:54:25.945829 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 10 06:54:25.947756 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 06:54:25.948715 systemd[1]: Stopped target timers.target - Timer Units. Sep 10 06:54:25.950342 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 10 06:54:25.950534 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 10 06:54:25.952443 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 10 06:54:25.953483 systemd[1]: Stopped target basic.target - Basic System. Sep 10 06:54:25.955018 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 10 06:54:25.956353 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 10 06:54:25.957959 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 10 06:54:25.959454 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 10 06:54:25.961147 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 10 06:54:25.962799 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 10 06:54:25.964583 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 10 06:54:25.966001 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 10 06:54:25.967767 systemd[1]: Stopped target swap.target - Swaps. Sep 10 06:54:25.969201 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 10 06:54:25.969509 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 10 06:54:25.971159 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 10 06:54:25.972918 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Sep 10 06:54:25.974623 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 10 06:54:25.974811 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 10 06:54:25.976439 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 10 06:54:25.976692 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 10 06:54:25.978729 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 10 06:54:25.979037 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 10 06:54:25.980757 systemd[1]: ignition-files.service: Deactivated successfully. Sep 10 06:54:25.980934 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 10 06:54:25.990373 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 10 06:54:25.991150 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 10 06:54:25.993268 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 06:54:25.998309 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 10 06:54:25.999832 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 10 06:54:26.000064 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 06:54:26.001031 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 10 06:54:26.003056 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 10 06:54:26.010978 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 10 06:54:26.013469 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 10 06:54:26.041115 ignition[1086]: INFO : Ignition 2.22.0 Sep 10 06:54:26.041115 ignition[1086]: INFO : Stage: umount Sep 10 06:54:26.041115 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 06:54:26.041115 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 10 06:54:26.041016 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 10 06:54:26.049914 ignition[1086]: INFO : umount: umount passed Sep 10 06:54:26.049914 ignition[1086]: INFO : Ignition finished successfully Sep 10 06:54:26.045690 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 10 06:54:26.046196 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 10 06:54:26.047565 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 10 06:54:26.047640 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 10 06:54:26.048360 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 10 06:54:26.048439 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 10 06:54:26.049157 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 10 06:54:26.049241 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 10 06:54:26.050618 systemd[1]: Stopped target network.target - Network. Sep 10 06:54:26.051914 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 10 06:54:26.052061 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 10 06:54:26.053563 systemd[1]: Stopped target paths.target - Path Units. Sep 10 06:54:26.054870 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 10 06:54:26.058190 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Sep 10 06:54:26.059611 systemd[1]: Stopped target slices.target - Slice Units. Sep 10 06:54:26.061136 systemd[1]: Stopped target sockets.target - Socket Units. Sep 10 06:54:26.062576 systemd[1]: iscsid.socket: Deactivated successfully. Sep 10 06:54:26.062654 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 10 06:54:26.064261 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 10 06:54:26.064318 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 10 06:54:26.065830 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 10 06:54:26.065919 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 10 06:54:26.067418 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 10 06:54:26.067487 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 10 06:54:26.069015 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 10 06:54:26.071099 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 10 06:54:26.072302 systemd-networkd[840]: eth0: DHCPv6 lease lost Sep 10 06:54:26.075433 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 10 06:54:26.075597 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 10 06:54:26.077021 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 10 06:54:26.078628 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 10 06:54:26.080441 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 10 06:54:26.080660 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 10 06:54:26.084206 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 10 06:54:26.084543 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 10 06:54:26.084729 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 10 06:54:26.090879 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 10 06:54:26.091978 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 10 06:54:26.093609 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 10 06:54:26.093670 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 10 06:54:26.096363 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 10 06:54:26.097700 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 10 06:54:26.097778 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 10 06:54:26.100528 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 10 06:54:26.100616 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 10 06:54:26.103548 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 10 06:54:26.103651 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 10 06:54:26.105260 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 10 06:54:26.105339 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 10 06:54:26.107022 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 06:54:26.110766 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 10 06:54:26.110868 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. 
Sep 10 06:54:26.118266 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 10 06:54:26.118542 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 06:54:26.120899 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 10 06:54:26.120969 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 10 06:54:26.123530 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 10 06:54:26.123589 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 06:54:26.125849 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 10 06:54:26.125932 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 10 06:54:26.127668 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 10 06:54:26.127742 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 10 06:54:26.129038 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 10 06:54:26.129149 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 10 06:54:26.134823 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 10 06:54:26.137252 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 10 06:54:26.137392 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 10 06:54:26.139257 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 10 06:54:26.139343 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 06:54:26.141613 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 10 06:54:26.141713 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 06:54:26.146452 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 10 06:54:26.146550 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 10 06:54:26.146622 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 10 06:54:26.147329 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 10 06:54:26.151692 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 10 06:54:26.162971 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 10 06:54:26.163188 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 10 06:54:26.164914 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 10 06:54:26.167220 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 10 06:54:26.198281 systemd[1]: Switching root. Sep 10 06:54:26.252910 systemd-journald[230]: Journal stopped Sep 10 06:54:27.961589 systemd-journald[230]: Received SIGTERM from PID 1 (systemd). 
Sep 10 06:54:27.961846 kernel: SELinux: policy capability network_peer_controls=1 Sep 10 06:54:27.961906 kernel: SELinux: policy capability open_perms=1 Sep 10 06:54:27.961971 kernel: SELinux: policy capability extended_socket_class=1 Sep 10 06:54:27.962005 kernel: SELinux: policy capability always_check_network=0 Sep 10 06:54:27.962025 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 10 06:54:27.962056 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 10 06:54:27.963122 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 10 06:54:27.963172 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 10 06:54:27.963205 kernel: SELinux: policy capability userspace_initial_context=0 Sep 10 06:54:27.963238 kernel: audit: type=1403 audit(1757487266.673:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 10 06:54:27.963273 systemd[1]: Successfully loaded SELinux policy in 79.303ms. Sep 10 06:54:27.963333 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.320ms. Sep 10 06:54:27.963362 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 10 06:54:27.963389 systemd[1]: Detected virtualization kvm. Sep 10 06:54:27.963415 systemd[1]: Detected architecture x86-64. Sep 10 06:54:27.963444 systemd[1]: Detected first boot. Sep 10 06:54:27.963470 systemd[1]: Hostname set to <srv-fpwqg.gb1.brightbox.com>. Sep 10 06:54:27.963500 systemd[1]: Initializing machine ID from VM UUID. Sep 10 06:54:27.963521 zram_generator::config[1130]: No configuration found. Sep 10 06:54:27.963547 kernel: Guest personality initialized and is inactive Sep 10 06:54:27.963572 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Sep 10 06:54:27.963600 kernel: Initialized host personality Sep 10 06:54:27.963619 kernel: NET: Registered PF_VSOCK protocol family Sep 10 06:54:27.963638 systemd[1]: Populated /etc with preset unit settings. Sep 10 06:54:27.963669 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 10 06:54:27.963697 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 10 06:54:27.963718 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 10 06:54:27.963744 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 10 06:54:27.963773 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 10 06:54:27.963795 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 10 06:54:27.963815 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 10 06:54:27.963834 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 10 06:54:27.963855 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 10 06:54:27.963887 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 10 06:54:27.963910 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 10 06:54:27.963939 systemd[1]: Created slice user.slice - User and Session Slice. Sep 10 06:54:27.963965 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 06:54:27.963985 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 10 06:54:27.964005 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 10 06:54:27.964037 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 10 06:54:27.964059 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 10 06:54:27.964100 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 10 06:54:27.964145 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 10 06:54:27.964170 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 10 06:54:27.964203 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 10 06:54:27.964226 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 10 06:54:27.964246 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 10 06:54:27.964302 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 10 06:54:27.964330 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 10 06:54:27.964351 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 06:54:27.964381 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 10 06:54:27.964410 systemd[1]: Reached target slices.target - Slice Units. Sep 10 06:54:27.964431 systemd[1]: Reached target swap.target - Swaps. Sep 10 06:54:27.964458 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 10 06:54:27.964486 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 10 06:54:27.964507 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 10 06:54:27.964527 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 10 06:54:27.964556 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 10 06:54:27.964583 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 06:54:27.964610 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 10 06:54:27.964638 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 10 06:54:27.964659 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 10 06:54:27.964691 systemd[1]: Mounting media.mount - External Media Directory... Sep 10 06:54:27.964716 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 06:54:27.964748 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 10 06:54:27.964775 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 10 06:54:27.964796 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 10 06:54:27.964817 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 10 06:54:27.964837 systemd[1]: Reached target machines.target - Containers. Sep 10 06:54:27.964857 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Sep 10 06:54:27.964877 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 06:54:27.964910 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 10 06:54:27.964931 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 10 06:54:27.964951 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 06:54:27.964971 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 10 06:54:27.964991 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 06:54:27.965011 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 10 06:54:27.965037 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 06:54:27.966114 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 10 06:54:27.966170 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 10 06:54:27.966195 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 10 06:54:27.966224 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 10 06:54:27.966246 systemd[1]: Stopped systemd-fsck-usr.service. Sep 10 06:54:27.966267 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 10 06:54:27.966294 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 10 06:54:27.966316 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 10 06:54:27.966336 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 10 06:54:27.966362 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 10 06:54:27.966395 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 10 06:54:27.966429 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 10 06:54:27.966461 systemd[1]: verity-setup.service: Deactivated successfully. Sep 10 06:54:27.966489 systemd[1]: Stopped verity-setup.service. Sep 10 06:54:27.966511 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 06:54:27.966532 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 10 06:54:27.966557 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 10 06:54:27.966578 systemd[1]: Mounted media.mount - External Media Directory. Sep 10 06:54:27.966604 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 10 06:54:27.966630 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 10 06:54:27.966663 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 10 06:54:27.966697 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 10 06:54:27.966717 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 06:54:27.966737 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Sep 10 06:54:27.966756 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 10 06:54:27.966794 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 06:54:27.966816 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 06:54:27.966842 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 06:54:27.966869 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 06:54:27.966890 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 10 06:54:27.966910 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 10 06:54:27.966987 systemd-journald[1227]: Collecting audit messages is disabled. Sep 10 06:54:27.967056 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 10 06:54:27.967099 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 10 06:54:27.970945 systemd-journald[1227]: Journal started Sep 10 06:54:27.971013 systemd-journald[1227]: Runtime Journal (/run/log/journal/78dc5c1b89324ebb9ebad6f4119d1964) is 4.7M, max 38.2M, 33.4M free. Sep 10 06:54:27.971098 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 10 06:54:27.548161 systemd[1]: Queued start job for default target multi-user.target. Sep 10 06:54:27.573179 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 10 06:54:27.976215 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 10 06:54:27.574026 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 10 06:54:27.980402 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 10 06:54:27.981327 kernel: loop: module loaded Sep 10 06:54:27.988236 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 10 06:54:27.992768 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 10 06:54:27.992819 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 06:54:27.999171 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 10 06:54:28.002116 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 06:54:28.011560 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 10 06:54:28.018107 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 10 06:54:28.024104 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 10 06:54:28.024178 kernel: fuse: init (API version 7.41) Sep 10 06:54:28.039141 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 10 06:54:28.046168 systemd[1]: Started systemd-journald.service - Journal Service. Sep 10 06:54:28.051109 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 06:54:28.051467 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 06:54:28.053032 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 10 06:54:28.055406 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Sep 10 06:54:28.076677 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 10 06:54:28.077036 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 10 06:54:28.095860 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 10 06:54:28.100375 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 10 06:54:28.102106 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 10 06:54:28.110109 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 10 06:54:28.127341 systemd-journald[1227]: Time spent on flushing to /var/log/journal/78dc5c1b89324ebb9ebad6f4119d1964 is 115.839ms for 1161 entries. Sep 10 06:54:28.127341 systemd-journald[1227]: System Journal (/var/log/journal/78dc5c1b89324ebb9ebad6f4119d1964) is 8M, max 584.8M, 576.8M free. Sep 10 06:54:28.295296 systemd-journald[1227]: Received client request to flush runtime journal. Sep 10 06:54:28.295380 kernel: loop0: detected capacity change from 0 to 128016 Sep 10 06:54:28.295423 kernel: ACPI: bus type drm_connector registered Sep 10 06:54:28.295522 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 10 06:54:28.295637 kernel: loop1: detected capacity change from 0 to 110984 Sep 10 06:54:28.136474 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 10 06:54:28.143930 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 10 06:54:28.144972 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 10 06:54:28.148740 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 10 06:54:28.218861 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 06:54:28.219444 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 10 06:54:28.227435 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 10 06:54:28.237058 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 10 06:54:28.300047 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 10 06:54:28.307955 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 10 06:54:28.338323 systemd-tmpfiles[1281]: ACLs are not supported, ignoring. Sep 10 06:54:28.338351 systemd-tmpfiles[1281]: ACLs are not supported, ignoring. Sep 10 06:54:28.344100 kernel: loop2: detected capacity change from 0 to 8 Sep 10 06:54:28.347887 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 06:54:28.371112 kernel: loop3: detected capacity change from 0 to 224512 Sep 10 06:54:28.417201 kernel: loop4: detected capacity change from 0 to 128016 Sep 10 06:54:28.448200 kernel: loop5: detected capacity change from 0 to 110984 Sep 10 06:54:28.471110 kernel: loop6: detected capacity change from 0 to 8 Sep 10 06:54:28.478102 kernel: loop7: detected capacity change from 0 to 224512 Sep 10 06:54:28.505036 (sd-merge)[1290]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Sep 10 06:54:28.505926 (sd-merge)[1290]: Merged extensions into '/usr'. Sep 10 06:54:28.516466 systemd[1]: Reload requested from client PID 1248 ('systemd-sysext') (unit systemd-sysext.service)... Sep 10 06:54:28.516502 systemd[1]: Reloading... 
Sep 10 06:54:28.639204 zram_generator::config[1317]: No configuration found. Sep 10 06:54:28.990921 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 10 06:54:28.991844 systemd[1]: Reloading finished in 474 ms. Sep 10 06:54:29.026437 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 06:54:29.028344 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 10 06:54:29.046208 systemd[1]: Starting ensure-sysext.service... Sep 10 06:54:29.050487 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 10 06:54:29.082242 systemd[1]: Reload requested from client PID 1373 ('systemctl') (unit ensure-sysext.service)... Sep 10 06:54:29.082269 systemd[1]: Reloading... Sep 10 06:54:29.093189 ldconfig[1244]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 10 06:54:29.124313 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 10 06:54:29.124378 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 10 06:54:29.124829 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 10 06:54:29.125344 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 10 06:54:29.128104 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 10 06:54:29.131416 systemd-tmpfiles[1374]: ACLs are not supported, ignoring. Sep 10 06:54:29.131527 systemd-tmpfiles[1374]: ACLs are not supported, ignoring. Sep 10 06:54:29.152747 systemd-tmpfiles[1374]: Detected autofs mount point /boot during canonicalization of boot. Sep 10 06:54:29.152765 systemd-tmpfiles[1374]: Skipping /boot Sep 10 06:54:29.159133 zram_generator::config[1399]: No configuration found. Sep 10 06:54:29.197056 systemd-tmpfiles[1374]: Detected autofs mount point /boot during canonicalization of boot. Sep 10 06:54:29.197290 systemd-tmpfiles[1374]: Skipping /boot Sep 10 06:54:29.488085 systemd[1]: Reloading finished in 405 ms. Sep 10 06:54:29.502002 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 10 06:54:29.503337 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 10 06:54:29.516851 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 10 06:54:29.529039 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 10 06:54:29.534421 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 10 06:54:29.544573 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 10 06:54:29.550185 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 10 06:54:29.554475 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 06:54:29.558507 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 10 06:54:29.564185 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 06:54:29.564475 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Sep 10 06:54:29.567240 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 06:54:29.579330 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 06:54:29.585522 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 06:54:29.586407 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 06:54:29.586573 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 10 06:54:29.586719 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 06:54:29.592805 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 06:54:29.593161 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 06:54:29.593409 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 06:54:29.593548 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 10 06:54:29.597873 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 10 06:54:29.600144 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 06:54:29.604515 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 06:54:29.604831 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 06:54:29.614226 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 06:54:29.614588 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 06:54:29.624448 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 10 06:54:29.627848 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 06:54:29.628833 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 06:54:29.628982 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 10 06:54:29.630261 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 06:54:29.631802 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 06:54:29.632191 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 06:54:29.644560 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 06:54:29.645522 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Sep 10 06:54:29.652254 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 10 06:54:29.655530 systemd[1]: Finished ensure-sysext.service. Sep 10 06:54:29.662620 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 10 06:54:29.668357 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 10 06:54:29.680831 systemd-udevd[1466]: Using default interface naming scheme 'v255'. Sep 10 06:54:29.703012 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 10 06:54:29.709377 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 10 06:54:29.723347 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 06:54:29.725968 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 10 06:54:29.728669 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 06:54:29.729659 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 06:54:29.732624 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 06:54:29.749872 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 10 06:54:29.763055 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 10 06:54:29.771136 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 10 06:54:29.773177 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 10 06:54:29.785400 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 06:54:29.794449 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 10 06:54:29.797454 augenrules[1513]: No rules Sep 10 06:54:29.797968 systemd[1]: audit-rules.service: Deactivated successfully. Sep 10 06:54:29.799171 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 10 06:54:29.951812 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 10 06:54:29.963046 systemd[1]: Reached target time-set.target - System Time Set. Sep 10 06:54:29.987372 systemd-networkd[1510]: lo: Link UP Sep 10 06:54:29.987395 systemd-networkd[1510]: lo: Gained carrier Sep 10 06:54:29.991172 systemd-networkd[1510]: Enumeration completed Sep 10 06:54:29.991413 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 10 06:54:29.996406 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 10 06:54:30.002608 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 10 06:54:30.059920 systemd-resolved[1465]: Positive Trust Anchors: Sep 10 06:54:30.059940 systemd-resolved[1465]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 10 06:54:30.059987 systemd-resolved[1465]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 10 06:54:30.095322 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 10 06:54:30.112915 systemd-resolved[1465]: Using system hostname 'srv-fpwqg.gb1.brightbox.com'. Sep 10 06:54:30.116161 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 10 06:54:30.118316 systemd[1]: Reached target network.target - Network. Sep 10 06:54:30.118983 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 10 06:54:30.120221 systemd[1]: Reached target sysinit.target - System Initialization. Sep 10 06:54:30.122279 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 10 06:54:30.123157 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 10 06:54:30.123927 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Sep 10 06:54:30.125500 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 10 06:54:30.126798 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 10 06:54:30.129493 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 10 06:54:30.130282 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 10 06:54:30.130343 systemd[1]: Reached target paths.target - Path Units. Sep 10 06:54:30.131041 systemd[1]: Reached target timers.target - Timer Units. Sep 10 06:54:30.133140 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 10 06:54:30.137341 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 10 06:54:30.145946 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 10 06:54:30.147113 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 10 06:54:30.148012 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 10 06:54:30.156370 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 10 06:54:30.168040 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 10 06:54:30.171013 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 10 06:54:30.175013 systemd[1]: Reached target sockets.target - Socket Units. Sep 10 06:54:30.176481 systemd[1]: Reached target basic.target - Basic System. Sep 10 06:54:30.177561 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 10 06:54:30.177699 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
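With resolved up and the DNSSEC root trust anchor plus the default negative trust anchors loaded as shown above, the resolver state can be checked with resolvectl; a small sketch, using the hostname picked up in the log:

  resolvectl status                               # per-link DNS servers, search domains, DNSSEC setting
  resolvectl query srv-fpwqg.gb1.brightbox.com    # resolve the hostname systemd-resolved reported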
Sep 10 06:54:30.180558 systemd[1]: Starting containerd.service - containerd container runtime... Sep 10 06:54:30.184451 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 10 06:54:30.187720 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 10 06:54:30.189977 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 10 06:54:30.196375 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 10 06:54:30.203444 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 10 06:54:30.205203 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 10 06:54:30.209111 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Sep 10 06:54:30.221829 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 10 06:54:30.232420 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 10 06:54:30.239678 jq[1552]: false Sep 10 06:54:30.240436 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 10 06:54:30.244616 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Sep 10 06:54:30.249537 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 10 06:54:30.257479 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Refreshing passwd entry cache Sep 10 06:54:30.257426 oslogin_cache_refresh[1554]: Refreshing passwd entry cache Sep 10 06:54:30.260973 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 10 06:54:30.263508 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 10 06:54:30.264684 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 10 06:54:30.269108 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Failure getting users, quitting Sep 10 06:54:30.269108 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 10 06:54:30.269108 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Refreshing group entry cache Sep 10 06:54:30.267371 systemd[1]: Starting update-engine.service - Update Engine... Sep 10 06:54:30.265656 oslogin_cache_refresh[1554]: Failure getting users, quitting Sep 10 06:54:30.265741 oslogin_cache_refresh[1554]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Sep 10 06:54:30.265822 oslogin_cache_refresh[1554]: Refreshing group entry cache Sep 10 06:54:30.275377 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Failure getting groups, quitting Sep 10 06:54:30.275377 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 10 06:54:30.271716 oslogin_cache_refresh[1554]: Failure getting groups, quitting Sep 10 06:54:30.271735 oslogin_cache_refresh[1554]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Sep 10 06:54:30.280333 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 10 06:54:30.287891 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Sep 10 06:54:30.289320 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 10 06:54:30.290183 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 10 06:54:30.290615 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Sep 10 06:54:30.295357 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Sep 10 06:54:30.315041 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 10 06:54:30.315479 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 10 06:54:30.321857 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 10 06:54:30.342415 extend-filesystems[1553]: Found /dev/vda6 Sep 10 06:54:30.374222 extend-filesystems[1553]: Found /dev/vda9 Sep 10 06:54:30.377138 jq[1563]: true Sep 10 06:54:30.384753 (ntainerd)[1582]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 10 06:54:30.412236 extend-filesystems[1553]: Checking size of /dev/vda9 Sep 10 06:54:30.417233 tar[1569]: linux-amd64/LICENSE Sep 10 06:54:30.417233 tar[1569]: linux-amd64/helm Sep 10 06:54:30.392881 systemd[1]: motdgen.service: Deactivated successfully. Sep 10 06:54:30.393394 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 10 06:54:30.451362 update_engine[1561]: I20250910 06:54:30.438982 1561 main.cc:92] Flatcar Update Engine starting Sep 10 06:54:30.462026 jq[1590]: true Sep 10 06:54:30.476873 dbus-daemon[1550]: [system] SELinux support is enabled Sep 10 06:54:30.477298 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 10 06:54:30.485119 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 10 06:54:30.485185 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 10 06:54:30.491756 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 10 06:54:30.491801 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 10 06:54:30.498512 systemd-logind[1560]: New seat seat0. Sep 10 06:54:30.500180 systemd[1]: Started systemd-logind.service - User Login Management. Sep 10 06:54:30.536677 extend-filesystems[1553]: Resized partition /dev/vda9 Sep 10 06:54:30.536225 systemd[1]: Started update-engine.service - Update Engine. Sep 10 06:54:30.541624 update_engine[1561]: I20250910 06:54:30.536998 1561 update_check_scheduler.cc:74] Next update check in 7m11s Sep 10 06:54:30.550270 extend-filesystems[1605]: resize2fs 1.47.3 (8-Jul-2025) Sep 10 06:54:30.560781 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Sep 10 06:54:30.562981 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 10 06:54:30.639511 systemd-networkd[1510]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 06:54:30.640436 systemd-networkd[1510]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
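The extend-filesystems and EXT4-fs messages above record the root filesystem on /dev/vda9 being grown online from 1617920 to 15121403 4k blocks. A minimal sketch of the equivalent manual steps, assuming an ext4 root already mounted read-write:

  lsblk /dev/vda          # confirm the partition layout (vda6, vda9 as found above)
  resize2fs /dev/vda9     # grow the mounted ext4 filesystem to fill its partition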
Sep 10 06:54:30.646694 systemd-networkd[1510]: eth0: Link UP Sep 10 06:54:30.647008 systemd-networkd[1510]: eth0: Gained carrier Sep 10 06:54:30.647035 systemd-networkd[1510]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 06:54:30.695392 dbus-daemon[1550]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1510 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 10 06:54:30.696759 systemd-networkd[1510]: eth0: DHCPv4 address 10.244.28.170/30, gateway 10.244.28.169 acquired from 10.244.28.169 Sep 10 06:54:30.698217 systemd-timesyncd[1484]: Network configuration changed, trying to establish connection. Sep 10 06:54:30.784387 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 10 06:54:30.868715 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 10 06:54:30.887450 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 10 06:54:30.896002 bash[1619]: Updated "/home/core/.ssh/authorized_keys" Sep 10 06:54:30.897704 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 10 06:54:30.910682 systemd[1]: Starting sshkeys.service... Sep 10 06:54:30.911280 kernel: mousedev: PS/2 mouse device common for all mice Sep 10 06:54:30.957753 locksmithd[1604]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 10 06:54:30.961483 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 10 06:54:31.020145 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 10 06:54:31.028285 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 10 06:54:31.060185 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Sep 10 06:54:31.079851 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 10 06:54:31.085894 dbus-daemon[1550]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 10 06:54:31.090967 containerd[1582]: time="2025-09-10T06:54:31Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 10 06:54:31.091234 dbus-daemon[1550]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1626 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 10 06:54:31.100147 containerd[1582]: time="2025-09-10T06:54:31.099901210Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 10 06:54:31.106700 systemd[1]: Starting polkit.service - Authorization Manager... 
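The entries above show systemd-networkd matching eth0 against the shipped zz-default.network unit and acquiring 10.244.28.170/30 over DHCPv4. A sketch of a drop-in unit that would request the same thing explicitly; the file name is hypothetical and only illustrates the mechanism:

  printf '[Match]\nName=eth0\n\n[Network]\nDHCP=ipv4\n' \
      > /etc/systemd/network/10-eth0.network
  networkctl reload       # have systemd-networkd re-read its configuration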
Sep 10 06:54:31.166412 containerd[1582]: time="2025-09-10T06:54:31.166204643Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="24.972µs" Sep 10 06:54:31.166412 containerd[1582]: time="2025-09-10T06:54:31.166256080Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 10 06:54:31.166412 containerd[1582]: time="2025-09-10T06:54:31.166289717Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 10 06:54:31.166942 containerd[1582]: time="2025-09-10T06:54:31.166645622Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 10 06:54:31.166942 containerd[1582]: time="2025-09-10T06:54:31.166679937Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 10 06:54:31.166942 containerd[1582]: time="2025-09-10T06:54:31.166726545Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 10 06:54:31.166942 containerd[1582]: time="2025-09-10T06:54:31.166832337Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 10 06:54:31.166942 containerd[1582]: time="2025-09-10T06:54:31.166853477Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 10 06:54:31.178478 containerd[1582]: time="2025-09-10T06:54:31.178278016Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 10 06:54:31.178478 containerd[1582]: time="2025-09-10T06:54:31.178330692Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 10 06:54:31.178478 containerd[1582]: time="2025-09-10T06:54:31.178355268Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 10 06:54:31.178478 containerd[1582]: time="2025-09-10T06:54:31.178373318Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 10 06:54:31.178737 containerd[1582]: time="2025-09-10T06:54:31.178565499Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 10 06:54:31.179057 containerd[1582]: time="2025-09-10T06:54:31.179025728Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 10 06:54:31.179173 containerd[1582]: time="2025-09-10T06:54:31.179140902Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 10 06:54:31.179372 containerd[1582]: time="2025-09-10T06:54:31.179171419Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 10 06:54:31.179372 containerd[1582]: time="2025-09-10T06:54:31.179236820Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 10 06:54:31.180123 containerd[1582]: 
time="2025-09-10T06:54:31.179679087Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 10 06:54:31.180123 containerd[1582]: time="2025-09-10T06:54:31.179776219Z" level=info msg="metadata content store policy set" policy=shared Sep 10 06:54:31.188114 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Sep 10 06:54:31.227383 containerd[1582]: time="2025-09-10T06:54:31.226980161Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 10 06:54:31.227383 containerd[1582]: time="2025-09-10T06:54:31.227125946Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 10 06:54:31.227383 containerd[1582]: time="2025-09-10T06:54:31.227161996Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 10 06:54:31.227383 containerd[1582]: time="2025-09-10T06:54:31.227184175Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 10 06:54:31.227383 containerd[1582]: time="2025-09-10T06:54:31.227207190Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 10 06:54:31.227383 containerd[1582]: time="2025-09-10T06:54:31.227225460Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 10 06:54:31.227383 containerd[1582]: time="2025-09-10T06:54:31.227262954Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 10 06:54:31.227383 containerd[1582]: time="2025-09-10T06:54:31.227297808Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 10 06:54:31.227383 containerd[1582]: time="2025-09-10T06:54:31.227318698Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 10 06:54:31.227383 containerd[1582]: time="2025-09-10T06:54:31.227337622Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 10 06:54:31.227383 containerd[1582]: time="2025-09-10T06:54:31.227353672Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 10 06:54:31.228922 containerd[1582]: time="2025-09-10T06:54:31.227390385Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 10 06:54:31.228922 containerd[1582]: time="2025-09-10T06:54:31.227631860Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 10 06:54:31.228922 containerd[1582]: time="2025-09-10T06:54:31.227681394Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 10 06:54:31.228922 containerd[1582]: time="2025-09-10T06:54:31.227708488Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 10 06:54:31.228922 containerd[1582]: time="2025-09-10T06:54:31.227739477Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 10 06:54:31.228922 containerd[1582]: time="2025-09-10T06:54:31.227758323Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 10 06:54:31.228922 containerd[1582]: time="2025-09-10T06:54:31.227775352Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images 
type=io.containerd.grpc.v1 Sep 10 06:54:31.228922 containerd[1582]: time="2025-09-10T06:54:31.227792029Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 10 06:54:31.228922 containerd[1582]: time="2025-09-10T06:54:31.227815007Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 10 06:54:31.228922 containerd[1582]: time="2025-09-10T06:54:31.227835839Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 10 06:54:31.228922 containerd[1582]: time="2025-09-10T06:54:31.227852507Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 10 06:54:31.228922 containerd[1582]: time="2025-09-10T06:54:31.227874163Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 10 06:54:31.228922 containerd[1582]: time="2025-09-10T06:54:31.228017168Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 10 06:54:31.228922 containerd[1582]: time="2025-09-10T06:54:31.228048979Z" level=info msg="Start snapshots syncer" Sep 10 06:54:31.232721 containerd[1582]: time="2025-09-10T06:54:31.230230593Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 10 06:54:31.232721 containerd[1582]: time="2025-09-10T06:54:31.230617696Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 10 06:54:31.233041 containerd[1582]: time="2025-09-10T06:54:31.230686045Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 10 06:54:31.233041 containerd[1582]: time="2025-09-10T06:54:31.230920953Z" level=info msg="loading plugin" 
id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 10 06:54:31.233041 containerd[1582]: time="2025-09-10T06:54:31.231151750Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 10 06:54:31.233041 containerd[1582]: time="2025-09-10T06:54:31.231184930Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 10 06:54:31.233041 containerd[1582]: time="2025-09-10T06:54:31.231205328Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 10 06:54:31.233041 containerd[1582]: time="2025-09-10T06:54:31.231222266Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 10 06:54:31.233041 containerd[1582]: time="2025-09-10T06:54:31.231251250Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 10 06:54:31.233041 containerd[1582]: time="2025-09-10T06:54:31.231270927Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 10 06:54:31.233041 containerd[1582]: time="2025-09-10T06:54:31.231288446Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 10 06:54:31.233041 containerd[1582]: time="2025-09-10T06:54:31.231329243Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 10 06:54:31.233041 containerd[1582]: time="2025-09-10T06:54:31.231350789Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 10 06:54:31.233041 containerd[1582]: time="2025-09-10T06:54:31.231367908Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 10 06:54:31.233041 containerd[1582]: time="2025-09-10T06:54:31.231417266Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 10 06:54:31.233041 containerd[1582]: time="2025-09-10T06:54:31.231445035Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 10 06:54:31.234163 containerd[1582]: time="2025-09-10T06:54:31.231463110Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 10 06:54:31.234163 containerd[1582]: time="2025-09-10T06:54:31.231478657Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 10 06:54:31.234163 containerd[1582]: time="2025-09-10T06:54:31.231491981Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 10 06:54:31.234163 containerd[1582]: time="2025-09-10T06:54:31.231507468Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 10 06:54:31.234163 containerd[1582]: time="2025-09-10T06:54:31.231530511Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 10 06:54:31.234163 containerd[1582]: time="2025-09-10T06:54:31.231564298Z" level=info msg="runtime interface created" Sep 10 06:54:31.234163 containerd[1582]: time="2025-09-10T06:54:31.231576118Z" level=info msg="created NRI interface" Sep 10 06:54:31.234163 containerd[1582]: time="2025-09-10T06:54:31.231589720Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 10 06:54:31.234163 containerd[1582]: time="2025-09-10T06:54:31.231608271Z" level=info msg="Connect containerd service" Sep 10 06:54:31.234163 containerd[1582]: time="2025-09-10T06:54:31.231657184Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 10 06:54:31.241119 containerd[1582]: time="2025-09-10T06:54:31.240013469Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 06:54:31.245110 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Sep 10 06:54:31.284339 extend-filesystems[1605]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 10 06:54:31.284339 extend-filesystems[1605]: old_desc_blocks = 1, new_desc_blocks = 8 Sep 10 06:54:31.284339 extend-filesystems[1605]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Sep 10 06:54:31.283729 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 10 06:54:31.316492 extend-filesystems[1553]: Resized filesystem in /dev/vda9 Sep 10 06:54:31.329745 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 10 06:54:31.330234 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 10 06:54:31.330534 kernel: ACPI: button: Power Button [PWRF] Sep 10 06:54:31.285174 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 10 06:54:31.414881 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 10 06:54:31.455507 polkitd[1641]: Started polkitd version 126 Sep 10 06:54:31.472980 containerd[1582]: time="2025-09-10T06:54:31.472885635Z" level=info msg="Start subscribing containerd event" Sep 10 06:54:31.474733 containerd[1582]: time="2025-09-10T06:54:31.474640328Z" level=info msg="Start recovering state" Sep 10 06:54:31.475034 containerd[1582]: time="2025-09-10T06:54:31.475009170Z" level=info msg="Start event monitor" Sep 10 06:54:31.475500 containerd[1582]: time="2025-09-10T06:54:31.475472556Z" level=info msg="Start cni network conf syncer for default" Sep 10 06:54:31.477201 containerd[1582]: time="2025-09-10T06:54:31.476307738Z" level=info msg="Start streaming server" Sep 10 06:54:31.477201 containerd[1582]: time="2025-09-10T06:54:31.476479399Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 10 06:54:31.477201 containerd[1582]: time="2025-09-10T06:54:31.476559228Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 10 06:54:31.478137 containerd[1582]: time="2025-09-10T06:54:31.478109299Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 10 06:54:31.478523 containerd[1582]: time="2025-09-10T06:54:31.478242179Z" level=info msg="runtime interface starting up..." Sep 10 06:54:31.478523 containerd[1582]: time="2025-09-10T06:54:31.478267929Z" level=info msg="starting plugins..." Sep 10 06:54:31.478523 containerd[1582]: time="2025-09-10T06:54:31.478309475Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 10 06:54:31.478823 containerd[1582]: time="2025-09-10T06:54:31.478799926Z" level=info msg="containerd successfully booted in 0.388952s" Sep 10 06:54:31.479253 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 10 06:54:31.489301 polkitd[1641]: Loading rules from directory /etc/polkit-1/rules.d Sep 10 06:54:31.489790 polkitd[1641]: Loading rules from directory /run/polkit-1/rules.d Sep 10 06:54:31.489870 polkitd[1641]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 10 06:54:31.494663 polkitd[1641]: Loading rules from directory /usr/local/share/polkit-1/rules.d Sep 10 06:54:31.494737 polkitd[1641]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Sep 10 06:54:31.494814 polkitd[1641]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 10 06:54:31.503690 polkitd[1641]: Finished loading, compiling and executing 2 rules Sep 10 06:54:31.504207 systemd[1]: Started polkit.service - Authorization Manager. Sep 10 06:54:31.509469 dbus-daemon[1550]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 10 06:54:31.511823 polkitd[1641]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 10 06:54:31.588691 systemd-hostnamed[1626]: Hostname set to (static) Sep 10 06:54:31.777346 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 06:54:31.808215 systemd-logind[1560]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 10 06:54:31.912996 systemd-logind[1560]: Watching system buttons on /dev/input/event3 (Power Button) Sep 10 06:54:31.940294 systemd-networkd[1510]: eth0: Gained IPv6LL Sep 10 06:54:31.943523 systemd-timesyncd[1484]: Network configuration changed, trying to establish connection. Sep 10 06:54:31.953351 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 10 06:54:31.959341 systemd[1]: Reached target network-online.target - Network is Online. Sep 10 06:54:31.966421 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 06:54:31.973318 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 10 06:54:32.148825 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 10 06:54:32.255608 tar[1569]: linux-amd64/README.md Sep 10 06:54:32.348472 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 10 06:54:32.506703 sshd_keygen[1594]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 10 06:54:32.540528 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 10 06:54:32.548063 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 06:54:32.553589 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 10 06:54:32.556821 systemd[1]: Started sshd@0-10.244.28.170:22-139.178.89.65:54296.service - OpenSSH per-connection server daemon (139.178.89.65:54296). Sep 10 06:54:32.587737 systemd[1]: issuegen.service: Deactivated successfully. Sep 10 06:54:32.588193 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 10 06:54:32.594870 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 10 06:54:32.633325 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 10 06:54:32.643068 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 10 06:54:32.648512 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 10 06:54:32.649671 systemd[1]: Reached target getty.target - Login Prompts. 
Sep 10 06:54:32.886145 systemd-timesyncd[1484]: Network configuration changed, trying to establish connection. Sep 10 06:54:32.888262 systemd-networkd[1510]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:72a:24:19ff:fef4:1caa/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:72a:24:19ff:fef4:1caa/64 assigned by NDisc. Sep 10 06:54:32.888273 systemd-networkd[1510]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Sep 10 06:54:33.162409 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Sep 10 06:54:33.162658 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Sep 10 06:54:33.282701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 06:54:33.305786 (kubelet)[1728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 06:54:33.491818 sshd[1710]: Accepted publickey for core from 139.178.89.65 port 54296 ssh2: RSA SHA256:kou9ixDn310EeDDEH1fKRbCILRbyVlsoqvVtj8L1aJA Sep 10 06:54:33.497796 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 06:54:33.513048 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 10 06:54:33.519408 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 10 06:54:33.543919 systemd-logind[1560]: New session 1 of user core. Sep 10 06:54:33.559331 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 10 06:54:33.576657 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 10 06:54:33.592116 (systemd)[1736]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 10 06:54:33.600312 systemd-logind[1560]: New session c1 of user core. Sep 10 06:54:33.818341 systemd[1736]: Queued start job for default target default.target. Sep 10 06:54:33.833974 systemd[1736]: Created slice app.slice - User Application Slice. Sep 10 06:54:33.834021 systemd[1736]: Reached target paths.target - Paths. Sep 10 06:54:33.834186 systemd[1736]: Reached target timers.target - Timers. Sep 10 06:54:33.837263 systemd[1736]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 10 06:54:33.863936 systemd[1736]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 10 06:54:33.865445 systemd[1736]: Reached target sockets.target - Sockets. Sep 10 06:54:33.865536 systemd[1736]: Reached target basic.target - Basic System. Sep 10 06:54:33.865609 systemd[1736]: Reached target default.target - Main User Target. Sep 10 06:54:33.865674 systemd[1736]: Startup finished in 244ms. Sep 10 06:54:33.865869 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 10 06:54:33.874449 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 10 06:54:34.057127 kubelet[1728]: E0910 06:54:34.057007 1728 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 06:54:34.060336 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 06:54:34.060713 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 10 06:54:34.061893 systemd[1]: kubelet.service: Consumed 1.117s CPU time, 267.5M memory peak. Sep 10 06:54:34.117088 systemd-timesyncd[1484]: Network configuration changed, trying to establish connection. Sep 10 06:54:34.519841 systemd[1]: Started sshd@1-10.244.28.170:22-139.178.89.65:54304.service - OpenSSH per-connection server daemon (139.178.89.65:54304). Sep 10 06:54:35.181115 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Sep 10 06:54:35.192189 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Sep 10 06:54:35.439787 sshd[1748]: Accepted publickey for core from 139.178.89.65 port 54304 ssh2: RSA SHA256:kou9ixDn310EeDDEH1fKRbCILRbyVlsoqvVtj8L1aJA Sep 10 06:54:35.441400 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 06:54:35.450273 systemd-logind[1560]: New session 2 of user core. Sep 10 06:54:35.463562 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 10 06:54:36.056725 sshd[1753]: Connection closed by 139.178.89.65 port 54304 Sep 10 06:54:36.057605 sshd-session[1748]: pam_unix(sshd:session): session closed for user core Sep 10 06:54:36.062831 systemd[1]: sshd@1-10.244.28.170:22-139.178.89.65:54304.service: Deactivated successfully. Sep 10 06:54:36.065213 systemd[1]: session-2.scope: Deactivated successfully. Sep 10 06:54:36.066624 systemd-logind[1560]: Session 2 logged out. Waiting for processes to exit. Sep 10 06:54:36.068970 systemd-logind[1560]: Removed session 2. Sep 10 06:54:36.214215 systemd[1]: Started sshd@2-10.244.28.170:22-139.178.89.65:54320.service - OpenSSH per-connection server daemon (139.178.89.65:54320). Sep 10 06:54:37.137633 sshd[1759]: Accepted publickey for core from 139.178.89.65 port 54320 ssh2: RSA SHA256:kou9ixDn310EeDDEH1fKRbCILRbyVlsoqvVtj8L1aJA Sep 10 06:54:37.139518 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 06:54:37.146753 systemd-logind[1560]: New session 3 of user core. Sep 10 06:54:37.162672 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 10 06:54:37.736032 login[1718]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 10 06:54:37.747803 login[1719]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 10 06:54:37.749617 systemd-logind[1560]: New session 4 of user core. Sep 10 06:54:37.757455 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 10 06:54:37.761579 sshd[1762]: Connection closed by 139.178.89.65 port 54320 Sep 10 06:54:37.765722 sshd-session[1759]: pam_unix(sshd:session): session closed for user core Sep 10 06:54:37.768093 systemd-logind[1560]: New session 5 of user core. Sep 10 06:54:37.776408 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 10 06:54:37.777167 systemd[1]: sshd@2-10.244.28.170:22-139.178.89.65:54320.service: Deactivated successfully. Sep 10 06:54:37.780706 systemd[1]: session-3.scope: Deactivated successfully. Sep 10 06:54:37.784754 systemd-logind[1560]: Session 3 logged out. Waiting for processes to exit. Sep 10 06:54:37.791254 systemd-logind[1560]: Removed session 3. 
Sep 10 06:54:39.202123 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Sep 10 06:54:39.218090 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Sep 10 06:54:39.225693 coreos-metadata[1549]: Sep 10 06:54:39.225 WARN failed to locate config-drive, using the metadata service API instead Sep 10 06:54:39.231178 coreos-metadata[1636]: Sep 10 06:54:39.231 WARN failed to locate config-drive, using the metadata service API instead Sep 10 06:54:39.264489 coreos-metadata[1636]: Sep 10 06:54:39.264 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Sep 10 06:54:39.264747 coreos-metadata[1549]: Sep 10 06:54:39.264 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Sep 10 06:54:39.271362 coreos-metadata[1549]: Sep 10 06:54:39.271 INFO Fetch failed with 404: resource not found Sep 10 06:54:39.271583 coreos-metadata[1549]: Sep 10 06:54:39.271 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Sep 10 06:54:39.271965 coreos-metadata[1549]: Sep 10 06:54:39.271 INFO Fetch successful Sep 10 06:54:39.272147 coreos-metadata[1549]: Sep 10 06:54:39.272 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Sep 10 06:54:39.284674 coreos-metadata[1549]: Sep 10 06:54:39.284 INFO Fetch successful Sep 10 06:54:39.285221 coreos-metadata[1549]: Sep 10 06:54:39.285 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Sep 10 06:54:39.295563 coreos-metadata[1636]: Sep 10 06:54:39.295 INFO Fetch successful Sep 10 06:54:39.295949 coreos-metadata[1636]: Sep 10 06:54:39.295 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 10 06:54:39.299394 coreos-metadata[1549]: Sep 10 06:54:39.299 INFO Fetch successful Sep 10 06:54:39.299394 coreos-metadata[1549]: Sep 10 06:54:39.299 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Sep 10 06:54:39.312789 coreos-metadata[1549]: Sep 10 06:54:39.312 INFO Fetch successful Sep 10 06:54:39.312789 coreos-metadata[1549]: Sep 10 06:54:39.312 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Sep 10 06:54:39.318036 coreos-metadata[1636]: Sep 10 06:54:39.317 INFO Fetch successful Sep 10 06:54:39.321288 unknown[1636]: wrote ssh authorized keys file for user: core Sep 10 06:54:39.329283 coreos-metadata[1549]: Sep 10 06:54:39.329 INFO Fetch successful Sep 10 06:54:39.367889 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 10 06:54:39.369347 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 10 06:54:39.370695 update-ssh-keys[1798]: Updated "/home/core/.ssh/authorized_keys" Sep 10 06:54:39.370657 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 10 06:54:39.374150 systemd[1]: Finished sshkeys.service. Sep 10 06:54:39.377578 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 10 06:54:39.384350 systemd[1]: Startup finished in 3.663s (kernel) + 17.970s (initrd) + 12.787s (userspace) = 34.421s. Sep 10 06:54:44.129798 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 10 06:54:44.132271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 06:54:44.345646 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
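The coreos-metadata entries above fall back from the missing config-drive to the OpenStack/EC2-style metadata service; the same endpoints can be queried by hand, with the URLs taken directly from the log:

  curl -s http://169.254.169.254/latest/meta-data/hostname
  curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key
  curl -s http://169.254.169.254/openstack/2012-08-10/meta_data.json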
Sep 10 06:54:44.355950 (kubelet)[1816]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 06:54:44.438306 kubelet[1816]: E0910 06:54:44.437892 1816 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 06:54:44.443573 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 06:54:44.443836 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 06:54:44.444825 systemd[1]: kubelet.service: Consumed 234ms CPU time, 109.3M memory peak. Sep 10 06:54:47.934462 systemd[1]: Started sshd@3-10.244.28.170:22-139.178.89.65:60910.service - OpenSSH per-connection server daemon (139.178.89.65:60910). Sep 10 06:54:48.842415 sshd[1824]: Accepted publickey for core from 139.178.89.65 port 60910 ssh2: RSA SHA256:kou9ixDn310EeDDEH1fKRbCILRbyVlsoqvVtj8L1aJA Sep 10 06:54:48.844401 sshd-session[1824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 06:54:48.851366 systemd-logind[1560]: New session 6 of user core. Sep 10 06:54:48.864394 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 10 06:54:49.470182 sshd[1827]: Connection closed by 139.178.89.65 port 60910 Sep 10 06:54:49.471331 sshd-session[1824]: pam_unix(sshd:session): session closed for user core Sep 10 06:54:49.477026 systemd[1]: sshd@3-10.244.28.170:22-139.178.89.65:60910.service: Deactivated successfully. Sep 10 06:54:49.479401 systemd[1]: session-6.scope: Deactivated successfully. Sep 10 06:54:49.480667 systemd-logind[1560]: Session 6 logged out. Waiting for processes to exit. Sep 10 06:54:49.482684 systemd-logind[1560]: Removed session 6. Sep 10 06:54:49.635213 systemd[1]: Started sshd@4-10.244.28.170:22-139.178.89.65:48848.service - OpenSSH per-connection server daemon (139.178.89.65:48848). Sep 10 06:54:50.548322 sshd[1833]: Accepted publickey for core from 139.178.89.65 port 48848 ssh2: RSA SHA256:kou9ixDn310EeDDEH1fKRbCILRbyVlsoqvVtj8L1aJA Sep 10 06:54:50.550252 sshd-session[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 06:54:50.557648 systemd-logind[1560]: New session 7 of user core. Sep 10 06:54:50.572810 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 10 06:54:51.162133 sshd[1836]: Connection closed by 139.178.89.65 port 48848 Sep 10 06:54:51.162637 sshd-session[1833]: pam_unix(sshd:session): session closed for user core Sep 10 06:54:51.168172 systemd[1]: sshd@4-10.244.28.170:22-139.178.89.65:48848.service: Deactivated successfully. Sep 10 06:54:51.171338 systemd[1]: session-7.scope: Deactivated successfully. Sep 10 06:54:51.175456 systemd-logind[1560]: Session 7 logged out. Waiting for processes to exit. Sep 10 06:54:51.177859 systemd-logind[1560]: Removed session 7. Sep 10 06:54:51.319816 systemd[1]: Started sshd@5-10.244.28.170:22-139.178.89.65:48860.service - OpenSSH per-connection server daemon (139.178.89.65:48860). 
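kubelet keeps exiting above because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written by kubeadm when the node is bootstrapped, after which the restart loop stops. A sketch, assuming this node is meant to join an existing cluster (all placeholders are illustrative):

  kubeadm join <control-plane-endpoint>:6443 \
      --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash>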
Sep 10 06:54:52.245690 sshd[1842]: Accepted publickey for core from 139.178.89.65 port 48860 ssh2: RSA SHA256:kou9ixDn310EeDDEH1fKRbCILRbyVlsoqvVtj8L1aJA Sep 10 06:54:52.246272 sshd-session[1842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 06:54:52.258228 systemd-logind[1560]: New session 8 of user core. Sep 10 06:54:52.264465 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 10 06:54:52.870003 sshd[1845]: Connection closed by 139.178.89.65 port 48860 Sep 10 06:54:52.870845 sshd-session[1842]: pam_unix(sshd:session): session closed for user core Sep 10 06:54:52.876346 systemd[1]: sshd@5-10.244.28.170:22-139.178.89.65:48860.service: Deactivated successfully. Sep 10 06:54:52.878868 systemd[1]: session-8.scope: Deactivated successfully. Sep 10 06:54:52.880274 systemd-logind[1560]: Session 8 logged out. Waiting for processes to exit. Sep 10 06:54:52.882849 systemd-logind[1560]: Removed session 8. Sep 10 06:54:53.033657 systemd[1]: Started sshd@6-10.244.28.170:22-139.178.89.65:48876.service - OpenSSH per-connection server daemon (139.178.89.65:48876). Sep 10 06:54:53.954282 sshd[1851]: Accepted publickey for core from 139.178.89.65 port 48876 ssh2: RSA SHA256:kou9ixDn310EeDDEH1fKRbCILRbyVlsoqvVtj8L1aJA Sep 10 06:54:53.956006 sshd-session[1851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 06:54:53.964603 systemd-logind[1560]: New session 9 of user core. Sep 10 06:54:53.971427 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 10 06:54:54.445912 sudo[1855]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 10 06:54:54.446485 sudo[1855]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 06:54:54.448179 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 10 06:54:54.453561 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 06:54:54.466880 sudo[1855]: pam_unix(sudo:session): session closed for user root Sep 10 06:54:54.610126 sshd[1854]: Connection closed by 139.178.89.65 port 48876 Sep 10 06:54:54.611466 sshd-session[1851]: pam_unix(sshd:session): session closed for user core Sep 10 06:54:54.619899 systemd[1]: sshd@6-10.244.28.170:22-139.178.89.65:48876.service: Deactivated successfully. Sep 10 06:54:54.623951 systemd[1]: session-9.scope: Deactivated successfully. Sep 10 06:54:54.626529 systemd-logind[1560]: Session 9 logged out. Waiting for processes to exit. Sep 10 06:54:54.628372 systemd-logind[1560]: Removed session 9. Sep 10 06:54:54.675287 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 06:54:54.692130 (kubelet)[1868]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 06:54:54.771356 systemd[1]: Started sshd@7-10.244.28.170:22-139.178.89.65:48890.service - OpenSSH per-connection server daemon (139.178.89.65:48890). 
Sep 10 06:54:54.781633 kubelet[1868]: E0910 06:54:54.781574 1868 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 06:54:54.785735 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 06:54:54.785971 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 06:54:54.786435 systemd[1]: kubelet.service: Consumed 247ms CPU time, 107.8M memory peak. Sep 10 06:54:55.678690 sshd[1876]: Accepted publickey for core from 139.178.89.65 port 48890 ssh2: RSA SHA256:kou9ixDn310EeDDEH1fKRbCILRbyVlsoqvVtj8L1aJA Sep 10 06:54:55.680597 sshd-session[1876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 06:54:55.690368 systemd-logind[1560]: New session 10 of user core. Sep 10 06:54:55.700736 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 10 06:54:56.157004 sudo[1882]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 10 06:54:56.157488 sudo[1882]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 06:54:56.165388 sudo[1882]: pam_unix(sudo:session): session closed for user root Sep 10 06:54:56.174213 sudo[1881]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 10 06:54:56.174653 sudo[1881]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 06:54:56.190705 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 10 06:54:56.250741 augenrules[1904]: No rules Sep 10 06:54:56.252128 systemd[1]: audit-rules.service: Deactivated successfully. Sep 10 06:54:56.252453 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 10 06:54:56.254447 sudo[1881]: pam_unix(sudo:session): session closed for user root Sep 10 06:54:56.397437 sshd[1880]: Connection closed by 139.178.89.65 port 48890 Sep 10 06:54:56.397904 sshd-session[1876]: pam_unix(sshd:session): session closed for user core Sep 10 06:54:56.404041 systemd[1]: sshd@7-10.244.28.170:22-139.178.89.65:48890.service: Deactivated successfully. Sep 10 06:54:56.407742 systemd[1]: session-10.scope: Deactivated successfully. Sep 10 06:54:56.410061 systemd-logind[1560]: Session 10 logged out. Waiting for processes to exit. Sep 10 06:54:56.412689 systemd-logind[1560]: Removed session 10. Sep 10 06:54:56.552892 systemd[1]: Started sshd@8-10.244.28.170:22-139.178.89.65:48902.service - OpenSSH per-connection server daemon (139.178.89.65:48902). Sep 10 06:54:57.460534 sshd[1913]: Accepted publickey for core from 139.178.89.65 port 48902 ssh2: RSA SHA256:kou9ixDn310EeDDEH1fKRbCILRbyVlsoqvVtj8L1aJA Sep 10 06:54:57.462276 sshd-session[1913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 06:54:57.470242 systemd-logind[1560]: New session 11 of user core. Sep 10 06:54:57.476358 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 10 06:54:57.938510 sudo[1917]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 10 06:54:57.939548 sudo[1917]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 06:54:58.488331 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Sep 10 06:54:58.500734 (dockerd)[1935]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 10 06:54:58.893892 dockerd[1935]: time="2025-09-10T06:54:58.893735897Z" level=info msg="Starting up" Sep 10 06:54:58.898291 dockerd[1935]: time="2025-09-10T06:54:58.898228263Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 10 06:54:58.917247 dockerd[1935]: time="2025-09-10T06:54:58.917150589Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 10 06:54:58.972232 dockerd[1935]: time="2025-09-10T06:54:58.972148004Z" level=info msg="Loading containers: start." Sep 10 06:54:58.988115 kernel: Initializing XFRM netlink socket Sep 10 06:54:59.307886 systemd-timesyncd[1484]: Network configuration changed, trying to establish connection. Sep 10 06:54:59.386486 systemd-networkd[1510]: docker0: Link UP Sep 10 06:54:59.410128 dockerd[1935]: time="2025-09-10T06:54:59.409726057Z" level=info msg="Loading containers: done." Sep 10 06:54:59.434945 dockerd[1935]: time="2025-09-10T06:54:59.434842877Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 10 06:54:59.435236 dockerd[1935]: time="2025-09-10T06:54:59.434973006Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 10 06:54:59.435236 dockerd[1935]: time="2025-09-10T06:54:59.435176840Z" level=info msg="Initializing buildkit" Sep 10 06:54:59.471614 dockerd[1935]: time="2025-09-10T06:54:59.471483106Z" level=info msg="Completed buildkit initialization" Sep 10 06:54:59.485164 dockerd[1935]: time="2025-09-10T06:54:59.483900838Z" level=info msg="Daemon has completed initialization" Sep 10 06:54:59.485164 dockerd[1935]: time="2025-09-10T06:54:59.484064540Z" level=info msg="API listen on /run/docker.sock" Sep 10 06:54:59.485833 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 10 06:55:00.334278 systemd-resolved[1465]: Clock change detected. Flushing caches. Sep 10 06:55:00.334733 systemd-timesyncd[1484]: Contacted time server [2a01:7e00::f03c:91ff:fe89:410f]:123 (2.flatcar.pool.ntp.org). Sep 10 06:55:00.334841 systemd-timesyncd[1484]: Initial clock synchronization to Wed 2025-09-10 06:55:00.333592 UTC. Sep 10 06:55:01.387217 containerd[1582]: time="2025-09-10T06:55:01.387040801Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 10 06:55:02.250599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1566197056.mount: Deactivated successfully. Sep 10 06:55:03.548061 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
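
Once dockerd logs "API listen on /run/docker.sock", the daemon answers the plain HTTP engine API over that Unix socket; GET /_ping is its liveness endpoint and returns "OK". A minimal stdlib-only Go client sketch, assuming the default socket path shown in the log:

    package main

    import (
    	"context"
    	"fmt"
    	"io"
    	"net"
    	"net/http"
    	"time"
    )

    func main() {
    	// Dial the Unix socket the daemon reports in the log instead of TCP.
    	client := &http.Client{
    		Transport: &http.Transport{
    			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
    				return (&net.Dialer{}).DialContext(ctx, "unix", "/run/docker.sock")
    			},
    		},
    		Timeout: 5 * time.Second,
    	}
    	resp, err := client.Get("http://docker/_ping") // host part is ignored for unix sockets
    	if err != nil {
    		fmt.Println("daemon not reachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.Status, string(body)) // expect: 200 OK OK
    }
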
Sep 10 06:55:04.629675 containerd[1582]: time="2025-09-10T06:55:04.629594495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:55:04.632128 containerd[1582]: time="2025-09-10T06:55:04.632084452Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837924" Sep 10 06:55:04.633399 containerd[1582]: time="2025-09-10T06:55:04.633336365Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:55:04.636998 containerd[1582]: time="2025-09-10T06:55:04.636933502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:55:04.639273 containerd[1582]: time="2025-09-10T06:55:04.638364961Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 3.251189318s" Sep 10 06:55:04.639273 containerd[1582]: time="2025-09-10T06:55:04.638423086Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Sep 10 06:55:04.640221 containerd[1582]: time="2025-09-10T06:55:04.639977063Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 10 06:55:05.509236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 10 06:55:05.515512 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 06:55:05.760813 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 06:55:05.776035 (kubelet)[2219]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 06:55:05.925090 kubelet[2219]: E0910 06:55:05.924998 2219 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 06:55:05.928462 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 06:55:05.928870 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 06:55:05.929620 systemd[1]: kubelet.service: Consumed 267ms CPU time, 108.6M memory peak. 
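
For scale, the entry above reports 28834515 bytes for kube-apiserver:v1.32.9 pulled in 3.251189318 s, which works out to roughly 8.9 MB/s (about 8.5 MiB/s) from registry.k8s.io. A trivial Go check of that arithmetic, using the two numbers from the log:

    package main

    import "fmt"

    func main() {
    	const bytes = 28834515      // size reported for kube-apiserver:v1.32.9
    	const seconds = 3.251189318 // pull duration from the same log entry
    	bps := bytes / seconds
    	fmt.Printf("%.1f MB/s (%.1f MiB/s)\n", bps/1e6, bps/(1024*1024))
    }
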
Sep 10 06:55:07.079237 containerd[1582]: time="2025-09-10T06:55:07.079128760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:55:07.081792 containerd[1582]: time="2025-09-10T06:55:07.081740729Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787035" Sep 10 06:55:07.083111 containerd[1582]: time="2025-09-10T06:55:07.083048123Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:55:07.087785 containerd[1582]: time="2025-09-10T06:55:07.087702863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:55:07.090269 containerd[1582]: time="2025-09-10T06:55:07.089036933Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 2.449017801s" Sep 10 06:55:07.090269 containerd[1582]: time="2025-09-10T06:55:07.089084597Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Sep 10 06:55:07.091006 containerd[1582]: time="2025-09-10T06:55:07.090706736Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 10 06:55:08.943357 containerd[1582]: time="2025-09-10T06:55:08.943281054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:55:08.945947 containerd[1582]: time="2025-09-10T06:55:08.945875208Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176297" Sep 10 06:55:08.946822 containerd[1582]: time="2025-09-10T06:55:08.946764683Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:55:08.952224 containerd[1582]: time="2025-09-10T06:55:08.951269290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:55:08.953004 containerd[1582]: time="2025-09-10T06:55:08.952963711Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.862217779s" Sep 10 06:55:08.953127 containerd[1582]: time="2025-09-10T06:55:08.953101798Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Sep 10 06:55:08.954207 
containerd[1582]: time="2025-09-10T06:55:08.954159796Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 10 06:55:11.116077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3318957436.mount: Deactivated successfully. Sep 10 06:55:12.176753 containerd[1582]: time="2025-09-10T06:55:12.176649186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:55:12.178317 containerd[1582]: time="2025-09-10T06:55:12.177991062Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924214" Sep 10 06:55:12.179265 containerd[1582]: time="2025-09-10T06:55:12.179225095Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:55:12.181916 containerd[1582]: time="2025-09-10T06:55:12.181874581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:55:12.182924 containerd[1582]: time="2025-09-10T06:55:12.182885036Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 3.228657237s" Sep 10 06:55:12.183060 containerd[1582]: time="2025-09-10T06:55:12.183032494Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 10 06:55:12.183711 containerd[1582]: time="2025-09-10T06:55:12.183672871Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 10 06:55:12.818126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3356234541.mount: Deactivated successfully. 
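
The transient mount units above encode the mount path in their unit names using systemd's escaping rules: "/" becomes "-", and characters outside a small allowed set (including "-") become \xXX escapes, which is why /var/lib/containerd/tmpmounts/containerd-mount3356234541 shows up as var-lib-containerd-tmpmounts-containerd\x2dmount3356234541.mount. A simplified Go sketch of that escaping (an approximation of systemd-escape --path, not the full rule set):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // escapePath approximates `systemd-escape --path`: strip surrounding
    // slashes, turn the remaining '/' into '-', keep ASCII alphanumerics plus
    // '_' and '.', and hex-escape everything else (notably '-').
    func escapePath(p string) string {
    	p = strings.Trim(p, "/")
    	var b strings.Builder
    	for i := 0; i < len(p); i++ {
    		c := p[i]
    		switch {
    		case c == '/':
    			b.WriteByte('-')
    		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z', c >= '0' && c <= '9', c == '_', c == '.':
    			b.WriteByte(c)
    		default:
    			fmt.Fprintf(&b, `\x%02x`, c)
    		}
    	}
    	return b.String()
    }

    func main() {
    	fmt.Println(escapePath("/var/lib/containerd/tmpmounts/containerd-mount3356234541") + ".mount")
    	// var-lib-containerd-tmpmounts-containerd\x2dmount3356234541.mount
    }
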
Sep 10 06:55:14.612595 containerd[1582]: time="2025-09-10T06:55:14.611701259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:55:14.615069 containerd[1582]: time="2025-09-10T06:55:14.614694001Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Sep 10 06:55:14.616182 containerd[1582]: time="2025-09-10T06:55:14.616128280Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:55:14.621098 containerd[1582]: time="2025-09-10T06:55:14.621017758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:55:14.622623 containerd[1582]: time="2025-09-10T06:55:14.622579232Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.438862998s" Sep 10 06:55:14.622717 containerd[1582]: time="2025-09-10T06:55:14.622627998Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 10 06:55:14.623595 containerd[1582]: time="2025-09-10T06:55:14.623388880Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 10 06:55:15.292584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2919088545.mount: Deactivated successfully. 
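
The "repo digest" in these pull records is a content address: the SHA-256 of the image manifest bytes as served by the registry, so the same digest always names the same content regardless of tag. A tiny Go illustration of computing such a digest over arbitrary bytes (the payload below is a stand-in, the real manifest is not in this log):

    package main

    import (
    	"crypto/sha256"
    	"fmt"
    )

    func main() {
    	// Stand-in payload; a real digest is taken over the raw manifest JSON
    	// the registry returns, e.g. for registry.k8s.io/coredns/coredns:v1.11.3.
    	manifest := []byte(`{"schemaVersion":2,"mediaType":"application/vnd.oci.image.manifest.v1+json"}`)
    	sum := sha256.Sum256(manifest)
    	fmt.Printf("sha256:%x\n", sum)
    }
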
Sep 10 06:55:15.319325 containerd[1582]: time="2025-09-10T06:55:15.319246360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 06:55:15.322148 containerd[1582]: time="2025-09-10T06:55:15.322111269Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Sep 10 06:55:15.326834 containerd[1582]: time="2025-09-10T06:55:15.325533186Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 06:55:15.330224 containerd[1582]: time="2025-09-10T06:55:15.330077370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 06:55:15.331582 containerd[1582]: time="2025-09-10T06:55:15.331006781Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 707.574586ms" Sep 10 06:55:15.331582 containerd[1582]: time="2025-09-10T06:55:15.331051931Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 10 06:55:15.331699 containerd[1582]: time="2025-09-10T06:55:15.331678901Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 10 06:55:16.008951 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 10 06:55:16.013767 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 06:55:16.075285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2870375172.mount: Deactivated successfully. Sep 10 06:55:16.337905 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 06:55:16.356325 (kubelet)[2312]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 06:55:16.464836 kubelet[2312]: E0910 06:55:16.464736 2312 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 06:55:16.470223 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 06:55:16.470516 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 06:55:16.472551 systemd[1]: kubelet.service: Consumed 262ms CPU time, 110.2M memory peak. Sep 10 06:55:16.603545 update_engine[1561]: I20250910 06:55:16.603013 1561 update_attempter.cc:509] Updating boot flags... Sep 10 06:55:19.441654 systemd[1]: Started sshd@9-10.244.28.170:22-196.251.118.184:60914.service - OpenSSH per-connection server daemon (196.251.118.184:60914). 
Sep 10 06:55:19.516530 sshd[2385]: Connection closed by 196.251.118.184 port 60914 Sep 10 06:55:19.517772 systemd[1]: sshd@9-10.244.28.170:22-196.251.118.184:60914.service: Deactivated successfully. Sep 10 06:55:20.503479 containerd[1582]: time="2025-09-10T06:55:20.503329717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:55:20.505268 containerd[1582]: time="2025-09-10T06:55:20.505212530Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064" Sep 10 06:55:20.507812 containerd[1582]: time="2025-09-10T06:55:20.507746512Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:55:20.530258 containerd[1582]: time="2025-09-10T06:55:20.530114756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:55:20.531937 containerd[1582]: time="2025-09-10T06:55:20.531759145Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.200046157s" Sep 10 06:55:20.531937 containerd[1582]: time="2025-09-10T06:55:20.531805018Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 10 06:55:26.138923 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 06:55:26.139804 systemd[1]: kubelet.service: Consumed 262ms CPU time, 110.2M memory peak. Sep 10 06:55:26.142947 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 06:55:26.187806 systemd[1]: Reload requested from client PID 2418 ('systemctl') (unit session-11.scope)... Sep 10 06:55:26.187855 systemd[1]: Reloading... Sep 10 06:55:26.373298 zram_generator::config[2459]: No configuration found. Sep 10 06:55:26.735587 systemd[1]: Reloading finished in 546 ms. Sep 10 06:55:26.827433 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 06:55:26.832539 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 06:55:26.832970 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 06:55:26.833063 systemd[1]: kubelet.service: Consumed 157ms CPU time, 97.8M memory peak. Sep 10 06:55:26.839916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 06:55:27.021528 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 06:55:27.033737 (kubelet)[2532]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 06:55:27.149298 kubelet[2532]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 06:55:27.151208 kubelet[2532]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Sep 10 06:55:27.151208 kubelet[2532]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 06:55:27.151208 kubelet[2532]: I0910 06:55:27.149993 2532 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 06:55:27.995670 kubelet[2532]: I0910 06:55:27.995613 2532 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 10 06:55:27.995670 kubelet[2532]: I0910 06:55:27.995662 2532 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 06:55:27.996109 kubelet[2532]: I0910 06:55:27.996084 2532 server.go:954] "Client rotation is on, will bootstrap in background" Sep 10 06:55:28.038393 kubelet[2532]: I0910 06:55:28.037408 2532 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 06:55:28.038393 kubelet[2532]: E0910 06:55:28.038293 2532 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.28.170:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.28.170:6443: connect: connection refused" logger="UnhandledError" Sep 10 06:55:28.054388 kubelet[2532]: I0910 06:55:28.054348 2532 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 10 06:55:28.064003 kubelet[2532]: I0910 06:55:28.063973 2532 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 10 06:55:28.069751 kubelet[2532]: I0910 06:55:28.069685 2532 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 06:55:28.070167 kubelet[2532]: I0910 06:55:28.069879 2532 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-fpwqg.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 10 06:55:28.072304 kubelet[2532]: I0910 06:55:28.072277 2532 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 06:55:28.072465 kubelet[2532]: I0910 06:55:28.072446 2532 container_manager_linux.go:304] "Creating device plugin manager" Sep 10 06:55:28.074000 kubelet[2532]: I0910 06:55:28.073827 2532 state_mem.go:36] "Initialized new in-memory state store" Sep 10 06:55:28.079931 kubelet[2532]: I0910 06:55:28.079408 2532 kubelet.go:446] "Attempting to sync node with API server" Sep 10 06:55:28.079931 kubelet[2532]: I0910 06:55:28.079471 2532 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 06:55:28.079931 kubelet[2532]: I0910 06:55:28.079524 2532 kubelet.go:352] "Adding apiserver pod source" Sep 10 06:55:28.079931 kubelet[2532]: I0910 06:55:28.079560 2532 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 06:55:28.082493 kubelet[2532]: W0910 06:55:28.082432 2532 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.28.170:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-fpwqg.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.28.170:6443: connect: connection refused Sep 10 06:55:28.082713 kubelet[2532]: E0910 06:55:28.082680 2532 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.28.170:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-fpwqg.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.28.170:6443: connect: connection refused" logger="UnhandledError" Sep 10 
06:55:28.084671 kubelet[2532]: W0910 06:55:28.084600 2532 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.28.170:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.28.170:6443: connect: connection refused Sep 10 06:55:28.084749 kubelet[2532]: E0910 06:55:28.084691 2532 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.28.170:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.28.170:6443: connect: connection refused" logger="UnhandledError" Sep 10 06:55:28.086142 kubelet[2532]: I0910 06:55:28.085284 2532 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 10 06:55:28.089208 kubelet[2532]: I0910 06:55:28.088673 2532 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 10 06:55:28.089461 kubelet[2532]: W0910 06:55:28.089434 2532 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 10 06:55:28.092105 kubelet[2532]: I0910 06:55:28.092074 2532 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 10 06:55:28.092239 kubelet[2532]: I0910 06:55:28.092137 2532 server.go:1287] "Started kubelet" Sep 10 06:55:28.104385 kubelet[2532]: I0910 06:55:28.103946 2532 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 06:55:28.108144 kubelet[2532]: E0910 06:55:28.104877 2532 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.28.170:6443/api/v1/namespaces/default/events\": dial tcp 10.244.28.170:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-fpwqg.gb1.brightbox.com.1863d969d02e1c84 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-fpwqg.gb1.brightbox.com,UID:srv-fpwqg.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-fpwqg.gb1.brightbox.com,},FirstTimestamp:2025-09-10 06:55:28.092101764 +0000 UTC m=+1.053343850,LastTimestamp:2025-09-10 06:55:28.092101764 +0000 UTC m=+1.053343850,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-fpwqg.gb1.brightbox.com,}" Sep 10 06:55:28.112923 kubelet[2532]: I0910 06:55:28.112835 2532 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 06:55:28.115771 kubelet[2532]: I0910 06:55:28.115053 2532 server.go:479] "Adding debug handlers to kubelet server" Sep 10 06:55:28.116646 kubelet[2532]: I0910 06:55:28.116574 2532 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 06:55:28.118421 kubelet[2532]: I0910 06:55:28.118395 2532 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 06:55:28.119531 kubelet[2532]: I0910 06:55:28.117317 2532 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 10 06:55:28.119657 kubelet[2532]: E0910 06:55:28.117629 2532 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-fpwqg.gb1.brightbox.com\" not found" Sep 10 06:55:28.120002 
kubelet[2532]: I0910 06:55:28.119976 2532 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 06:55:28.123975 kubelet[2532]: I0910 06:55:28.117295 2532 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 10 06:55:28.124567 kubelet[2532]: I0910 06:55:28.124545 2532 reconciler.go:26] "Reconciler: start to sync state" Sep 10 06:55:28.127034 kubelet[2532]: I0910 06:55:28.127007 2532 factory.go:221] Registration of the systemd container factory successfully Sep 10 06:55:28.127272 kubelet[2532]: I0910 06:55:28.127244 2532 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 06:55:28.127897 kubelet[2532]: E0910 06:55:28.127861 2532 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.28.170:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-fpwqg.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.28.170:6443: connect: connection refused" interval="200ms" Sep 10 06:55:28.129765 kubelet[2532]: I0910 06:55:28.129741 2532 factory.go:221] Registration of the containerd container factory successfully Sep 10 06:55:28.138180 kubelet[2532]: W0910 06:55:28.138114 2532 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.28.170:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.28.170:6443: connect: connection refused Sep 10 06:55:28.138416 kubelet[2532]: E0910 06:55:28.138374 2532 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.28.170:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.28.170:6443: connect: connection refused" logger="UnhandledError" Sep 10 06:55:28.138725 kubelet[2532]: E0910 06:55:28.138701 2532 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 06:55:28.142017 kubelet[2532]: I0910 06:55:28.141854 2532 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 10 06:55:28.143629 kubelet[2532]: I0910 06:55:28.143605 2532 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 10 06:55:28.143783 kubelet[2532]: I0910 06:55:28.143762 2532 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 10 06:55:28.143911 kubelet[2532]: I0910 06:55:28.143890 2532 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
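
Every reflector list/watch, the event post, and the lease controller above fail with "dial tcp 10.244.28.170:6443: connect: connection refused" for the same reason: this kubelet is bootstrapping a control-plane node, and the kube-apiserver it is dialing is one of the static pods it has not started yet. A quick Go probe of the same condition, using the address from the log:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Same endpoint the kubelet is dialing; until the kube-apiserver static
    	// pod is running, this fails with "connect: connection refused".
    	conn, err := net.DialTimeout("tcp", "10.244.28.170:6443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver not ready:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is accepting connections")
    }
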
Sep 10 06:55:28.144005 kubelet[2532]: I0910 06:55:28.143990 2532 kubelet.go:2382] "Starting kubelet main sync loop" Sep 10 06:55:28.144179 kubelet[2532]: E0910 06:55:28.144147 2532 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 06:55:28.156780 kubelet[2532]: W0910 06:55:28.156134 2532 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.28.170:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.28.170:6443: connect: connection refused Sep 10 06:55:28.156780 kubelet[2532]: E0910 06:55:28.156705 2532 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.28.170:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.28.170:6443: connect: connection refused" logger="UnhandledError" Sep 10 06:55:28.176333 kubelet[2532]: I0910 06:55:28.176290 2532 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 10 06:55:28.176333 kubelet[2532]: I0910 06:55:28.176319 2532 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 10 06:55:28.176615 kubelet[2532]: I0910 06:55:28.176351 2532 state_mem.go:36] "Initialized new in-memory state store" Sep 10 06:55:28.178434 kubelet[2532]: I0910 06:55:28.178400 2532 policy_none.go:49] "None policy: Start" Sep 10 06:55:28.178509 kubelet[2532]: I0910 06:55:28.178439 2532 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 10 06:55:28.178509 kubelet[2532]: I0910 06:55:28.178466 2532 state_mem.go:35] "Initializing new in-memory state store" Sep 10 06:55:28.194551 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 10 06:55:28.210230 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 10 06:55:28.215317 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 10 06:55:28.219865 kubelet[2532]: E0910 06:55:28.219824 2532 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-fpwqg.gb1.brightbox.com\" not found" Sep 10 06:55:28.227222 kubelet[2532]: I0910 06:55:28.226869 2532 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 06:55:28.227349 kubelet[2532]: I0910 06:55:28.227328 2532 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 06:55:28.227556 kubelet[2532]: I0910 06:55:28.227472 2532 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 06:55:28.227989 kubelet[2532]: I0910 06:55:28.227968 2532 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 06:55:28.230058 kubelet[2532]: E0910 06:55:28.229869 2532 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 10 06:55:28.230058 kubelet[2532]: E0910 06:55:28.230015 2532 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-fpwqg.gb1.brightbox.com\" not found" Sep 10 06:55:28.262151 systemd[1]: Created slice kubepods-burstable-pod29ce65e5e178b698fa1c35bbdccdead3.slice - libcontainer container kubepods-burstable-pod29ce65e5e178b698fa1c35bbdccdead3.slice. Sep 10 06:55:28.274547 kubelet[2532]: E0910 06:55:28.274447 2532 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-fpwqg.gb1.brightbox.com\" not found" node="srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:28.280623 systemd[1]: Created slice kubepods-burstable-pod1bd20e13b33de583ef97dc69b669a1aa.slice - libcontainer container kubepods-burstable-pod1bd20e13b33de583ef97dc69b669a1aa.slice. Sep 10 06:55:28.285158 kubelet[2532]: E0910 06:55:28.283990 2532 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-fpwqg.gb1.brightbox.com\" not found" node="srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:28.287139 systemd[1]: Created slice kubepods-burstable-pod5bb75687e78a4f58fd15aa1f738efa43.slice - libcontainer container kubepods-burstable-pod5bb75687e78a4f58fd15aa1f738efa43.slice. Sep 10 06:55:28.290362 kubelet[2532]: E0910 06:55:28.290327 2532 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-fpwqg.gb1.brightbox.com\" not found" node="srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:28.328237 kubelet[2532]: I0910 06:55:28.327570 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/29ce65e5e178b698fa1c35bbdccdead3-kubeconfig\") pod \"kube-scheduler-srv-fpwqg.gb1.brightbox.com\" (UID: \"29ce65e5e178b698fa1c35bbdccdead3\") " pod="kube-system/kube-scheduler-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:28.328237 kubelet[2532]: I0910 06:55:28.327635 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bb75687e78a4f58fd15aa1f738efa43-ca-certs\") pod \"kube-controller-manager-srv-fpwqg.gb1.brightbox.com\" (UID: \"5bb75687e78a4f58fd15aa1f738efa43\") " pod="kube-system/kube-controller-manager-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:28.328237 kubelet[2532]: I0910 06:55:28.327672 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bb75687e78a4f58fd15aa1f738efa43-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-fpwqg.gb1.brightbox.com\" (UID: \"5bb75687e78a4f58fd15aa1f738efa43\") " pod="kube-system/kube-controller-manager-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:28.328237 kubelet[2532]: I0910 06:55:28.327702 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bb75687e78a4f58fd15aa1f738efa43-kubeconfig\") pod \"kube-controller-manager-srv-fpwqg.gb1.brightbox.com\" (UID: \"5bb75687e78a4f58fd15aa1f738efa43\") " pod="kube-system/kube-controller-manager-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:28.328237 kubelet[2532]: I0910 06:55:28.327730 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/1bd20e13b33de583ef97dc69b669a1aa-ca-certs\") pod \"kube-apiserver-srv-fpwqg.gb1.brightbox.com\" (UID: \"1bd20e13b33de583ef97dc69b669a1aa\") " pod="kube-system/kube-apiserver-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:28.328612 kubelet[2532]: I0910 06:55:28.327755 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1bd20e13b33de583ef97dc69b669a1aa-k8s-certs\") pod \"kube-apiserver-srv-fpwqg.gb1.brightbox.com\" (UID: \"1bd20e13b33de583ef97dc69b669a1aa\") " pod="kube-system/kube-apiserver-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:28.328612 kubelet[2532]: I0910 06:55:28.327780 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1bd20e13b33de583ef97dc69b669a1aa-usr-share-ca-certificates\") pod \"kube-apiserver-srv-fpwqg.gb1.brightbox.com\" (UID: \"1bd20e13b33de583ef97dc69b669a1aa\") " pod="kube-system/kube-apiserver-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:28.328612 kubelet[2532]: I0910 06:55:28.327806 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bb75687e78a4f58fd15aa1f738efa43-flexvolume-dir\") pod \"kube-controller-manager-srv-fpwqg.gb1.brightbox.com\" (UID: \"5bb75687e78a4f58fd15aa1f738efa43\") " pod="kube-system/kube-controller-manager-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:28.328612 kubelet[2532]: I0910 06:55:28.327832 2532 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bb75687e78a4f58fd15aa1f738efa43-k8s-certs\") pod \"kube-controller-manager-srv-fpwqg.gb1.brightbox.com\" (UID: \"5bb75687e78a4f58fd15aa1f738efa43\") " pod="kube-system/kube-controller-manager-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:28.329123 kubelet[2532]: E0910 06:55:28.329082 2532 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.28.170:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-fpwqg.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.28.170:6443: connect: connection refused" interval="400ms" Sep 10 06:55:28.330146 kubelet[2532]: I0910 06:55:28.330109 2532 kubelet_node_status.go:75] "Attempting to register node" node="srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:28.330699 kubelet[2532]: E0910 06:55:28.330665 2532 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.28.170:6443/api/v1/nodes\": dial tcp 10.244.28.170:6443: connect: connection refused" node="srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:28.533929 kubelet[2532]: I0910 06:55:28.533344 2532 kubelet_node_status.go:75] "Attempting to register node" node="srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:28.534565 kubelet[2532]: E0910 06:55:28.534461 2532 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.28.170:6443/api/v1/nodes\": dial tcp 10.244.28.170:6443: connect: connection refused" node="srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:28.577275 containerd[1582]: time="2025-09-10T06:55:28.577163894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-fpwqg.gb1.brightbox.com,Uid:29ce65e5e178b698fa1c35bbdccdead3,Namespace:kube-system,Attempt:0,}" Sep 10 06:55:28.596128 containerd[1582]: time="2025-09-10T06:55:28.595850938Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-fpwqg.gb1.brightbox.com,Uid:5bb75687e78a4f58fd15aa1f738efa43,Namespace:kube-system,Attempt:0,}" Sep 10 06:55:28.596128 containerd[1582]: time="2025-09-10T06:55:28.596051250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-fpwqg.gb1.brightbox.com,Uid:1bd20e13b33de583ef97dc69b669a1aa,Namespace:kube-system,Attempt:0,}" Sep 10 06:55:28.732365 kubelet[2532]: E0910 06:55:28.731508 2532 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.28.170:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-fpwqg.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.28.170:6443: connect: connection refused" interval="800ms" Sep 10 06:55:28.766916 containerd[1582]: time="2025-09-10T06:55:28.766855853Z" level=info msg="connecting to shim 0a706b2f7f765ebd12bb034fcc778b3dcf252958f22ae8dfa10a4d647be98c7f" address="unix:///run/containerd/s/faf5242c19348d2984d3954f2091227425ebe1bb07ecf710974ad5c4bf893326" namespace=k8s.io protocol=ttrpc version=3 Sep 10 06:55:28.768094 containerd[1582]: time="2025-09-10T06:55:28.768059841Z" level=info msg="connecting to shim 06e250771df6f28c7fcdd25db6e6a7958d49f75c819c244d7fa8b061f57a2f09" address="unix:///run/containerd/s/6761b23457ab2657d57f00f09df3932950a752fdc9bc59310668b918e7082c21" namespace=k8s.io protocol=ttrpc version=3 Sep 10 06:55:28.774447 containerd[1582]: time="2025-09-10T06:55:28.774389843Z" level=info msg="connecting to shim aad0a204556367b8411c7e5c797ecf9f0d2203ae42b50296f90cc120ea7fdfe4" address="unix:///run/containerd/s/cc5d58aa3cafe83e563f443ddc3e0d6a275a2453988b1b5c325f8a04e4cd8349" namespace=k8s.io protocol=ttrpc version=3 Sep 10 06:55:28.914450 systemd[1]: Started cri-containerd-06e250771df6f28c7fcdd25db6e6a7958d49f75c819c244d7fa8b061f57a2f09.scope - libcontainer container 06e250771df6f28c7fcdd25db6e6a7958d49f75c819c244d7fa8b061f57a2f09. Sep 10 06:55:28.916303 systemd[1]: Started cri-containerd-0a706b2f7f765ebd12bb034fcc778b3dcf252958f22ae8dfa10a4d647be98c7f.scope - libcontainer container 0a706b2f7f765ebd12bb034fcc778b3dcf252958f22ae8dfa10a4d647be98c7f. Sep 10 06:55:28.917984 systemd[1]: Started cri-containerd-aad0a204556367b8411c7e5c797ecf9f0d2203ae42b50296f90cc120ea7fdfe4.scope - libcontainer container aad0a204556367b8411c7e5c797ecf9f0d2203ae42b50296f90cc120ea7fdfe4. 
Sep 10 06:55:28.939907 kubelet[2532]: I0910 06:55:28.939874 2532 kubelet_node_status.go:75] "Attempting to register node" node="srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:28.943104 kubelet[2532]: E0910 06:55:28.943027 2532 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.28.170:6443/api/v1/nodes\": dial tcp 10.244.28.170:6443: connect: connection refused" node="srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:29.047697 kubelet[2532]: W0910 06:55:29.047451 2532 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.28.170:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.28.170:6443: connect: connection refused Sep 10 06:55:29.047697 kubelet[2532]: E0910 06:55:29.047619 2532 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.28.170:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.28.170:6443: connect: connection refused" logger="UnhandledError" Sep 10 06:55:29.058244 containerd[1582]: time="2025-09-10T06:55:29.058167467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-fpwqg.gb1.brightbox.com,Uid:5bb75687e78a4f58fd15aa1f738efa43,Namespace:kube-system,Attempt:0,} returns sandbox id \"aad0a204556367b8411c7e5c797ecf9f0d2203ae42b50296f90cc120ea7fdfe4\"" Sep 10 06:55:29.063245 containerd[1582]: time="2025-09-10T06:55:29.063156485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-fpwqg.gb1.brightbox.com,Uid:1bd20e13b33de583ef97dc69b669a1aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a706b2f7f765ebd12bb034fcc778b3dcf252958f22ae8dfa10a4d647be98c7f\"" Sep 10 06:55:29.063785 containerd[1582]: time="2025-09-10T06:55:29.063264652Z" level=info msg="CreateContainer within sandbox \"aad0a204556367b8411c7e5c797ecf9f0d2203ae42b50296f90cc120ea7fdfe4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 10 06:55:29.072867 containerd[1582]: time="2025-09-10T06:55:29.072750832Z" level=info msg="CreateContainer within sandbox \"0a706b2f7f765ebd12bb034fcc778b3dcf252958f22ae8dfa10a4d647be98c7f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 10 06:55:29.103134 containerd[1582]: time="2025-09-10T06:55:29.103081487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-fpwqg.gb1.brightbox.com,Uid:29ce65e5e178b698fa1c35bbdccdead3,Namespace:kube-system,Attempt:0,} returns sandbox id \"06e250771df6f28c7fcdd25db6e6a7958d49f75c819c244d7fa8b061f57a2f09\"" Sep 10 06:55:29.107993 containerd[1582]: time="2025-09-10T06:55:29.107946343Z" level=info msg="Container 0f14b1f4dec62eeb71afaf294bf5f998b2d011b14a65c15c47e7abed69da2b02: CDI devices from CRI Config.CDIDevices: []" Sep 10 06:55:29.109223 containerd[1582]: time="2025-09-10T06:55:29.109140523Z" level=info msg="CreateContainer within sandbox \"06e250771df6f28c7fcdd25db6e6a7958d49f75c819c244d7fa8b061f57a2f09\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 10 06:55:29.112099 containerd[1582]: time="2025-09-10T06:55:29.112068290Z" level=info msg="Container 5e5a62fc482fc3539a2e1404ff1b024ecb218424015eacfbc378ba3af767253e: CDI devices from CRI Config.CDIDevices: []" Sep 10 06:55:29.122366 containerd[1582]: time="2025-09-10T06:55:29.122288936Z" level=info msg="Container 
a6348a6874b08dd3e1f28fcf0cc698f55df532944c77e00ee1cf2541c4a22635: CDI devices from CRI Config.CDIDevices: []" Sep 10 06:55:29.128944 containerd[1582]: time="2025-09-10T06:55:29.128897406Z" level=info msg="CreateContainer within sandbox \"aad0a204556367b8411c7e5c797ecf9f0d2203ae42b50296f90cc120ea7fdfe4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5e5a62fc482fc3539a2e1404ff1b024ecb218424015eacfbc378ba3af767253e\"" Sep 10 06:55:29.131658 containerd[1582]: time="2025-09-10T06:55:29.131600898Z" level=info msg="StartContainer for \"5e5a62fc482fc3539a2e1404ff1b024ecb218424015eacfbc378ba3af767253e\"" Sep 10 06:55:29.132329 containerd[1582]: time="2025-09-10T06:55:29.132298802Z" level=info msg="CreateContainer within sandbox \"06e250771df6f28c7fcdd25db6e6a7958d49f75c819c244d7fa8b061f57a2f09\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a6348a6874b08dd3e1f28fcf0cc698f55df532944c77e00ee1cf2541c4a22635\"" Sep 10 06:55:29.133977 containerd[1582]: time="2025-09-10T06:55:29.133922386Z" level=info msg="StartContainer for \"a6348a6874b08dd3e1f28fcf0cc698f55df532944c77e00ee1cf2541c4a22635\"" Sep 10 06:55:29.135395 containerd[1582]: time="2025-09-10T06:55:29.135293646Z" level=info msg="connecting to shim 5e5a62fc482fc3539a2e1404ff1b024ecb218424015eacfbc378ba3af767253e" address="unix:///run/containerd/s/cc5d58aa3cafe83e563f443ddc3e0d6a275a2453988b1b5c325f8a04e4cd8349" protocol=ttrpc version=3 Sep 10 06:55:29.136228 containerd[1582]: time="2025-09-10T06:55:29.136162430Z" level=info msg="connecting to shim a6348a6874b08dd3e1f28fcf0cc698f55df532944c77e00ee1cf2541c4a22635" address="unix:///run/containerd/s/6761b23457ab2657d57f00f09df3932950a752fdc9bc59310668b918e7082c21" protocol=ttrpc version=3 Sep 10 06:55:29.137759 kubelet[2532]: W0910 06:55:29.137606 2532 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.28.170:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.28.170:6443: connect: connection refused Sep 10 06:55:29.137976 kubelet[2532]: E0910 06:55:29.137879 2532 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.28.170:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.28.170:6443: connect: connection refused" logger="UnhandledError" Sep 10 06:55:29.138590 containerd[1582]: time="2025-09-10T06:55:29.138557410Z" level=info msg="CreateContainer within sandbox \"0a706b2f7f765ebd12bb034fcc778b3dcf252958f22ae8dfa10a4d647be98c7f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0f14b1f4dec62eeb71afaf294bf5f998b2d011b14a65c15c47e7abed69da2b02\"" Sep 10 06:55:29.139460 containerd[1582]: time="2025-09-10T06:55:29.139427784Z" level=info msg="StartContainer for \"0f14b1f4dec62eeb71afaf294bf5f998b2d011b14a65c15c47e7abed69da2b02\"" Sep 10 06:55:29.140840 containerd[1582]: time="2025-09-10T06:55:29.140755170Z" level=info msg="connecting to shim 0f14b1f4dec62eeb71afaf294bf5f998b2d011b14a65c15c47e7abed69da2b02" address="unix:///run/containerd/s/faf5242c19348d2984d3954f2091227425ebe1bb07ecf710974ad5c4bf893326" protocol=ttrpc version=3 Sep 10 06:55:29.174500 systemd[1]: Started cri-containerd-0f14b1f4dec62eeb71afaf294bf5f998b2d011b14a65c15c47e7abed69da2b02.scope - libcontainer container 
0f14b1f4dec62eeb71afaf294bf5f998b2d011b14a65c15c47e7abed69da2b02. Sep 10 06:55:29.197743 systemd[1]: Started cri-containerd-5e5a62fc482fc3539a2e1404ff1b024ecb218424015eacfbc378ba3af767253e.scope - libcontainer container 5e5a62fc482fc3539a2e1404ff1b024ecb218424015eacfbc378ba3af767253e. Sep 10 06:55:29.214465 systemd[1]: Started cri-containerd-a6348a6874b08dd3e1f28fcf0cc698f55df532944c77e00ee1cf2541c4a22635.scope - libcontainer container a6348a6874b08dd3e1f28fcf0cc698f55df532944c77e00ee1cf2541c4a22635. Sep 10 06:55:29.264169 kubelet[2532]: W0910 06:55:29.264075 2532 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.28.170:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.28.170:6443: connect: connection refused Sep 10 06:55:29.265622 kubelet[2532]: E0910 06:55:29.265539 2532 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.28.170:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.28.170:6443: connect: connection refused" logger="UnhandledError" Sep 10 06:55:29.298629 kubelet[2532]: W0910 06:55:29.297838 2532 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.28.170:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-fpwqg.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.28.170:6443: connect: connection refused Sep 10 06:55:29.298629 kubelet[2532]: E0910 06:55:29.297923 2532 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.28.170:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-fpwqg.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.28.170:6443: connect: connection refused" logger="UnhandledError" Sep 10 06:55:29.354034 containerd[1582]: time="2025-09-10T06:55:29.353982170Z" level=info msg="StartContainer for \"a6348a6874b08dd3e1f28fcf0cc698f55df532944c77e00ee1cf2541c4a22635\" returns successfully" Sep 10 06:55:29.355359 containerd[1582]: time="2025-09-10T06:55:29.355303070Z" level=info msg="StartContainer for \"0f14b1f4dec62eeb71afaf294bf5f998b2d011b14a65c15c47e7abed69da2b02\" returns successfully" Sep 10 06:55:29.375788 containerd[1582]: time="2025-09-10T06:55:29.375737652Z" level=info msg="StartContainer for \"5e5a62fc482fc3539a2e1404ff1b024ecb218424015eacfbc378ba3af767253e\" returns successfully" Sep 10 06:55:29.532341 kubelet[2532]: E0910 06:55:29.532268 2532 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.28.170:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-fpwqg.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.28.170:6443: connect: connection refused" interval="1.6s" Sep 10 06:55:29.748633 kubelet[2532]: I0910 06:55:29.748267 2532 kubelet_node_status.go:75] "Attempting to register node" node="srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:29.750231 kubelet[2532]: E0910 06:55:29.749568 2532 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.28.170:6443/api/v1/nodes\": dial tcp 10.244.28.170:6443: connect: connection refused" node="srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:30.191870 kubelet[2532]: E0910 06:55:30.191544 2532 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"srv-fpwqg.gb1.brightbox.com\" not found" node="srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:30.197682 kubelet[2532]: E0910 06:55:30.197653 2532 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-fpwqg.gb1.brightbox.com\" not found" node="srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:30.201387 kubelet[2532]: E0910 06:55:30.201362 2532 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-fpwqg.gb1.brightbox.com\" not found" node="srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:31.206568 kubelet[2532]: E0910 06:55:31.205907 2532 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-fpwqg.gb1.brightbox.com\" not found" node="srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:31.206568 kubelet[2532]: E0910 06:55:31.206401 2532 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-fpwqg.gb1.brightbox.com\" not found" node="srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:31.208041 kubelet[2532]: E0910 06:55:31.207828 2532 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-fpwqg.gb1.brightbox.com\" not found" node="srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:31.354276 kubelet[2532]: I0910 06:55:31.354235 2532 kubelet_node_status.go:75] "Attempting to register node" node="srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:31.841308 kubelet[2532]: E0910 06:55:31.841174 2532 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-fpwqg.gb1.brightbox.com\" not found" node="srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:31.870211 kubelet[2532]: I0910 06:55:31.869671 2532 kubelet_node_status.go:78] "Successfully registered node" node="srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:31.870603 kubelet[2532]: E0910 06:55:31.870387 2532 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-fpwqg.gb1.brightbox.com\": node \"srv-fpwqg.gb1.brightbox.com\" not found" Sep 10 06:55:31.919251 kubelet[2532]: I0910 06:55:31.918744 2532 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:31.993955 kubelet[2532]: E0910 06:55:31.993873 2532 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-fpwqg.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:31.993955 kubelet[2532]: I0910 06:55:31.993927 2532 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:31.996162 kubelet[2532]: E0910 06:55:31.995948 2532 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-fpwqg.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:31.996162 kubelet[2532]: I0910 06:55:31.995978 2532 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:31.998666 kubelet[2532]: E0910 06:55:31.998629 2532 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-fpwqg.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-controller-manager-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:32.086001 kubelet[2532]: I0910 06:55:32.085941 2532 apiserver.go:52] "Watching apiserver" Sep 10 06:55:32.120481 kubelet[2532]: I0910 06:55:32.120317 2532 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 10 06:55:32.204803 kubelet[2532]: I0910 06:55:32.204311 2532 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:32.204803 kubelet[2532]: I0910 06:55:32.204571 2532 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:32.205500 kubelet[2532]: I0910 06:55:32.205471 2532 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:32.208014 kubelet[2532]: E0910 06:55:32.207971 2532 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-fpwqg.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:32.210023 kubelet[2532]: E0910 06:55:32.209981 2532 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-fpwqg.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:32.211106 kubelet[2532]: E0910 06:55:32.210827 2532 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-fpwqg.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:33.207547 kubelet[2532]: I0910 06:55:33.207305 2532 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:33.235243 kubelet[2532]: W0910 06:55:33.235072 2532 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 10 06:55:34.557935 systemd[1]: Reload requested from client PID 2804 ('systemctl') (unit session-11.scope)... Sep 10 06:55:34.557968 systemd[1]: Reloading... Sep 10 06:55:34.704258 zram_generator::config[2855]: No configuration found. Sep 10 06:55:35.059761 systemd[1]: Reloading finished in 501 ms. Sep 10 06:55:35.112409 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 06:55:35.124971 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 06:55:35.125400 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 06:55:35.125493 systemd[1]: kubelet.service: Consumed 1.599s CPU time, 128.4M memory peak. Sep 10 06:55:35.129688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 06:55:35.440981 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 06:55:35.455913 (kubelet)[2913]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 06:55:35.545141 kubelet[2913]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 10 06:55:35.546883 kubelet[2913]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 10 06:55:35.546883 kubelet[2913]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 06:55:35.546883 kubelet[2913]: I0910 06:55:35.546084 2913 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 06:55:35.562108 kubelet[2913]: I0910 06:55:35.562055 2913 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 10 06:55:35.562443 kubelet[2913]: I0910 06:55:35.562424 2913 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 06:55:35.562924 kubelet[2913]: I0910 06:55:35.562903 2913 server.go:954] "Client rotation is on, will bootstrap in background" Sep 10 06:55:35.566344 kubelet[2913]: I0910 06:55:35.566318 2913 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 10 06:55:35.569647 kubelet[2913]: I0910 06:55:35.569622 2913 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 06:55:35.579021 kubelet[2913]: I0910 06:55:35.578979 2913 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 10 06:55:35.592428 kubelet[2913]: I0910 06:55:35.592362 2913 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 10 06:55:35.594961 kubelet[2913]: I0910 06:55:35.594275 2913 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 06:55:35.594961 kubelet[2913]: I0910 06:55:35.594340 2913 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"srv-fpwqg.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 10 06:55:35.594961 kubelet[2913]: I0910 06:55:35.594740 2913 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 06:55:35.594961 kubelet[2913]: I0910 06:55:35.594763 2913 container_manager_linux.go:304] "Creating device plugin manager" Sep 10 06:55:35.595381 kubelet[2913]: I0910 06:55:35.594898 2913 state_mem.go:36] "Initialized new in-memory state store" Sep 10 06:55:35.597945 kubelet[2913]: I0910 06:55:35.597919 2913 kubelet.go:446] "Attempting to sync node with API server" Sep 10 06:55:35.598101 kubelet[2913]: I0910 06:55:35.598083 2913 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 06:55:35.598273 kubelet[2913]: I0910 06:55:35.598255 2913 kubelet.go:352] "Adding apiserver pod source" Sep 10 06:55:35.598375 kubelet[2913]: I0910 06:55:35.598356 2913 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 06:55:35.624098 kubelet[2913]: I0910 06:55:35.622618 2913 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 10 06:55:35.624098 kubelet[2913]: I0910 06:55:35.623276 2913 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 10 06:55:35.624098 kubelet[2913]: I0910 06:55:35.623968 2913 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 10 06:55:35.624098 kubelet[2913]: I0910 06:55:35.624022 2913 server.go:1287] "Started kubelet" Sep 10 06:55:35.648121 kubelet[2913]: I0910 06:55:35.648079 2913 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 06:55:35.665178 kubelet[2913]: I0910 06:55:35.649087 2913 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 06:55:35.668127 kubelet[2913]: I0910 06:55:35.668081 2913 server.go:479] "Adding debug handlers to kubelet server" Sep 10 06:55:35.670407 kubelet[2913]: E0910 06:55:35.670351 2913 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 06:55:35.674695 kubelet[2913]: I0910 06:55:35.649686 2913 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 06:55:35.675317 kubelet[2913]: I0910 06:55:35.675183 2913 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 10 06:55:35.675812 kubelet[2913]: I0910 06:55:35.649174 2913 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 06:55:35.677633 kubelet[2913]: I0910 06:55:35.677587 2913 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 10 06:55:35.677813 kubelet[2913]: I0910 06:55:35.677634 2913 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 06:55:35.678329 kubelet[2913]: I0910 06:55:35.675478 2913 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 06:55:35.679220 kubelet[2913]: I0910 06:55:35.677997 2913 reconciler.go:26] "Reconciler: start to sync state" Sep 10 06:55:35.693987 kubelet[2913]: I0910 06:55:35.692079 2913 factory.go:221] Registration of the containerd container factory successfully Sep 10 06:55:35.693987 kubelet[2913]: I0910 06:55:35.692116 2913 factory.go:221] Registration of the systemd container factory successfully Sep 10 06:55:35.724551 kubelet[2913]: I0910 06:55:35.724427 2913 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 10 06:55:35.730107 kubelet[2913]: I0910 06:55:35.729648 2913 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 10 06:55:35.730107 kubelet[2913]: I0910 06:55:35.729688 2913 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 10 06:55:35.730107 kubelet[2913]: I0910 06:55:35.729719 2913 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 10 06:55:35.730107 kubelet[2913]: I0910 06:55:35.729730 2913 kubelet.go:2382] "Starting kubelet main sync loop" Sep 10 06:55:35.730107 kubelet[2913]: E0910 06:55:35.729813 2913 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 06:55:35.830042 kubelet[2913]: E0910 06:55:35.829990 2913 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 06:55:35.834587 kubelet[2913]: I0910 06:55:35.834421 2913 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 10 06:55:35.836054 kubelet[2913]: I0910 06:55:35.835034 2913 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 10 06:55:35.836054 kubelet[2913]: I0910 06:55:35.835070 2913 state_mem.go:36] "Initialized new in-memory state store" Sep 10 06:55:35.836054 kubelet[2913]: I0910 06:55:35.835394 2913 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 10 06:55:35.836054 kubelet[2913]: I0910 06:55:35.835414 2913 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 10 06:55:35.836054 kubelet[2913]: I0910 06:55:35.835454 2913 policy_none.go:49] "None policy: Start" Sep 10 06:55:35.836054 kubelet[2913]: I0910 06:55:35.835480 2913 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 10 06:55:35.836054 kubelet[2913]: I0910 06:55:35.835504 2913 state_mem.go:35] "Initializing new in-memory state store" Sep 10 06:55:35.836054 kubelet[2913]: I0910 06:55:35.835683 2913 state_mem.go:75] "Updated machine memory state" Sep 10 06:55:35.848495 kubelet[2913]: I0910 06:55:35.848428 2913 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 06:55:35.849916 kubelet[2913]: I0910 06:55:35.849897 2913 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 06:55:35.850711 kubelet[2913]: I0910 06:55:35.850645 2913 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 06:55:35.856614 kubelet[2913]: I0910 06:55:35.856476 2913 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 06:55:35.870482 kubelet[2913]: E0910 06:55:35.870423 2913 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 10 06:55:35.999416 kubelet[2913]: I0910 06:55:35.998096 2913 kubelet_node_status.go:75] "Attempting to register node" node="srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:36.026943 kubelet[2913]: I0910 06:55:36.026847 2913 kubelet_node_status.go:124] "Node was previously registered" node="srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:36.028070 kubelet[2913]: I0910 06:55:36.027319 2913 kubelet_node_status.go:78] "Successfully registered node" node="srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:36.034521 kubelet[2913]: I0910 06:55:36.033012 2913 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:36.041733 kubelet[2913]: I0910 06:55:36.039296 2913 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:36.041733 kubelet[2913]: I0910 06:55:36.039788 2913 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:36.061553 kubelet[2913]: W0910 06:55:36.061495 2913 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 10 06:55:36.063582 kubelet[2913]: W0910 06:55:36.063440 2913 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 10 06:55:36.063821 kubelet[2913]: E0910 06:55:36.063787 2913 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-fpwqg.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:36.071153 kubelet[2913]: W0910 06:55:36.071108 2913 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 10 06:55:36.083263 kubelet[2913]: I0910 06:55:36.082513 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/29ce65e5e178b698fa1c35bbdccdead3-kubeconfig\") pod \"kube-scheduler-srv-fpwqg.gb1.brightbox.com\" (UID: \"29ce65e5e178b698fa1c35bbdccdead3\") " pod="kube-system/kube-scheduler-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:36.083263 kubelet[2913]: I0910 06:55:36.082566 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1bd20e13b33de583ef97dc69b669a1aa-usr-share-ca-certificates\") pod \"kube-apiserver-srv-fpwqg.gb1.brightbox.com\" (UID: \"1bd20e13b33de583ef97dc69b669a1aa\") " pod="kube-system/kube-apiserver-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:36.083263 kubelet[2913]: I0910 06:55:36.082602 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bb75687e78a4f58fd15aa1f738efa43-ca-certs\") pod \"kube-controller-manager-srv-fpwqg.gb1.brightbox.com\" (UID: \"5bb75687e78a4f58fd15aa1f738efa43\") " pod="kube-system/kube-controller-manager-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:36.083263 kubelet[2913]: I0910 06:55:36.082632 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/5bb75687e78a4f58fd15aa1f738efa43-k8s-certs\") pod \"kube-controller-manager-srv-fpwqg.gb1.brightbox.com\" (UID: \"5bb75687e78a4f58fd15aa1f738efa43\") " pod="kube-system/kube-controller-manager-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:36.083263 kubelet[2913]: I0910 06:55:36.082664 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bb75687e78a4f58fd15aa1f738efa43-kubeconfig\") pod \"kube-controller-manager-srv-fpwqg.gb1.brightbox.com\" (UID: \"5bb75687e78a4f58fd15aa1f738efa43\") " pod="kube-system/kube-controller-manager-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:36.083946 kubelet[2913]: I0910 06:55:36.082694 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bb75687e78a4f58fd15aa1f738efa43-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-fpwqg.gb1.brightbox.com\" (UID: \"5bb75687e78a4f58fd15aa1f738efa43\") " pod="kube-system/kube-controller-manager-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:36.083946 kubelet[2913]: I0910 06:55:36.082722 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1bd20e13b33de583ef97dc69b669a1aa-ca-certs\") pod \"kube-apiserver-srv-fpwqg.gb1.brightbox.com\" (UID: \"1bd20e13b33de583ef97dc69b669a1aa\") " pod="kube-system/kube-apiserver-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:36.083946 kubelet[2913]: I0910 06:55:36.082757 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1bd20e13b33de583ef97dc69b669a1aa-k8s-certs\") pod \"kube-apiserver-srv-fpwqg.gb1.brightbox.com\" (UID: \"1bd20e13b33de583ef97dc69b669a1aa\") " pod="kube-system/kube-apiserver-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:36.083946 kubelet[2913]: I0910 06:55:36.082784 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5bb75687e78a4f58fd15aa1f738efa43-flexvolume-dir\") pod \"kube-controller-manager-srv-fpwqg.gb1.brightbox.com\" (UID: \"5bb75687e78a4f58fd15aa1f738efa43\") " pod="kube-system/kube-controller-manager-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:36.609795 kubelet[2913]: I0910 06:55:36.607288 2913 apiserver.go:52] "Watching apiserver" Sep 10 06:55:36.681550 kubelet[2913]: I0910 06:55:36.679536 2913 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 10 06:55:36.681550 kubelet[2913]: I0910 06:55:36.679914 2913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-fpwqg.gb1.brightbox.com" podStartSLOduration=3.679850208 podStartE2EDuration="3.679850208s" podCreationTimestamp="2025-09-10 06:55:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 06:55:36.679827856 +0000 UTC m=+1.213881270" watchObservedRunningTime="2025-09-10 06:55:36.679850208 +0000 UTC m=+1.213903606" Sep 10 06:55:36.713929 kubelet[2913]: I0910 06:55:36.713707 2913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-fpwqg.gb1.brightbox.com" podStartSLOduration=0.713679748 podStartE2EDuration="713.679748ms" 
podCreationTimestamp="2025-09-10 06:55:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 06:55:36.691330629 +0000 UTC m=+1.225384048" watchObservedRunningTime="2025-09-10 06:55:36.713679748 +0000 UTC m=+1.247733168" Sep 10 06:55:36.730177 kubelet[2913]: I0910 06:55:36.730084 2913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-fpwqg.gb1.brightbox.com" podStartSLOduration=0.730059105 podStartE2EDuration="730.059105ms" podCreationTimestamp="2025-09-10 06:55:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 06:55:36.713157389 +0000 UTC m=+1.247210793" watchObservedRunningTime="2025-09-10 06:55:36.730059105 +0000 UTC m=+1.264112512" Sep 10 06:55:36.785599 kubelet[2913]: I0910 06:55:36.785202 2913 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:36.786281 kubelet[2913]: I0910 06:55:36.786220 2913 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:36.797715 kubelet[2913]: W0910 06:55:36.797639 2913 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 10 06:55:36.797715 kubelet[2913]: W0910 06:55:36.797717 2913 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 10 06:55:36.797976 kubelet[2913]: E0910 06:55:36.797780 2913 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-fpwqg.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:36.798716 kubelet[2913]: E0910 06:55:36.798055 2913 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-fpwqg.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-fpwqg.gb1.brightbox.com" Sep 10 06:55:40.527219 kubelet[2913]: I0910 06:55:40.527126 2913 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 10 06:55:40.528636 containerd[1582]: time="2025-09-10T06:55:40.528500367Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 10 06:55:40.529903 kubelet[2913]: I0910 06:55:40.529863 2913 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 10 06:55:41.443411 systemd[1]: Created slice kubepods-besteffort-podab49d0d3_3dd1_4e7c_a19f_174787e0905d.slice - libcontainer container kubepods-besteffort-podab49d0d3_3dd1_4e7c_a19f_174787e0905d.slice. 
Sep 10 06:55:41.519699 kubelet[2913]: I0910 06:55:41.519593 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab49d0d3-3dd1-4e7c-a19f-174787e0905d-lib-modules\") pod \"kube-proxy-5l8tk\" (UID: \"ab49d0d3-3dd1-4e7c-a19f-174787e0905d\") " pod="kube-system/kube-proxy-5l8tk" Sep 10 06:55:41.519699 kubelet[2913]: I0910 06:55:41.519665 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ab49d0d3-3dd1-4e7c-a19f-174787e0905d-kube-proxy\") pod \"kube-proxy-5l8tk\" (UID: \"ab49d0d3-3dd1-4e7c-a19f-174787e0905d\") " pod="kube-system/kube-proxy-5l8tk" Sep 10 06:55:41.519699 kubelet[2913]: I0910 06:55:41.519718 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab49d0d3-3dd1-4e7c-a19f-174787e0905d-xtables-lock\") pod \"kube-proxy-5l8tk\" (UID: \"ab49d0d3-3dd1-4e7c-a19f-174787e0905d\") " pod="kube-system/kube-proxy-5l8tk" Sep 10 06:55:41.520044 kubelet[2913]: I0910 06:55:41.519752 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtwdj\" (UniqueName: \"kubernetes.io/projected/ab49d0d3-3dd1-4e7c-a19f-174787e0905d-kube-api-access-wtwdj\") pod \"kube-proxy-5l8tk\" (UID: \"ab49d0d3-3dd1-4e7c-a19f-174787e0905d\") " pod="kube-system/kube-proxy-5l8tk" Sep 10 06:55:41.621050 kubelet[2913]: I0910 06:55:41.620977 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/26dca6e5-342a-40c6-95ec-3caa158ed0f2-var-lib-calico\") pod \"tigera-operator-755d956888-cwplm\" (UID: \"26dca6e5-342a-40c6-95ec-3caa158ed0f2\") " pod="tigera-operator/tigera-operator-755d956888-cwplm" Sep 10 06:55:41.622657 kubelet[2913]: I0910 06:55:41.621232 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snfcr\" (UniqueName: \"kubernetes.io/projected/26dca6e5-342a-40c6-95ec-3caa158ed0f2-kube-api-access-snfcr\") pod \"tigera-operator-755d956888-cwplm\" (UID: \"26dca6e5-342a-40c6-95ec-3caa158ed0f2\") " pod="tigera-operator/tigera-operator-755d956888-cwplm" Sep 10 06:55:41.622443 systemd[1]: Created slice kubepods-besteffort-pod26dca6e5_342a_40c6_95ec_3caa158ed0f2.slice - libcontainer container kubepods-besteffort-pod26dca6e5_342a_40c6_95ec_3caa158ed0f2.slice. Sep 10 06:55:41.756928 containerd[1582]: time="2025-09-10T06:55:41.756173749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5l8tk,Uid:ab49d0d3-3dd1-4e7c-a19f-174787e0905d,Namespace:kube-system,Attempt:0,}" Sep 10 06:55:41.790912 containerd[1582]: time="2025-09-10T06:55:41.790822929Z" level=info msg="connecting to shim 8c6d07651c1124fc224e6abe9f7eedabe3494bf888b15bf12e906a3f787fe5df" address="unix:///run/containerd/s/dd45c2f2d75a7bd30ee1a25f8c924d29994c5521687acdd1b11ad7dd65b6b0c1" namespace=k8s.io protocol=ttrpc version=3 Sep 10 06:55:41.853695 systemd[1]: Started cri-containerd-8c6d07651c1124fc224e6abe9f7eedabe3494bf888b15bf12e906a3f787fe5df.scope - libcontainer container 8c6d07651c1124fc224e6abe9f7eedabe3494bf888b15bf12e906a3f787fe5df. 
Sep 10 06:55:41.911415 containerd[1582]: time="2025-09-10T06:55:41.911324650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5l8tk,Uid:ab49d0d3-3dd1-4e7c-a19f-174787e0905d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c6d07651c1124fc224e6abe9f7eedabe3494bf888b15bf12e906a3f787fe5df\"" Sep 10 06:55:41.917962 containerd[1582]: time="2025-09-10T06:55:41.917886443Z" level=info msg="CreateContainer within sandbox \"8c6d07651c1124fc224e6abe9f7eedabe3494bf888b15bf12e906a3f787fe5df\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 10 06:55:41.934076 containerd[1582]: time="2025-09-10T06:55:41.933789012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-cwplm,Uid:26dca6e5-342a-40c6-95ec-3caa158ed0f2,Namespace:tigera-operator,Attempt:0,}" Sep 10 06:55:41.940050 containerd[1582]: time="2025-09-10T06:55:41.938472040Z" level=info msg="Container a4de3f53da98fe108bd2978bf0a2b6607bf4277a46db13c074a973183923739b: CDI devices from CRI Config.CDIDevices: []" Sep 10 06:55:41.959488 containerd[1582]: time="2025-09-10T06:55:41.956087645Z" level=info msg="CreateContainer within sandbox \"8c6d07651c1124fc224e6abe9f7eedabe3494bf888b15bf12e906a3f787fe5df\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a4de3f53da98fe108bd2978bf0a2b6607bf4277a46db13c074a973183923739b\"" Sep 10 06:55:41.969623 containerd[1582]: time="2025-09-10T06:55:41.961344225Z" level=info msg="StartContainer for \"a4de3f53da98fe108bd2978bf0a2b6607bf4277a46db13c074a973183923739b\"" Sep 10 06:55:41.969623 containerd[1582]: time="2025-09-10T06:55:41.964504954Z" level=info msg="connecting to shim a4de3f53da98fe108bd2978bf0a2b6607bf4277a46db13c074a973183923739b" address="unix:///run/containerd/s/dd45c2f2d75a7bd30ee1a25f8c924d29994c5521687acdd1b11ad7dd65b6b0c1" protocol=ttrpc version=3 Sep 10 06:55:42.003091 systemd[1]: Started cri-containerd-a4de3f53da98fe108bd2978bf0a2b6607bf4277a46db13c074a973183923739b.scope - libcontainer container a4de3f53da98fe108bd2978bf0a2b6607bf4277a46db13c074a973183923739b. Sep 10 06:55:42.017805 containerd[1582]: time="2025-09-10T06:55:42.017735723Z" level=info msg="connecting to shim b609dbe0b531d16f185ddf3901476a61b9cb22d7c1a556ea898992882d6fd547" address="unix:///run/containerd/s/151d5596a557cff9518fcaa9e9e9a5c8a89768ef6e52329d0a3bf8076a1f3511" namespace=k8s.io protocol=ttrpc version=3 Sep 10 06:55:42.073618 systemd[1]: Started cri-containerd-b609dbe0b531d16f185ddf3901476a61b9cb22d7c1a556ea898992882d6fd547.scope - libcontainer container b609dbe0b531d16f185ddf3901476a61b9cb22d7c1a556ea898992882d6fd547. Sep 10 06:55:42.142552 containerd[1582]: time="2025-09-10T06:55:42.142468006Z" level=info msg="StartContainer for \"a4de3f53da98fe108bd2978bf0a2b6607bf4277a46db13c074a973183923739b\" returns successfully" Sep 10 06:55:42.187007 containerd[1582]: time="2025-09-10T06:55:42.186937208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-cwplm,Uid:26dca6e5-342a-40c6-95ec-3caa158ed0f2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b609dbe0b531d16f185ddf3901476a61b9cb22d7c1a556ea898992882d6fd547\"" Sep 10 06:55:42.190014 containerd[1582]: time="2025-09-10T06:55:42.189970916Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 10 06:55:42.653974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3829858515.mount: Deactivated successfully. 
Sep 10 06:55:42.840750 kubelet[2913]: I0910 06:55:42.840635 2913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5l8tk" podStartSLOduration=1.840603716 podStartE2EDuration="1.840603716s" podCreationTimestamp="2025-09-10 06:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 06:55:42.838419963 +0000 UTC m=+7.372473408" watchObservedRunningTime="2025-09-10 06:55:42.840603716 +0000 UTC m=+7.374657128" Sep 10 06:55:45.442590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount98636604.mount: Deactivated successfully. Sep 10 06:55:47.391312 containerd[1582]: time="2025-09-10T06:55:47.389813689Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:55:47.393216 containerd[1582]: time="2025-09-10T06:55:47.393139487Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 10 06:55:47.394889 containerd[1582]: time="2025-09-10T06:55:47.394810774Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:55:47.401369 containerd[1582]: time="2025-09-10T06:55:47.401241144Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:55:47.403439 containerd[1582]: time="2025-09-10T06:55:47.403253032Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 5.21308252s" Sep 10 06:55:47.405781 containerd[1582]: time="2025-09-10T06:55:47.403443232Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 10 06:55:47.415407 containerd[1582]: time="2025-09-10T06:55:47.411806796Z" level=info msg="CreateContainer within sandbox \"b609dbe0b531d16f185ddf3901476a61b9cb22d7c1a556ea898992882d6fd547\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 10 06:55:47.430866 containerd[1582]: time="2025-09-10T06:55:47.430798472Z" level=info msg="Container 2a7960a9f299d4981ae9296ca0f6ef748ac5bccde01e1e2088589f72e4599191: CDI devices from CRI Config.CDIDevices: []" Sep 10 06:55:47.435873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1870842476.mount: Deactivated successfully. 
Sep 10 06:55:47.451090 containerd[1582]: time="2025-09-10T06:55:47.451007767Z" level=info msg="CreateContainer within sandbox \"b609dbe0b531d16f185ddf3901476a61b9cb22d7c1a556ea898992882d6fd547\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2a7960a9f299d4981ae9296ca0f6ef748ac5bccde01e1e2088589f72e4599191\"" Sep 10 06:55:47.453386 containerd[1582]: time="2025-09-10T06:55:47.452403493Z" level=info msg="StartContainer for \"2a7960a9f299d4981ae9296ca0f6ef748ac5bccde01e1e2088589f72e4599191\"" Sep 10 06:55:47.454175 containerd[1582]: time="2025-09-10T06:55:47.453996093Z" level=info msg="connecting to shim 2a7960a9f299d4981ae9296ca0f6ef748ac5bccde01e1e2088589f72e4599191" address="unix:///run/containerd/s/151d5596a557cff9518fcaa9e9e9a5c8a89768ef6e52329d0a3bf8076a1f3511" protocol=ttrpc version=3 Sep 10 06:55:47.501625 systemd[1]: Started cri-containerd-2a7960a9f299d4981ae9296ca0f6ef748ac5bccde01e1e2088589f72e4599191.scope - libcontainer container 2a7960a9f299d4981ae9296ca0f6ef748ac5bccde01e1e2088589f72e4599191. Sep 10 06:55:47.568324 containerd[1582]: time="2025-09-10T06:55:47.568224738Z" level=info msg="StartContainer for \"2a7960a9f299d4981ae9296ca0f6ef748ac5bccde01e1e2088589f72e4599191\" returns successfully" Sep 10 06:55:47.855679 kubelet[2913]: I0910 06:55:47.855514 2913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-cwplm" podStartSLOduration=1.638222061 podStartE2EDuration="6.855378776s" podCreationTimestamp="2025-09-10 06:55:41 +0000 UTC" firstStartedPulling="2025-09-10 06:55:42.189358698 +0000 UTC m=+6.723412094" lastFinishedPulling="2025-09-10 06:55:47.406515412 +0000 UTC m=+11.940568809" observedRunningTime="2025-09-10 06:55:47.8549487 +0000 UTC m=+12.389002112" watchObservedRunningTime="2025-09-10 06:55:47.855378776 +0000 UTC m=+12.389432200" Sep 10 06:55:55.063584 sudo[1917]: pam_unix(sudo:session): session closed for user root Sep 10 06:55:55.209375 sshd[1916]: Connection closed by 139.178.89.65 port 48902 Sep 10 06:55:55.211172 sshd-session[1913]: pam_unix(sshd:session): session closed for user core Sep 10 06:55:55.223053 systemd[1]: sshd@8-10.244.28.170:22-139.178.89.65:48902.service: Deactivated successfully. Sep 10 06:55:55.228940 systemd[1]: session-11.scope: Deactivated successfully. Sep 10 06:55:55.230394 systemd[1]: session-11.scope: Consumed 8.236s CPU time, 153.2M memory peak. Sep 10 06:55:55.234174 systemd-logind[1560]: Session 11 logged out. Waiting for processes to exit. Sep 10 06:55:55.239512 systemd-logind[1560]: Removed session 11. 
Sep 10 06:56:01.232011 kubelet[2913]: W0910 06:56:01.230293 2913 reflector.go:569] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-fpwqg.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'srv-fpwqg.gb1.brightbox.com' and this object Sep 10 06:56:01.232011 kubelet[2913]: E0910 06:56:01.230399 2913 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:srv-fpwqg.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'srv-fpwqg.gb1.brightbox.com' and this object" logger="UnhandledError" Sep 10 06:56:01.232011 kubelet[2913]: I0910 06:56:01.230872 2913 status_manager.go:890] "Failed to get status for pod" podUID="48af392b-8906-4b19-9bfa-0def6091e421" pod="calico-system/calico-typha-688dc84897-kns7s" err="pods \"calico-typha-688dc84897-kns7s\" is forbidden: User \"system:node:srv-fpwqg.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'srv-fpwqg.gb1.brightbox.com' and this object" Sep 10 06:56:01.231578 systemd[1]: Created slice kubepods-besteffort-pod48af392b_8906_4b19_9bfa_0def6091e421.slice - libcontainer container kubepods-besteffort-pod48af392b_8906_4b19_9bfa_0def6091e421.slice. Sep 10 06:56:01.290382 kubelet[2913]: I0910 06:56:01.290258 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mf88\" (UniqueName: \"kubernetes.io/projected/48af392b-8906-4b19-9bfa-0def6091e421-kube-api-access-7mf88\") pod \"calico-typha-688dc84897-kns7s\" (UID: \"48af392b-8906-4b19-9bfa-0def6091e421\") " pod="calico-system/calico-typha-688dc84897-kns7s" Sep 10 06:56:01.290783 kubelet[2913]: I0910 06:56:01.290633 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/48af392b-8906-4b19-9bfa-0def6091e421-typha-certs\") pod \"calico-typha-688dc84897-kns7s\" (UID: \"48af392b-8906-4b19-9bfa-0def6091e421\") " pod="calico-system/calico-typha-688dc84897-kns7s" Sep 10 06:56:01.290783 kubelet[2913]: I0910 06:56:01.290713 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48af392b-8906-4b19-9bfa-0def6091e421-tigera-ca-bundle\") pod \"calico-typha-688dc84897-kns7s\" (UID: \"48af392b-8906-4b19-9bfa-0def6091e421\") " pod="calico-system/calico-typha-688dc84897-kns7s" Sep 10 06:56:01.528593 systemd[1]: Created slice kubepods-besteffort-pode9d68998_3a94_43d2_a1f4_e2c90a48b3cf.slice - libcontainer container kubepods-besteffort-pode9d68998_3a94_43d2_a1f4_e2c90a48b3cf.slice. 
Sep 10 06:56:01.592892 kubelet[2913]: I0910 06:56:01.592636 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9d68998-3a94-43d2-a1f4-e2c90a48b3cf-lib-modules\") pod \"calico-node-pf8p7\" (UID: \"e9d68998-3a94-43d2-a1f4-e2c90a48b3cf\") " pod="calico-system/calico-node-pf8p7" Sep 10 06:56:01.592892 kubelet[2913]: I0910 06:56:01.592718 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e9d68998-3a94-43d2-a1f4-e2c90a48b3cf-var-lib-calico\") pod \"calico-node-pf8p7\" (UID: \"e9d68998-3a94-43d2-a1f4-e2c90a48b3cf\") " pod="calico-system/calico-node-pf8p7" Sep 10 06:56:01.592892 kubelet[2913]: I0910 06:56:01.592751 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9d68998-3a94-43d2-a1f4-e2c90a48b3cf-xtables-lock\") pod \"calico-node-pf8p7\" (UID: \"e9d68998-3a94-43d2-a1f4-e2c90a48b3cf\") " pod="calico-system/calico-node-pf8p7" Sep 10 06:56:01.592892 kubelet[2913]: I0910 06:56:01.592781 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e9d68998-3a94-43d2-a1f4-e2c90a48b3cf-cni-net-dir\") pod \"calico-node-pf8p7\" (UID: \"e9d68998-3a94-43d2-a1f4-e2c90a48b3cf\") " pod="calico-system/calico-node-pf8p7" Sep 10 06:56:01.592892 kubelet[2913]: I0910 06:56:01.592818 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e9d68998-3a94-43d2-a1f4-e2c90a48b3cf-node-certs\") pod \"calico-node-pf8p7\" (UID: \"e9d68998-3a94-43d2-a1f4-e2c90a48b3cf\") " pod="calico-system/calico-node-pf8p7" Sep 10 06:56:01.593460 kubelet[2913]: I0910 06:56:01.592846 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e9d68998-3a94-43d2-a1f4-e2c90a48b3cf-policysync\") pod \"calico-node-pf8p7\" (UID: \"e9d68998-3a94-43d2-a1f4-e2c90a48b3cf\") " pod="calico-system/calico-node-pf8p7" Sep 10 06:56:01.593460 kubelet[2913]: I0910 06:56:01.592870 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e9d68998-3a94-43d2-a1f4-e2c90a48b3cf-cni-bin-dir\") pod \"calico-node-pf8p7\" (UID: \"e9d68998-3a94-43d2-a1f4-e2c90a48b3cf\") " pod="calico-system/calico-node-pf8p7" Sep 10 06:56:01.593460 kubelet[2913]: I0910 06:56:01.592895 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e9d68998-3a94-43d2-a1f4-e2c90a48b3cf-cni-log-dir\") pod \"calico-node-pf8p7\" (UID: \"e9d68998-3a94-43d2-a1f4-e2c90a48b3cf\") " pod="calico-system/calico-node-pf8p7" Sep 10 06:56:01.593460 kubelet[2913]: I0910 06:56:01.592923 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9d68998-3a94-43d2-a1f4-e2c90a48b3cf-tigera-ca-bundle\") pod \"calico-node-pf8p7\" (UID: \"e9d68998-3a94-43d2-a1f4-e2c90a48b3cf\") " pod="calico-system/calico-node-pf8p7" Sep 10 06:56:01.593460 kubelet[2913]: I0910 06:56:01.592953 2913 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmrtz\" (UniqueName: \"kubernetes.io/projected/e9d68998-3a94-43d2-a1f4-e2c90a48b3cf-kube-api-access-mmrtz\") pod \"calico-node-pf8p7\" (UID: \"e9d68998-3a94-43d2-a1f4-e2c90a48b3cf\") " pod="calico-system/calico-node-pf8p7" Sep 10 06:56:01.593738 kubelet[2913]: I0910 06:56:01.592983 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e9d68998-3a94-43d2-a1f4-e2c90a48b3cf-var-run-calico\") pod \"calico-node-pf8p7\" (UID: \"e9d68998-3a94-43d2-a1f4-e2c90a48b3cf\") " pod="calico-system/calico-node-pf8p7" Sep 10 06:56:01.593738 kubelet[2913]: I0910 06:56:01.593013 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e9d68998-3a94-43d2-a1f4-e2c90a48b3cf-flexvol-driver-host\") pod \"calico-node-pf8p7\" (UID: \"e9d68998-3a94-43d2-a1f4-e2c90a48b3cf\") " pod="calico-system/calico-node-pf8p7" Sep 10 06:56:01.700180 kubelet[2913]: E0910 06:56:01.698230 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.700180 kubelet[2913]: W0910 06:56:01.698273 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.700180 kubelet[2913]: E0910 06:56:01.699104 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:01.700180 kubelet[2913]: E0910 06:56:01.699463 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.700180 kubelet[2913]: W0910 06:56:01.699515 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.700180 kubelet[2913]: E0910 06:56:01.699536 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:01.700180 kubelet[2913]: E0910 06:56:01.699891 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.700180 kubelet[2913]: W0910 06:56:01.699908 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.700180 kubelet[2913]: E0910 06:56:01.699952 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 06:56:01.700866 kubelet[2913]: E0910 06:56:01.700453 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.700866 kubelet[2913]: W0910 06:56:01.700466 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.700866 kubelet[2913]: E0910 06:56:01.700522 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:01.707219 kubelet[2913]: E0910 06:56:01.707154 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.707404 kubelet[2913]: W0910 06:56:01.707378 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.707532 kubelet[2913]: E0910 06:56:01.707508 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:01.818657 kubelet[2913]: E0910 06:56:01.817649 2913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cqtk9" podUID="1c0c2c0d-f637-47eb-bc16-5e32512b9b12" Sep 10 06:56:01.871040 kubelet[2913]: E0910 06:56:01.870983 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.872526 kubelet[2913]: W0910 06:56:01.871369 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.872526 kubelet[2913]: E0910 06:56:01.871409 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:01.873024 kubelet[2913]: E0910 06:56:01.872945 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.873024 kubelet[2913]: W0910 06:56:01.872973 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.873024 kubelet[2913]: E0910 06:56:01.872993 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 06:56:01.874835 kubelet[2913]: E0910 06:56:01.873545 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.874835 kubelet[2913]: W0910 06:56:01.873564 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.874835 kubelet[2913]: E0910 06:56:01.873581 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:01.875180 kubelet[2913]: E0910 06:56:01.875139 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.875375 kubelet[2913]: W0910 06:56:01.875307 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.875375 kubelet[2913]: E0910 06:56:01.875336 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:01.876562 kubelet[2913]: E0910 06:56:01.876511 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.876972 kubelet[2913]: W0910 06:56:01.876948 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.877733 kubelet[2913]: E0910 06:56:01.877271 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:01.879048 kubelet[2913]: E0910 06:56:01.878996 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.879348 kubelet[2913]: W0910 06:56:01.879253 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.879348 kubelet[2913]: E0910 06:56:01.879278 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:01.880713 kubelet[2913]: E0910 06:56:01.880538 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.880713 kubelet[2913]: W0910 06:56:01.880561 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.880713 kubelet[2913]: E0910 06:56:01.880579 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 06:56:01.882035 kubelet[2913]: E0910 06:56:01.880984 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.882035 kubelet[2913]: W0910 06:56:01.880999 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.882035 kubelet[2913]: E0910 06:56:01.881269 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:01.882035 kubelet[2913]: E0910 06:56:01.881622 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.882035 kubelet[2913]: W0910 06:56:01.881639 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.882035 kubelet[2913]: E0910 06:56:01.881655 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:01.883537 kubelet[2913]: E0910 06:56:01.882583 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.883537 kubelet[2913]: W0910 06:56:01.883244 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.883537 kubelet[2913]: E0910 06:56:01.883263 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:01.884124 kubelet[2913]: E0910 06:56:01.884025 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.884124 kubelet[2913]: W0910 06:56:01.884043 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.884124 kubelet[2913]: E0910 06:56:01.884059 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:01.885433 kubelet[2913]: E0910 06:56:01.885302 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.885433 kubelet[2913]: W0910 06:56:01.885322 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.885433 kubelet[2913]: E0910 06:56:01.885339 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 06:56:01.885940 kubelet[2913]: E0910 06:56:01.885848 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.885940 kubelet[2913]: W0910 06:56:01.885867 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.885940 kubelet[2913]: E0910 06:56:01.885883 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:01.886701 kubelet[2913]: E0910 06:56:01.886610 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.886701 kubelet[2913]: W0910 06:56:01.886629 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.886701 kubelet[2913]: E0910 06:56:01.886644 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:01.887673 kubelet[2913]: E0910 06:56:01.887566 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.887673 kubelet[2913]: W0910 06:56:01.887586 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.887673 kubelet[2913]: E0910 06:56:01.887603 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:01.889239 kubelet[2913]: E0910 06:56:01.888547 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.889239 kubelet[2913]: W0910 06:56:01.888567 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.889239 kubelet[2913]: E0910 06:56:01.888583 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:01.889653 kubelet[2913]: E0910 06:56:01.889634 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.889882 kubelet[2913]: W0910 06:56:01.889738 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.889882 kubelet[2913]: E0910 06:56:01.889763 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 06:56:01.890260 kubelet[2913]: E0910 06:56:01.890091 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.890380 kubelet[2913]: W0910 06:56:01.890359 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.891294 kubelet[2913]: E0910 06:56:01.891177 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:01.891747 kubelet[2913]: E0910 06:56:01.891583 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.891747 kubelet[2913]: W0910 06:56:01.891601 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.891747 kubelet[2913]: E0910 06:56:01.891617 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:01.892227 kubelet[2913]: E0910 06:56:01.892003 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.892227 kubelet[2913]: W0910 06:56:01.892020 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.892227 kubelet[2913]: E0910 06:56:01.892036 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:01.895775 kubelet[2913]: E0910 06:56:01.895590 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.895775 kubelet[2913]: W0910 06:56:01.895619 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.895775 kubelet[2913]: E0910 06:56:01.895640 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 06:56:01.895775 kubelet[2913]: I0910 06:56:01.895686 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1c0c2c0d-f637-47eb-bc16-5e32512b9b12-varrun\") pod \"csi-node-driver-cqtk9\" (UID: \"1c0c2c0d-f637-47eb-bc16-5e32512b9b12\") " pod="calico-system/csi-node-driver-cqtk9" Sep 10 06:56:01.898463 kubelet[2913]: E0910 06:56:01.898279 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.898463 kubelet[2913]: W0910 06:56:01.898308 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.898463 kubelet[2913]: E0910 06:56:01.898341 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:01.898463 kubelet[2913]: I0910 06:56:01.898373 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1c0c2c0d-f637-47eb-bc16-5e32512b9b12-registration-dir\") pod \"csi-node-driver-cqtk9\" (UID: \"1c0c2c0d-f637-47eb-bc16-5e32512b9b12\") " pod="calico-system/csi-node-driver-cqtk9" Sep 10 06:56:01.899122 kubelet[2913]: E0910 06:56:01.898994 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.899122 kubelet[2913]: W0910 06:56:01.899019 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.899122 kubelet[2913]: E0910 06:56:01.899061 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:01.899122 kubelet[2913]: I0910 06:56:01.899100 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c0c2c0d-f637-47eb-bc16-5e32512b9b12-kubelet-dir\") pod \"csi-node-driver-cqtk9\" (UID: \"1c0c2c0d-f637-47eb-bc16-5e32512b9b12\") " pod="calico-system/csi-node-driver-cqtk9" Sep 10 06:56:01.901554 kubelet[2913]: E0910 06:56:01.901370 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.901554 kubelet[2913]: W0910 06:56:01.901393 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.901554 kubelet[2913]: E0910 06:56:01.901456 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 06:56:01.902209 kubelet[2913]: E0910 06:56:01.902040 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.902209 kubelet[2913]: W0910 06:56:01.902059 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.902209 kubelet[2913]: E0910 06:56:01.902099 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:01.903123 kubelet[2913]: E0910 06:56:01.902970 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.903123 kubelet[2913]: W0910 06:56:01.902989 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.903123 kubelet[2913]: E0910 06:56:01.903044 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:01.903123 kubelet[2913]: I0910 06:56:01.903084 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1c0c2c0d-f637-47eb-bc16-5e32512b9b12-socket-dir\") pod \"csi-node-driver-cqtk9\" (UID: \"1c0c2c0d-f637-47eb-bc16-5e32512b9b12\") " pod="calico-system/csi-node-driver-cqtk9" Sep 10 06:56:01.904059 kubelet[2913]: E0910 06:56:01.903967 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.904059 kubelet[2913]: W0910 06:56:01.903986 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.904059 kubelet[2913]: E0910 06:56:01.904025 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:01.904775 kubelet[2913]: E0910 06:56:01.904695 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.905049 kubelet[2913]: W0910 06:56:01.904856 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.905282 kubelet[2913]: E0910 06:56:01.904880 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 06:56:01.905567 kubelet[2913]: E0910 06:56:01.905540 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.905567 kubelet[2913]: W0910 06:56:01.905562 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.905826 kubelet[2913]: E0910 06:56:01.905587 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:01.905826 kubelet[2913]: E0910 06:56:01.905815 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.906271 kubelet[2913]: W0910 06:56:01.905829 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.906271 kubelet[2913]: E0910 06:56:01.905856 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:01.906271 kubelet[2913]: I0910 06:56:01.905888 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgfpk\" (UniqueName: \"kubernetes.io/projected/1c0c2c0d-f637-47eb-bc16-5e32512b9b12-kube-api-access-hgfpk\") pod \"csi-node-driver-cqtk9\" (UID: \"1c0c2c0d-f637-47eb-bc16-5e32512b9b12\") " pod="calico-system/csi-node-driver-cqtk9" Sep 10 06:56:01.907005 kubelet[2913]: E0910 06:56:01.906903 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.907005 kubelet[2913]: W0910 06:56:01.906922 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.907005 kubelet[2913]: E0910 06:56:01.906938 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:01.907323 kubelet[2913]: E0910 06:56:01.907301 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.907323 kubelet[2913]: W0910 06:56:01.907322 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.907498 kubelet[2913]: E0910 06:56:01.907346 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 06:56:01.907882 kubelet[2913]: E0910 06:56:01.907567 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.907882 kubelet[2913]: W0910 06:56:01.907581 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.907882 kubelet[2913]: E0910 06:56:01.907596 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:01.907882 kubelet[2913]: E0910 06:56:01.907812 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.907882 kubelet[2913]: W0910 06:56:01.907826 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.907882 kubelet[2913]: E0910 06:56:01.907839 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:01.908372 kubelet[2913]: E0910 06:56:01.908357 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:01.908372 kubelet[2913]: W0910 06:56:01.908371 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:01.909050 kubelet[2913]: E0910 06:56:01.908386 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:02.008025 kubelet[2913]: E0910 06:56:02.007975 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.008025 kubelet[2913]: W0910 06:56:02.008012 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.008025 kubelet[2913]: E0910 06:56:02.008046 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:02.009424 kubelet[2913]: E0910 06:56:02.009398 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.009424 kubelet[2913]: W0910 06:56:02.009421 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.009730 kubelet[2913]: E0910 06:56:02.009450 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 06:56:02.009793 kubelet[2913]: E0910 06:56:02.009757 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.009793 kubelet[2913]: W0910 06:56:02.009772 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.009971 kubelet[2913]: E0910 06:56:02.009942 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:02.010138 kubelet[2913]: E0910 06:56:02.010008 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.010418 kubelet[2913]: W0910 06:56:02.010237 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.010418 kubelet[2913]: E0910 06:56:02.010261 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:02.010891 kubelet[2913]: E0910 06:56:02.010872 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.011521 kubelet[2913]: W0910 06:56:02.011409 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.011521 kubelet[2913]: E0910 06:56:02.011456 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:02.012002 kubelet[2913]: E0910 06:56:02.011910 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.012002 kubelet[2913]: W0910 06:56:02.011930 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.012002 kubelet[2913]: E0910 06:56:02.011969 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:02.012528 kubelet[2913]: E0910 06:56:02.012435 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.012528 kubelet[2913]: W0910 06:56:02.012455 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.012528 kubelet[2913]: E0910 06:56:02.012493 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 06:56:02.013550 kubelet[2913]: E0910 06:56:02.013458 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.013550 kubelet[2913]: W0910 06:56:02.013478 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.013550 kubelet[2913]: E0910 06:56:02.013521 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:02.014126 kubelet[2913]: E0910 06:56:02.013943 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.014126 kubelet[2913]: W0910 06:56:02.013961 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.014126 kubelet[2913]: E0910 06:56:02.014002 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:02.014759 kubelet[2913]: E0910 06:56:02.014640 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.014759 kubelet[2913]: W0910 06:56:02.014659 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.014759 kubelet[2913]: E0910 06:56:02.014699 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:02.015454 kubelet[2913]: E0910 06:56:02.015367 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.015454 kubelet[2913]: W0910 06:56:02.015386 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.015579 kubelet[2913]: E0910 06:56:02.015496 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:02.016024 kubelet[2913]: E0910 06:56:02.015936 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.016024 kubelet[2913]: W0910 06:56:02.015956 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.016024 kubelet[2913]: E0910 06:56:02.015999 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 06:56:02.016556 kubelet[2913]: E0910 06:56:02.016468 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.016556 kubelet[2913]: W0910 06:56:02.016486 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.016802 kubelet[2913]: E0910 06:56:02.016758 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:02.016977 kubelet[2913]: E0910 06:56:02.016934 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.016977 kubelet[2913]: W0910 06:56:02.016953 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.017316 kubelet[2913]: E0910 06:56:02.017176 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:02.017561 kubelet[2913]: E0910 06:56:02.017431 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.017561 kubelet[2913]: W0910 06:56:02.017444 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.017807 kubelet[2913]: E0910 06:56:02.017695 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:02.018149 kubelet[2913]: E0910 06:56:02.018129 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.019226 kubelet[2913]: W0910 06:56:02.018267 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.019533 kubelet[2913]: E0910 06:56:02.019509 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:02.019717 kubelet[2913]: E0910 06:56:02.019664 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.019874 kubelet[2913]: W0910 06:56:02.019801 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.019874 kubelet[2913]: E0910 06:56:02.019859 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 06:56:02.020407 kubelet[2913]: E0910 06:56:02.020274 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.020407 kubelet[2913]: W0910 06:56:02.020294 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.020918 kubelet[2913]: E0910 06:56:02.020703 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:02.020918 kubelet[2913]: E0910 06:56:02.020733 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.020918 kubelet[2913]: W0910 06:56:02.020747 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.020918 kubelet[2913]: E0910 06:56:02.020763 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:02.021514 kubelet[2913]: E0910 06:56:02.021494 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.021636 kubelet[2913]: W0910 06:56:02.021614 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.021785 kubelet[2913]: E0910 06:56:02.021730 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:02.022590 kubelet[2913]: E0910 06:56:02.022571 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.022722 kubelet[2913]: W0910 06:56:02.022694 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.023638 kubelet[2913]: E0910 06:56:02.022941 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:02.023938 kubelet[2913]: E0910 06:56:02.023918 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.025233 kubelet[2913]: W0910 06:56:02.024974 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.025233 kubelet[2913]: E0910 06:56:02.025053 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 06:56:02.025799 kubelet[2913]: E0910 06:56:02.025675 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.025799 kubelet[2913]: W0910 06:56:02.025696 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.025799 kubelet[2913]: E0910 06:56:02.025733 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:02.026408 kubelet[2913]: E0910 06:56:02.026294 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.026408 kubelet[2913]: W0910 06:56:02.026314 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.026408 kubelet[2913]: E0910 06:56:02.026349 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:02.026827 kubelet[2913]: E0910 06:56:02.026807 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.027750 kubelet[2913]: W0910 06:56:02.026911 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.027750 kubelet[2913]: E0910 06:56:02.026937 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:02.412516 kubelet[2913]: E0910 06:56:02.412445 2913 projected.go:288] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 10 06:56:02.412516 kubelet[2913]: E0910 06:56:02.412515 2913 projected.go:194] Error preparing data for projected volume kube-api-access-7mf88 for pod calico-system/calico-typha-688dc84897-kns7s: failed to sync configmap cache: timed out waiting for the condition Sep 10 06:56:02.413286 kubelet[2913]: E0910 06:56:02.412657 2913 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/48af392b-8906-4b19-9bfa-0def6091e421-kube-api-access-7mf88 podName:48af392b-8906-4b19-9bfa-0def6091e421 nodeName:}" failed. No retries permitted until 2025-09-10 06:56:02.912602448 +0000 UTC m=+27.446655846 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7mf88" (UniqueName: "kubernetes.io/projected/48af392b-8906-4b19-9bfa-0def6091e421-kube-api-access-7mf88") pod "calico-typha-688dc84897-kns7s" (UID: "48af392b-8906-4b19-9bfa-0def6091e421") : failed to sync configmap cache: timed out waiting for the condition Sep 10 06:56:02.419898 kubelet[2913]: E0910 06:56:02.419783 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.419898 kubelet[2913]: W0910 06:56:02.419811 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.419898 kubelet[2913]: E0910 06:56:02.419837 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:02.447436 kubelet[2913]: E0910 06:56:02.447396 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.447436 kubelet[2913]: W0910 06:56:02.447430 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.448411 kubelet[2913]: E0910 06:56:02.447468 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:02.448411 kubelet[2913]: E0910 06:56:02.447732 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.448411 kubelet[2913]: W0910 06:56:02.447745 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.448411 kubelet[2913]: E0910 06:56:02.447760 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:02.521240 kubelet[2913]: E0910 06:56:02.521130 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.521240 kubelet[2913]: W0910 06:56:02.521170 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.521607 kubelet[2913]: E0910 06:56:02.521276 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 06:56:02.623088 kubelet[2913]: E0910 06:56:02.623023 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.623088 kubelet[2913]: W0910 06:56:02.623059 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.623088 kubelet[2913]: E0910 06:56:02.623088 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:02.724350 kubelet[2913]: E0910 06:56:02.724110 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.725267 kubelet[2913]: W0910 06:56:02.725076 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.725267 kubelet[2913]: E0910 06:56:02.725118 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:02.736088 containerd[1582]: time="2025-09-10T06:56:02.735678904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pf8p7,Uid:e9d68998-3a94-43d2-a1f4-e2c90a48b3cf,Namespace:calico-system,Attempt:0,}" Sep 10 06:56:02.812538 containerd[1582]: time="2025-09-10T06:56:02.812393684Z" level=info msg="connecting to shim d1fe8d0e1ae42c152bb6a729983a55b76e3cc7b658747305a40e748cda90901d" address="unix:///run/containerd/s/d74186429200248dc0a7bfcbcfd02e16293758b9e301b00122f07fd69c1a0ea2" namespace=k8s.io protocol=ttrpc version=3 Sep 10 06:56:02.827145 kubelet[2913]: E0910 06:56:02.826899 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.827145 kubelet[2913]: W0910 06:56:02.826935 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.827145 kubelet[2913]: E0910 06:56:02.827078 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:02.850689 systemd[1]: Started cri-containerd-d1fe8d0e1ae42c152bb6a729983a55b76e3cc7b658747305a40e748cda90901d.scope - libcontainer container d1fe8d0e1ae42c152bb6a729983a55b76e3cc7b658747305a40e748cda90901d. Sep 10 06:56:02.928758 kubelet[2913]: E0910 06:56:02.928678 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.929237 kubelet[2913]: W0910 06:56:02.928722 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.929237 kubelet[2913]: E0910 06:56:02.928963 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 06:56:02.929730 kubelet[2913]: E0910 06:56:02.929657 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.929730 kubelet[2913]: W0910 06:56:02.929694 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.929730 kubelet[2913]: E0910 06:56:02.929724 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:02.932103 kubelet[2913]: E0910 06:56:02.932077 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.932103 kubelet[2913]: W0910 06:56:02.932101 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.932948 kubelet[2913]: E0910 06:56:02.932122 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:02.932948 kubelet[2913]: E0910 06:56:02.932760 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.933462 kubelet[2913]: W0910 06:56:02.933007 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.933672 kubelet[2913]: E0910 06:56:02.933033 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:02.935353 kubelet[2913]: E0910 06:56:02.935150 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.935449 kubelet[2913]: W0910 06:56:02.935227 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.935608 kubelet[2913]: E0910 06:56:02.935555 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 06:56:02.953341 containerd[1582]: time="2025-09-10T06:56:02.951117741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pf8p7,Uid:e9d68998-3a94-43d2-a1f4-e2c90a48b3cf,Namespace:calico-system,Attempt:0,} returns sandbox id \"d1fe8d0e1ae42c152bb6a729983a55b76e3cc7b658747305a40e748cda90901d\"" Sep 10 06:56:02.958869 kubelet[2913]: E0910 06:56:02.958814 2913 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 06:56:02.958869 kubelet[2913]: W0910 06:56:02.958847 2913 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 06:56:02.959078 kubelet[2913]: E0910 06:56:02.958883 2913 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 06:56:02.963412 containerd[1582]: time="2025-09-10T06:56:02.963278141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 10 06:56:03.042647 containerd[1582]: time="2025-09-10T06:56:03.042575121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-688dc84897-kns7s,Uid:48af392b-8906-4b19-9bfa-0def6091e421,Namespace:calico-system,Attempt:0,}" Sep 10 06:56:03.168114 containerd[1582]: time="2025-09-10T06:56:03.168032655Z" level=info msg="connecting to shim 273b478e06b50234134707bf0df7873121ec057a3d109db83e3e92a8c71e61b3" address="unix:///run/containerd/s/f848f37efada32845bde7e2fa8c4ecf975d8d07bd9ea6685406b17ad789d2dbd" namespace=k8s.io protocol=ttrpc version=3 Sep 10 06:56:03.233508 systemd[1]: Started cri-containerd-273b478e06b50234134707bf0df7873121ec057a3d109db83e3e92a8c71e61b3.scope - libcontainer container 273b478e06b50234134707bf0df7873121ec057a3d109db83e3e92a8c71e61b3. Sep 10 06:56:03.409514 containerd[1582]: time="2025-09-10T06:56:03.408871922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-688dc84897-kns7s,Uid:48af392b-8906-4b19-9bfa-0def6091e421,Namespace:calico-system,Attempt:0,} returns sandbox id \"273b478e06b50234134707bf0df7873121ec057a3d109db83e3e92a8c71e61b3\"" Sep 10 06:56:03.731473 kubelet[2913]: E0910 06:56:03.731130 2913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cqtk9" podUID="1c0c2c0d-f637-47eb-bc16-5e32512b9b12" Sep 10 06:56:04.686922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount317976361.mount: Deactivated successfully. 
Sep 10 06:56:04.817280 containerd[1582]: time="2025-09-10T06:56:04.816247580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:04.817280 containerd[1582]: time="2025-09-10T06:56:04.817236740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5939501" Sep 10 06:56:04.817948 containerd[1582]: time="2025-09-10T06:56:04.817914915Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:04.820469 containerd[1582]: time="2025-09-10T06:56:04.820432830Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:04.821512 containerd[1582]: time="2025-09-10T06:56:04.821477217Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.858108395s" Sep 10 06:56:04.821669 containerd[1582]: time="2025-09-10T06:56:04.821641891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 10 06:56:04.823436 containerd[1582]: time="2025-09-10T06:56:04.823404477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 10 06:56:04.826304 containerd[1582]: time="2025-09-10T06:56:04.826240752Z" level=info msg="CreateContainer within sandbox \"d1fe8d0e1ae42c152bb6a729983a55b76e3cc7b658747305a40e748cda90901d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 10 06:56:04.843599 containerd[1582]: time="2025-09-10T06:56:04.843533440Z" level=info msg="Container e4824c6f1a046e5dd88a650ac363ba8a61e339a7f665cb59039eb0ed0a502421: CDI devices from CRI Config.CDIDevices: []" Sep 10 06:56:04.862048 containerd[1582]: time="2025-09-10T06:56:04.861994745Z" level=info msg="CreateContainer within sandbox \"d1fe8d0e1ae42c152bb6a729983a55b76e3cc7b658747305a40e748cda90901d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e4824c6f1a046e5dd88a650ac363ba8a61e339a7f665cb59039eb0ed0a502421\"" Sep 10 06:56:04.864874 containerd[1582]: time="2025-09-10T06:56:04.864836000Z" level=info msg="StartContainer for \"e4824c6f1a046e5dd88a650ac363ba8a61e339a7f665cb59039eb0ed0a502421\"" Sep 10 06:56:04.867644 containerd[1582]: time="2025-09-10T06:56:04.867535538Z" level=info msg="connecting to shim e4824c6f1a046e5dd88a650ac363ba8a61e339a7f665cb59039eb0ed0a502421" address="unix:///run/containerd/s/d74186429200248dc0a7bfcbcfd02e16293758b9e301b00122f07fd69c1a0ea2" protocol=ttrpc version=3 Sep 10 06:56:04.904494 systemd[1]: Started cri-containerd-e4824c6f1a046e5dd88a650ac363ba8a61e339a7f665cb59039eb0ed0a502421.scope - libcontainer container e4824c6f1a046e5dd88a650ac363ba8a61e339a7f665cb59039eb0ed0a502421. Sep 10 06:56:05.021713 systemd[1]: cri-containerd-e4824c6f1a046e5dd88a650ac363ba8a61e339a7f665cb59039eb0ed0a502421.scope: Deactivated successfully. 
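
As a quick consistency check (a sketch, not containerd code): the reported 1.858108395s pull time for pod2daemon-flexvol:v3.30.3 matches, to within about a tenth of a millisecond, the gap between the PullImage request logged at 06:56:02.963 and the Pulled event above, the small difference being containerd's internal measurement versus log-emission times.

package main

import (
    "fmt"
    "time"
)

func main() {
    // Timestamps copied from the two containerd messages above.
    start, _ := time.Parse(time.RFC3339Nano, "2025-09-10T06:56:02.963278141Z")
    done, _ := time.Parse(time.RFC3339Nano, "2025-09-10T06:56:04.821477217Z")
    fmt.Println(done.Sub(start)) // 1.858199076s, close to the reported 1.858108395s
}
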
Sep 10 06:56:05.034532 containerd[1582]: time="2025-09-10T06:56:05.034461575Z" level=info msg="received exit event container_id:\"e4824c6f1a046e5dd88a650ac363ba8a61e339a7f665cb59039eb0ed0a502421\" id:\"e4824c6f1a046e5dd88a650ac363ba8a61e339a7f665cb59039eb0ed0a502421\" pid:3518 exited_at:{seconds:1757487365 nanos:25866805}" Sep 10 06:56:05.051614 containerd[1582]: time="2025-09-10T06:56:05.051291345Z" level=info msg="StartContainer for \"e4824c6f1a046e5dd88a650ac363ba8a61e339a7f665cb59039eb0ed0a502421\" returns successfully" Sep 10 06:56:05.077323 containerd[1582]: time="2025-09-10T06:56:05.077253178Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e4824c6f1a046e5dd88a650ac363ba8a61e339a7f665cb59039eb0ed0a502421\" id:\"e4824c6f1a046e5dd88a650ac363ba8a61e339a7f665cb59039eb0ed0a502421\" pid:3518 exited_at:{seconds:1757487365 nanos:25866805}" Sep 10 06:56:05.110013 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4824c6f1a046e5dd88a650ac363ba8a61e339a7f665cb59039eb0ed0a502421-rootfs.mount: Deactivated successfully. Sep 10 06:56:05.731817 kubelet[2913]: E0910 06:56:05.731301 2913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cqtk9" podUID="1c0c2c0d-f637-47eb-bc16-5e32512b9b12" Sep 10 06:56:07.732642 kubelet[2913]: E0910 06:56:07.732520 2913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cqtk9" podUID="1c0c2c0d-f637-47eb-bc16-5e32512b9b12" Sep 10 06:56:08.912243 containerd[1582]: time="2025-09-10T06:56:08.912163946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:08.918547 containerd[1582]: time="2025-09-10T06:56:08.914810776Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33744548" Sep 10 06:56:08.918547 containerd[1582]: time="2025-09-10T06:56:08.916048620Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:08.920552 containerd[1582]: time="2025-09-10T06:56:08.920446163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:08.921562 containerd[1582]: time="2025-09-10T06:56:08.921307568Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 4.095579459s" Sep 10 06:56:08.921562 containerd[1582]: time="2025-09-10T06:56:08.921358819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 10 06:56:08.924344 containerd[1582]: time="2025-09-10T06:56:08.923863589Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 10 06:56:08.947510 containerd[1582]: time="2025-09-10T06:56:08.947437654Z" level=info msg="CreateContainer within sandbox \"273b478e06b50234134707bf0df7873121ec057a3d109db83e3e92a8c71e61b3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 10 06:56:08.963769 containerd[1582]: time="2025-09-10T06:56:08.962626265Z" level=info msg="Container 6f504085e2cb9251a7b81e913487bd097b31ff84a0bd7a4a480c19049d1bf79e: CDI devices from CRI Config.CDIDevices: []" Sep 10 06:56:08.967001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3818330403.mount: Deactivated successfully. Sep 10 06:56:08.977719 containerd[1582]: time="2025-09-10T06:56:08.977669669Z" level=info msg="CreateContainer within sandbox \"273b478e06b50234134707bf0df7873121ec057a3d109db83e3e92a8c71e61b3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6f504085e2cb9251a7b81e913487bd097b31ff84a0bd7a4a480c19049d1bf79e\"" Sep 10 06:56:08.980224 containerd[1582]: time="2025-09-10T06:56:08.979439197Z" level=info msg="StartContainer for \"6f504085e2cb9251a7b81e913487bd097b31ff84a0bd7a4a480c19049d1bf79e\"" Sep 10 06:56:08.988071 containerd[1582]: time="2025-09-10T06:56:08.988001538Z" level=info msg="connecting to shim 6f504085e2cb9251a7b81e913487bd097b31ff84a0bd7a4a480c19049d1bf79e" address="unix:///run/containerd/s/f848f37efada32845bde7e2fa8c4ecf975d8d07bd9ea6685406b17ad789d2dbd" protocol=ttrpc version=3 Sep 10 06:56:09.022440 systemd[1]: Started cri-containerd-6f504085e2cb9251a7b81e913487bd097b31ff84a0bd7a4a480c19049d1bf79e.scope - libcontainer container 6f504085e2cb9251a7b81e913487bd097b31ff84a0bd7a4a480c19049d1bf79e. Sep 10 06:56:09.114661 containerd[1582]: time="2025-09-10T06:56:09.114588277Z" level=info msg="StartContainer for \"6f504085e2cb9251a7b81e913487bd097b31ff84a0bd7a4a480c19049d1bf79e\" returns successfully" Sep 10 06:56:09.732824 kubelet[2913]: E0910 06:56:09.732017 2913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cqtk9" podUID="1c0c2c0d-f637-47eb-bc16-5e32512b9b12" Sep 10 06:56:10.021887 kubelet[2913]: I0910 06:56:10.021636 2913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-688dc84897-kns7s" podStartSLOduration=3.509438159 podStartE2EDuration="9.019617073s" podCreationTimestamp="2025-09-10 06:56:01 +0000 UTC" firstStartedPulling="2025-09-10 06:56:03.41323722 +0000 UTC m=+27.947290625" lastFinishedPulling="2025-09-10 06:56:08.92341613 +0000 UTC m=+33.457469539" observedRunningTime="2025-09-10 06:56:10.018477587 +0000 UTC m=+34.552530997" watchObservedRunningTime="2025-09-10 06:56:10.019617073 +0000 UTC m=+34.553670478" Sep 10 06:56:10.996528 kubelet[2913]: I0910 06:56:10.996414 2913 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 06:56:11.731610 kubelet[2913]: E0910 06:56:11.731006 2913 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cqtk9" podUID="1c0c2c0d-f637-47eb-bc16-5e32512b9b12" Sep 10 06:56:13.731792 kubelet[2913]: E0910 06:56:13.731734 2913 pod_workers.go:1301] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cqtk9" podUID="1c0c2c0d-f637-47eb-bc16-5e32512b9b12" Sep 10 06:56:13.894843 containerd[1582]: time="2025-09-10T06:56:13.894757106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:13.897512 containerd[1582]: time="2025-09-10T06:56:13.897460903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 10 06:56:13.899584 containerd[1582]: time="2025-09-10T06:56:13.899349253Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:13.904599 containerd[1582]: time="2025-09-10T06:56:13.904532612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:13.906731 containerd[1582]: time="2025-09-10T06:56:13.906511316Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.982602379s" Sep 10 06:56:13.906731 containerd[1582]: time="2025-09-10T06:56:13.906570656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 10 06:56:13.943338 containerd[1582]: time="2025-09-10T06:56:13.943174496Z" level=info msg="CreateContainer within sandbox \"d1fe8d0e1ae42c152bb6a729983a55b76e3cc7b658747305a40e748cda90901d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 10 06:56:13.956229 containerd[1582]: time="2025-09-10T06:56:13.956154063Z" level=info msg="Container edd8e1f93dbd086f2e20610c6268980cdb1e0bfe360b7e54aa572c28e3d41ff6: CDI devices from CRI Config.CDIDevices: []" Sep 10 06:56:13.966438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3986847965.mount: Deactivated successfully. 
Sep 10 06:56:13.977503 containerd[1582]: time="2025-09-10T06:56:13.977409667Z" level=info msg="CreateContainer within sandbox \"d1fe8d0e1ae42c152bb6a729983a55b76e3cc7b658747305a40e748cda90901d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"edd8e1f93dbd086f2e20610c6268980cdb1e0bfe360b7e54aa572c28e3d41ff6\"" Sep 10 06:56:13.984052 containerd[1582]: time="2025-09-10T06:56:13.983240841Z" level=info msg="StartContainer for \"edd8e1f93dbd086f2e20610c6268980cdb1e0bfe360b7e54aa572c28e3d41ff6\"" Sep 10 06:56:13.987730 containerd[1582]: time="2025-09-10T06:56:13.987685142Z" level=info msg="connecting to shim edd8e1f93dbd086f2e20610c6268980cdb1e0bfe360b7e54aa572c28e3d41ff6" address="unix:///run/containerd/s/d74186429200248dc0a7bfcbcfd02e16293758b9e301b00122f07fd69c1a0ea2" protocol=ttrpc version=3 Sep 10 06:56:14.007437 kubelet[2913]: I0910 06:56:14.006412 2913 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 06:56:14.053805 systemd[1]: Started cri-containerd-edd8e1f93dbd086f2e20610c6268980cdb1e0bfe360b7e54aa572c28e3d41ff6.scope - libcontainer container edd8e1f93dbd086f2e20610c6268980cdb1e0bfe360b7e54aa572c28e3d41ff6. Sep 10 06:56:14.237577 containerd[1582]: time="2025-09-10T06:56:14.236804769Z" level=info msg="StartContainer for \"edd8e1f93dbd086f2e20610c6268980cdb1e0bfe360b7e54aa572c28e3d41ff6\" returns successfully" Sep 10 06:56:15.367891 systemd[1]: cri-containerd-edd8e1f93dbd086f2e20610c6268980cdb1e0bfe360b7e54aa572c28e3d41ff6.scope: Deactivated successfully. Sep 10 06:56:15.368752 systemd[1]: cri-containerd-edd8e1f93dbd086f2e20610c6268980cdb1e0bfe360b7e54aa572c28e3d41ff6.scope: Consumed 833ms CPU time, 167.5M memory peak, 6.1M read from disk, 171.3M written to disk. Sep 10 06:56:15.459438 kubelet[2913]: I0910 06:56:15.459343 2913 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 10 06:56:15.491959 containerd[1582]: time="2025-09-10T06:56:15.491348866Z" level=info msg="received exit event container_id:\"edd8e1f93dbd086f2e20610c6268980cdb1e0bfe360b7e54aa572c28e3d41ff6\" id:\"edd8e1f93dbd086f2e20610c6268980cdb1e0bfe360b7e54aa572c28e3d41ff6\" pid:3619 exited_at:{seconds:1757487375 nanos:480608947}" Sep 10 06:56:15.493628 containerd[1582]: time="2025-09-10T06:56:15.493522167Z" level=info msg="TaskExit event in podsandbox handler container_id:\"edd8e1f93dbd086f2e20610c6268980cdb1e0bfe360b7e54aa572c28e3d41ff6\" id:\"edd8e1f93dbd086f2e20610c6268980cdb1e0bfe360b7e54aa572c28e3d41ff6\" pid:3619 exited_at:{seconds:1757487375 nanos:480608947}" Sep 10 06:56:15.556608 systemd[1]: Created slice kubepods-burstable-pod5783b913_fad6_49b5_a875_5706a6c8ba35.slice - libcontainer container kubepods-burstable-pod5783b913_fad6_49b5_a875_5706a6c8ba35.slice. 
Sep 10 06:56:15.582351 kubelet[2913]: I0910 06:56:15.581592 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b7949e4-64e9-4bb8-bda2-569ab0826025-config-volume\") pod \"coredns-668d6bf9bc-scp8z\" (UID: \"4b7949e4-64e9-4bb8-bda2-569ab0826025\") " pod="kube-system/coredns-668d6bf9bc-scp8z" Sep 10 06:56:15.582351 kubelet[2913]: I0910 06:56:15.581769 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5783b913-fad6-49b5-a875-5706a6c8ba35-config-volume\") pod \"coredns-668d6bf9bc-dx6ll\" (UID: \"5783b913-fad6-49b5-a875-5706a6c8ba35\") " pod="kube-system/coredns-668d6bf9bc-dx6ll" Sep 10 06:56:15.582351 kubelet[2913]: I0910 06:56:15.581919 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d85pq\" (UniqueName: \"kubernetes.io/projected/4b7949e4-64e9-4bb8-bda2-569ab0826025-kube-api-access-d85pq\") pod \"coredns-668d6bf9bc-scp8z\" (UID: \"4b7949e4-64e9-4bb8-bda2-569ab0826025\") " pod="kube-system/coredns-668d6bf9bc-scp8z" Sep 10 06:56:15.582351 kubelet[2913]: I0910 06:56:15.581964 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0b31fdb8-1f9c-4766-89d1-d243e0a8ca9b-calico-apiserver-certs\") pod \"calico-apiserver-6f7fbdcc55-nqblr\" (UID: \"0b31fdb8-1f9c-4766-89d1-d243e0a8ca9b\") " pod="calico-apiserver/calico-apiserver-6f7fbdcc55-nqblr" Sep 10 06:56:15.584758 kubelet[2913]: I0910 06:56:15.584051 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx7bs\" (UniqueName: \"kubernetes.io/projected/5783b913-fad6-49b5-a875-5706a6c8ba35-kube-api-access-zx7bs\") pod \"coredns-668d6bf9bc-dx6ll\" (UID: \"5783b913-fad6-49b5-a875-5706a6c8ba35\") " pod="kube-system/coredns-668d6bf9bc-dx6ll" Sep 10 06:56:15.593716 kubelet[2913]: I0910 06:56:15.591570 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf2tj\" (UniqueName: \"kubernetes.io/projected/0b31fdb8-1f9c-4766-89d1-d243e0a8ca9b-kube-api-access-kf2tj\") pod \"calico-apiserver-6f7fbdcc55-nqblr\" (UID: \"0b31fdb8-1f9c-4766-89d1-d243e0a8ca9b\") " pod="calico-apiserver/calico-apiserver-6f7fbdcc55-nqblr" Sep 10 06:56:15.623849 systemd[1]: Created slice kubepods-burstable-pod4b7949e4_64e9_4bb8_bda2_569ab0826025.slice - libcontainer container kubepods-burstable-pod4b7949e4_64e9_4bb8_bda2_569ab0826025.slice. Sep 10 06:56:15.649892 systemd[1]: Created slice kubepods-besteffort-pod0b31fdb8_1f9c_4766_89d1_d243e0a8ca9b.slice - libcontainer container kubepods-besteffort-pod0b31fdb8_1f9c_4766_89d1_d243e0a8ca9b.slice. Sep 10 06:56:15.671035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-edd8e1f93dbd086f2e20610c6268980cdb1e0bfe360b7e54aa572c28e3d41ff6-rootfs.mount: Deactivated successfully. Sep 10 06:56:15.678645 systemd[1]: Created slice kubepods-besteffort-pod15799f56_9106_476a_802d_a16e19297b9b.slice - libcontainer container kubepods-besteffort-pod15799f56_9106_476a_802d_a16e19297b9b.slice. 
Sep 10 06:56:15.705635 kubelet[2913]: I0910 06:56:15.692264 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/15799f56-9106-476a-802d-a16e19297b9b-whisker-backend-key-pair\") pod \"whisker-6578f45564-7zsdk\" (UID: \"15799f56-9106-476a-802d-a16e19297b9b\") " pod="calico-system/whisker-6578f45564-7zsdk" Sep 10 06:56:15.705635 kubelet[2913]: I0910 06:56:15.692323 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fgnp\" (UniqueName: \"kubernetes.io/projected/dc916d2a-3aec-44aa-a1d1-e2d88a05a93b-kube-api-access-2fgnp\") pod \"goldmane-54d579b49d-b94rd\" (UID: \"dc916d2a-3aec-44aa-a1d1-e2d88a05a93b\") " pod="calico-system/goldmane-54d579b49d-b94rd" Sep 10 06:56:15.705635 kubelet[2913]: I0910 06:56:15.692385 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ccd7dc8-b3cc-4530-a40f-ef06ec4a8e86-tigera-ca-bundle\") pod \"calico-kube-controllers-5986898b57-mfkdt\" (UID: \"2ccd7dc8-b3cc-4530-a40f-ef06ec4a8e86\") " pod="calico-system/calico-kube-controllers-5986898b57-mfkdt" Sep 10 06:56:15.705635 kubelet[2913]: I0910 06:56:15.692435 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b5cb7d1c-6f0d-4131-b03d-fe53b52de241-calico-apiserver-certs\") pod \"calico-apiserver-6f7fbdcc55-tw5d8\" (UID: \"b5cb7d1c-6f0d-4131-b03d-fe53b52de241\") " pod="calico-apiserver/calico-apiserver-6f7fbdcc55-tw5d8" Sep 10 06:56:15.705635 kubelet[2913]: I0910 06:56:15.692557 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thlp2\" (UniqueName: \"kubernetes.io/projected/2ccd7dc8-b3cc-4530-a40f-ef06ec4a8e86-kube-api-access-thlp2\") pod \"calico-kube-controllers-5986898b57-mfkdt\" (UID: \"2ccd7dc8-b3cc-4530-a40f-ef06ec4a8e86\") " pod="calico-system/calico-kube-controllers-5986898b57-mfkdt" Sep 10 06:56:15.693570 systemd[1]: Created slice kubepods-besteffort-podb5cb7d1c_6f0d_4131_b03d_fe53b52de241.slice - libcontainer container kubepods-besteffort-podb5cb7d1c_6f0d_4131_b03d_fe53b52de241.slice. 
Sep 10 06:56:15.709984 kubelet[2913]: I0910 06:56:15.692602 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxv7j\" (UniqueName: \"kubernetes.io/projected/b5cb7d1c-6f0d-4131-b03d-fe53b52de241-kube-api-access-dxv7j\") pod \"calico-apiserver-6f7fbdcc55-tw5d8\" (UID: \"b5cb7d1c-6f0d-4131-b03d-fe53b52de241\") " pod="calico-apiserver/calico-apiserver-6f7fbdcc55-tw5d8" Sep 10 06:56:15.709984 kubelet[2913]: I0910 06:56:15.692630 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc916d2a-3aec-44aa-a1d1-e2d88a05a93b-config\") pod \"goldmane-54d579b49d-b94rd\" (UID: \"dc916d2a-3aec-44aa-a1d1-e2d88a05a93b\") " pod="calico-system/goldmane-54d579b49d-b94rd" Sep 10 06:56:15.709984 kubelet[2913]: I0910 06:56:15.692657 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/dc916d2a-3aec-44aa-a1d1-e2d88a05a93b-goldmane-key-pair\") pod \"goldmane-54d579b49d-b94rd\" (UID: \"dc916d2a-3aec-44aa-a1d1-e2d88a05a93b\") " pod="calico-system/goldmane-54d579b49d-b94rd" Sep 10 06:56:15.709984 kubelet[2913]: I0910 06:56:15.692705 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15799f56-9106-476a-802d-a16e19297b9b-whisker-ca-bundle\") pod \"whisker-6578f45564-7zsdk\" (UID: \"15799f56-9106-476a-802d-a16e19297b9b\") " pod="calico-system/whisker-6578f45564-7zsdk" Sep 10 06:56:15.709984 kubelet[2913]: I0910 06:56:15.692733 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc916d2a-3aec-44aa-a1d1-e2d88a05a93b-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-b94rd\" (UID: \"dc916d2a-3aec-44aa-a1d1-e2d88a05a93b\") " pod="calico-system/goldmane-54d579b49d-b94rd" Sep 10 06:56:15.715146 kubelet[2913]: I0910 06:56:15.692770 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs49g\" (UniqueName: \"kubernetes.io/projected/15799f56-9106-476a-802d-a16e19297b9b-kube-api-access-rs49g\") pod \"whisker-6578f45564-7zsdk\" (UID: \"15799f56-9106-476a-802d-a16e19297b9b\") " pod="calico-system/whisker-6578f45564-7zsdk" Sep 10 06:56:15.711933 systemd[1]: Created slice kubepods-besteffort-poddc916d2a_3aec_44aa_a1d1_e2d88a05a93b.slice - libcontainer container kubepods-besteffort-poddc916d2a_3aec_44aa_a1d1_e2d88a05a93b.slice. Sep 10 06:56:15.736581 systemd[1]: Created slice kubepods-besteffort-pod2ccd7dc8_b3cc_4530_a40f_ef06ec4a8e86.slice - libcontainer container kubepods-besteffort-pod2ccd7dc8_b3cc_4530_a40f_ef06ec4a8e86.slice. Sep 10 06:56:15.830230 systemd[1]: Created slice kubepods-besteffort-pod1c0c2c0d_f637_47eb_bc16_5e32512b9b12.slice - libcontainer container kubepods-besteffort-pod1c0c2c0d_f637_47eb_bc16_5e32512b9b12.slice. 
Sep 10 06:56:15.865223 containerd[1582]: time="2025-09-10T06:56:15.862927391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cqtk9,Uid:1c0c2c0d-f637-47eb-bc16-5e32512b9b12,Namespace:calico-system,Attempt:0,}" Sep 10 06:56:15.929841 containerd[1582]: time="2025-09-10T06:56:15.928655470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dx6ll,Uid:5783b913-fad6-49b5-a875-5706a6c8ba35,Namespace:kube-system,Attempt:0,}" Sep 10 06:56:15.934673 containerd[1582]: time="2025-09-10T06:56:15.934640778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-scp8z,Uid:4b7949e4-64e9-4bb8-bda2-569ab0826025,Namespace:kube-system,Attempt:0,}" Sep 10 06:56:15.970487 containerd[1582]: time="2025-09-10T06:56:15.970344458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7fbdcc55-nqblr,Uid:0b31fdb8-1f9c-4766-89d1-d243e0a8ca9b,Namespace:calico-apiserver,Attempt:0,}" Sep 10 06:56:16.014231 containerd[1582]: time="2025-09-10T06:56:16.014134844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6578f45564-7zsdk,Uid:15799f56-9106-476a-802d-a16e19297b9b,Namespace:calico-system,Attempt:0,}" Sep 10 06:56:16.019359 containerd[1582]: time="2025-09-10T06:56:16.019319852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7fbdcc55-tw5d8,Uid:b5cb7d1c-6f0d-4131-b03d-fe53b52de241,Namespace:calico-apiserver,Attempt:0,}" Sep 10 06:56:16.087595 containerd[1582]: time="2025-09-10T06:56:16.086313775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 10 06:56:16.098288 containerd[1582]: time="2025-09-10T06:56:16.097357372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5986898b57-mfkdt,Uid:2ccd7dc8-b3cc-4530-a40f-ef06ec4a8e86,Namespace:calico-system,Attempt:0,}" Sep 10 06:56:16.124989 containerd[1582]: time="2025-09-10T06:56:16.124763554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-b94rd,Uid:dc916d2a-3aec-44aa-a1d1-e2d88a05a93b,Namespace:calico-system,Attempt:0,}" Sep 10 06:56:16.423371 containerd[1582]: time="2025-09-10T06:56:16.423291837Z" level=error msg="Failed to destroy network for sandbox \"15fedb616eaa751aa2a5c31c31f1a07279410507da5491a9a1e6d37032d0c09f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 06:56:16.428472 containerd[1582]: time="2025-09-10T06:56:16.428399512Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dx6ll,Uid:5783b913-fad6-49b5-a875-5706a6c8ba35,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"15fedb616eaa751aa2a5c31c31f1a07279410507da5491a9a1e6d37032d0c09f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 06:56:16.430086 containerd[1582]: time="2025-09-10T06:56:16.430039408Z" level=error msg="Failed to destroy network for sandbox \"43fb5be58fa63fe4935d88af489219130a71199d19898a973480b2f480407700\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 06:56:16.430659 kubelet[2913]: E0910 06:56:16.430563 2913 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15fedb616eaa751aa2a5c31c31f1a07279410507da5491a9a1e6d37032d0c09f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 06:56:16.430878 kubelet[2913]: E0910 06:56:16.430707 2913 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15fedb616eaa751aa2a5c31c31f1a07279410507da5491a9a1e6d37032d0c09f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dx6ll" Sep 10 06:56:16.430878 kubelet[2913]: E0910 06:56:16.430759 2913 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15fedb616eaa751aa2a5c31c31f1a07279410507da5491a9a1e6d37032d0c09f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dx6ll" Sep 10 06:56:16.430878 kubelet[2913]: E0910 06:56:16.430842 2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dx6ll_kube-system(5783b913-fad6-49b5-a875-5706a6c8ba35)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dx6ll_kube-system(5783b913-fad6-49b5-a875-5706a6c8ba35)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15fedb616eaa751aa2a5c31c31f1a07279410507da5491a9a1e6d37032d0c09f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dx6ll" podUID="5783b913-fad6-49b5-a875-5706a6c8ba35" Sep 10 06:56:16.437517 containerd[1582]: time="2025-09-10T06:56:16.437427178Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cqtk9,Uid:1c0c2c0d-f637-47eb-bc16-5e32512b9b12,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"43fb5be58fa63fe4935d88af489219130a71199d19898a973480b2f480407700\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 06:56:16.458479 kubelet[2913]: E0910 06:56:16.458344 2913 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43fb5be58fa63fe4935d88af489219130a71199d19898a973480b2f480407700\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 06:56:16.458479 kubelet[2913]: E0910 06:56:16.458443 2913 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43fb5be58fa63fe4935d88af489219130a71199d19898a973480b2f480407700\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cqtk9" Sep 10 06:56:16.458479 kubelet[2913]: E0910 06:56:16.458479 2913 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43fb5be58fa63fe4935d88af489219130a71199d19898a973480b2f480407700\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cqtk9" Sep 10 06:56:16.459642 kubelet[2913]: E0910 06:56:16.458554 2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cqtk9_calico-system(1c0c2c0d-f637-47eb-bc16-5e32512b9b12)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cqtk9_calico-system(1c0c2c0d-f637-47eb-bc16-5e32512b9b12)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43fb5be58fa63fe4935d88af489219130a71199d19898a973480b2f480407700\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cqtk9" podUID="1c0c2c0d-f637-47eb-bc16-5e32512b9b12" Sep 10 06:56:16.469928 containerd[1582]: time="2025-09-10T06:56:16.469742494Z" level=error msg="Failed to destroy network for sandbox \"2f51d76a88574301e01c5d25f6189bf3b1acb4fa2fe6b8e044328e4ecf554979\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 06:56:16.471758 containerd[1582]: time="2025-09-10T06:56:16.471720710Z" level=error msg="Failed to destroy network for sandbox \"d37464c82d342d5c78565e8a8700f2e6242b6d9382562d44be5a2f525dad54fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 06:56:16.478486 containerd[1582]: time="2025-09-10T06:56:16.478228416Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-scp8z,Uid:4b7949e4-64e9-4bb8-bda2-569ab0826025,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f51d76a88574301e01c5d25f6189bf3b1acb4fa2fe6b8e044328e4ecf554979\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 06:56:16.480491 containerd[1582]: time="2025-09-10T06:56:16.480226165Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6578f45564-7zsdk,Uid:15799f56-9106-476a-802d-a16e19297b9b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d37464c82d342d5c78565e8a8700f2e6242b6d9382562d44be5a2f525dad54fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 06:56:16.481737 kubelet[2913]: E0910 06:56:16.481677 2913 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2f51d76a88574301e01c5d25f6189bf3b1acb4fa2fe6b8e044328e4ecf554979\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 06:56:16.484258 kubelet[2913]: E0910 06:56:16.481765 2913 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f51d76a88574301e01c5d25f6189bf3b1acb4fa2fe6b8e044328e4ecf554979\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-scp8z" Sep 10 06:56:16.484258 kubelet[2913]: E0910 06:56:16.481810 2913 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f51d76a88574301e01c5d25f6189bf3b1acb4fa2fe6b8e044328e4ecf554979\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-scp8z" Sep 10 06:56:16.484258 kubelet[2913]: E0910 06:56:16.481879 2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-scp8z_kube-system(4b7949e4-64e9-4bb8-bda2-569ab0826025)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-scp8z_kube-system(4b7949e4-64e9-4bb8-bda2-569ab0826025)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f51d76a88574301e01c5d25f6189bf3b1acb4fa2fe6b8e044328e4ecf554979\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-scp8z" podUID="4b7949e4-64e9-4bb8-bda2-569ab0826025" Sep 10 06:56:16.484762 kubelet[2913]: E0910 06:56:16.482348 2913 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d37464c82d342d5c78565e8a8700f2e6242b6d9382562d44be5a2f525dad54fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 06:56:16.484762 kubelet[2913]: E0910 06:56:16.482528 2913 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d37464c82d342d5c78565e8a8700f2e6242b6d9382562d44be5a2f525dad54fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6578f45564-7zsdk" Sep 10 06:56:16.484762 kubelet[2913]: E0910 06:56:16.482677 2913 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d37464c82d342d5c78565e8a8700f2e6242b6d9382562d44be5a2f525dad54fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6578f45564-7zsdk" Sep 10 06:56:16.485312 kubelet[2913]: E0910 06:56:16.483022 2913 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6578f45564-7zsdk_calico-system(15799f56-9106-476a-802d-a16e19297b9b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6578f45564-7zsdk_calico-system(15799f56-9106-476a-802d-a16e19297b9b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d37464c82d342d5c78565e8a8700f2e6242b6d9382562d44be5a2f525dad54fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6578f45564-7zsdk" podUID="15799f56-9106-476a-802d-a16e19297b9b" Sep 10 06:56:16.519749 containerd[1582]: time="2025-09-10T06:56:16.519558319Z" level=error msg="Failed to destroy network for sandbox \"21183cc671a0ecb69ef7ae5b9e6b2e8074876a384559365afb209359f21a4e5a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 06:56:16.523451 containerd[1582]: time="2025-09-10T06:56:16.523297626Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7fbdcc55-nqblr,Uid:0b31fdb8-1f9c-4766-89d1-d243e0a8ca9b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"21183cc671a0ecb69ef7ae5b9e6b2e8074876a384559365afb209359f21a4e5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 06:56:16.524563 kubelet[2913]: E0910 06:56:16.523926 2913 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21183cc671a0ecb69ef7ae5b9e6b2e8074876a384559365afb209359f21a4e5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 06:56:16.524563 kubelet[2913]: E0910 06:56:16.524037 2913 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21183cc671a0ecb69ef7ae5b9e6b2e8074876a384559365afb209359f21a4e5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f7fbdcc55-nqblr" Sep 10 06:56:16.524563 kubelet[2913]: E0910 06:56:16.524075 2913 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21183cc671a0ecb69ef7ae5b9e6b2e8074876a384559365afb209359f21a4e5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f7fbdcc55-nqblr" Sep 10 06:56:16.525115 kubelet[2913]: E0910 06:56:16.524179 2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f7fbdcc55-nqblr_calico-apiserver(0b31fdb8-1f9c-4766-89d1-d243e0a8ca9b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f7fbdcc55-nqblr_calico-apiserver(0b31fdb8-1f9c-4766-89d1-d243e0a8ca9b)\\\": 
rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21183cc671a0ecb69ef7ae5b9e6b2e8074876a384559365afb209359f21a4e5a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f7fbdcc55-nqblr" podUID="0b31fdb8-1f9c-4766-89d1-d243e0a8ca9b" Sep 10 06:56:16.529944 containerd[1582]: time="2025-09-10T06:56:16.529776656Z" level=error msg="Failed to destroy network for sandbox \"e609f1d34bbdfc17f60447ac62e9e77f4f464676f14b5652c894b8c169538c8f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 06:56:16.531948 containerd[1582]: time="2025-09-10T06:56:16.531883101Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7fbdcc55-tw5d8,Uid:b5cb7d1c-6f0d-4131-b03d-fe53b52de241,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e609f1d34bbdfc17f60447ac62e9e77f4f464676f14b5652c894b8c169538c8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 06:56:16.532401 kubelet[2913]: E0910 06:56:16.532214 2913 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e609f1d34bbdfc17f60447ac62e9e77f4f464676f14b5652c894b8c169538c8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 06:56:16.532401 kubelet[2913]: E0910 06:56:16.532287 2913 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e609f1d34bbdfc17f60447ac62e9e77f4f464676f14b5652c894b8c169538c8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f7fbdcc55-tw5d8" Sep 10 06:56:16.532401 kubelet[2913]: E0910 06:56:16.532320 2913 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e609f1d34bbdfc17f60447ac62e9e77f4f464676f14b5652c894b8c169538c8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f7fbdcc55-tw5d8" Sep 10 06:56:16.533617 kubelet[2913]: E0910 06:56:16.532389 2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f7fbdcc55-tw5d8_calico-apiserver(b5cb7d1c-6f0d-4131-b03d-fe53b52de241)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f7fbdcc55-tw5d8_calico-apiserver(b5cb7d1c-6f0d-4131-b03d-fe53b52de241)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e609f1d34bbdfc17f60447ac62e9e77f4f464676f14b5652c894b8c169538c8f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f7fbdcc55-tw5d8" podUID="b5cb7d1c-6f0d-4131-b03d-fe53b52de241" Sep 10 06:56:16.537151 containerd[1582]: time="2025-09-10T06:56:16.536887497Z" level=error msg="Failed to destroy network for sandbox \"d1fcc88e2fdabb2af27814563a6132f9ec00e5a52edc961008fa45cf59bd8dca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 06:56:16.545839 containerd[1582]: time="2025-09-10T06:56:16.545765736Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-b94rd,Uid:dc916d2a-3aec-44aa-a1d1-e2d88a05a93b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1fcc88e2fdabb2af27814563a6132f9ec00e5a52edc961008fa45cf59bd8dca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 06:56:16.546504 kubelet[2913]: E0910 06:56:16.546165 2913 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1fcc88e2fdabb2af27814563a6132f9ec00e5a52edc961008fa45cf59bd8dca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 06:56:16.546504 kubelet[2913]: E0910 06:56:16.546329 2913 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1fcc88e2fdabb2af27814563a6132f9ec00e5a52edc961008fa45cf59bd8dca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-b94rd" Sep 10 06:56:16.546504 kubelet[2913]: E0910 06:56:16.546367 2913 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1fcc88e2fdabb2af27814563a6132f9ec00e5a52edc961008fa45cf59bd8dca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-b94rd" Sep 10 06:56:16.546700 kubelet[2913]: E0910 06:56:16.546443 2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-b94rd_calico-system(dc916d2a-3aec-44aa-a1d1-e2d88a05a93b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-b94rd_calico-system(dc916d2a-3aec-44aa-a1d1-e2d88a05a93b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1fcc88e2fdabb2af27814563a6132f9ec00e5a52edc961008fa45cf59bd8dca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-b94rd" podUID="dc916d2a-3aec-44aa-a1d1-e2d88a05a93b" Sep 10 06:56:16.555496 containerd[1582]: time="2025-09-10T06:56:16.555421007Z" level=error msg="Failed to destroy network for sandbox 
\"d8f5e76bfa510a962a34868d2082b2243307125f0f1232d78992ffd4a177fe20\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 06:56:16.556982 containerd[1582]: time="2025-09-10T06:56:16.556939485Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5986898b57-mfkdt,Uid:2ccd7dc8-b3cc-4530-a40f-ef06ec4a8e86,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8f5e76bfa510a962a34868d2082b2243307125f0f1232d78992ffd4a177fe20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 06:56:16.557623 kubelet[2913]: E0910 06:56:16.557350 2913 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8f5e76bfa510a962a34868d2082b2243307125f0f1232d78992ffd4a177fe20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 06:56:16.557623 kubelet[2913]: E0910 06:56:16.557441 2913 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8f5e76bfa510a962a34868d2082b2243307125f0f1232d78992ffd4a177fe20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5986898b57-mfkdt" Sep 10 06:56:16.557623 kubelet[2913]: E0910 06:56:16.557484 2913 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8f5e76bfa510a962a34868d2082b2243307125f0f1232d78992ffd4a177fe20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5986898b57-mfkdt" Sep 10 06:56:16.557840 kubelet[2913]: E0910 06:56:16.557560 2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5986898b57-mfkdt_calico-system(2ccd7dc8-b3cc-4530-a40f-ef06ec4a8e86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5986898b57-mfkdt_calico-system(2ccd7dc8-b3cc-4530-a40f-ef06ec4a8e86)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8f5e76bfa510a962a34868d2082b2243307125f0f1232d78992ffd4a177fe20\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5986898b57-mfkdt" podUID="2ccd7dc8-b3cc-4530-a40f-ef06ec4a8e86" Sep 10 06:56:26.921660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2640967178.mount: Deactivated successfully. 
Sep 10 06:56:26.997218 containerd[1582]: time="2025-09-10T06:56:26.983454491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:27.005450 containerd[1582]: time="2025-09-10T06:56:27.005357344Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 10 06:56:27.021884 containerd[1582]: time="2025-09-10T06:56:27.021748136Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:27.024149 containerd[1582]: time="2025-09-10T06:56:27.023439337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:27.028275 containerd[1582]: time="2025-09-10T06:56:27.028115026Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 10.935234387s" Sep 10 06:56:27.028275 containerd[1582]: time="2025-09-10T06:56:27.028232832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 10 06:56:27.066940 containerd[1582]: time="2025-09-10T06:56:27.066570761Z" level=info msg="CreateContainer within sandbox \"d1fe8d0e1ae42c152bb6a729983a55b76e3cc7b658747305a40e748cda90901d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 10 06:56:27.176478 containerd[1582]: time="2025-09-10T06:56:27.175901805Z" level=info msg="Container 2d1244ea01d514cae452433027cc3680b021b77d3156d4dd8d0dd109af62f223: CDI devices from CRI Config.CDIDevices: []" Sep 10 06:56:27.181507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4073597235.mount: Deactivated successfully. Sep 10 06:56:27.275846 containerd[1582]: time="2025-09-10T06:56:27.275705327Z" level=info msg="CreateContainer within sandbox \"d1fe8d0e1ae42c152bb6a729983a55b76e3cc7b658747305a40e748cda90901d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2d1244ea01d514cae452433027cc3680b021b77d3156d4dd8d0dd109af62f223\"" Sep 10 06:56:27.278321 containerd[1582]: time="2025-09-10T06:56:27.277776609Z" level=info msg="StartContainer for \"2d1244ea01d514cae452433027cc3680b021b77d3156d4dd8d0dd109af62f223\"" Sep 10 06:56:27.305953 containerd[1582]: time="2025-09-10T06:56:27.305869253Z" level=info msg="connecting to shim 2d1244ea01d514cae452433027cc3680b021b77d3156d4dd8d0dd109af62f223" address="unix:///run/containerd/s/d74186429200248dc0a7bfcbcfd02e16293758b9e301b00122f07fd69c1a0ea2" protocol=ttrpc version=3 Sep 10 06:56:27.444573 systemd[1]: Started cri-containerd-2d1244ea01d514cae452433027cc3680b021b77d3156d4dd8d0dd109af62f223.scope - libcontainer container 2d1244ea01d514cae452433027cc3680b021b77d3156d4dd8d0dd109af62f223. 
Sep 10 06:56:27.637966 containerd[1582]: time="2025-09-10T06:56:27.637738344Z" level=info msg="StartContainer for \"2d1244ea01d514cae452433027cc3680b021b77d3156d4dd8d0dd109af62f223\" returns successfully" Sep 10 06:56:27.737131 containerd[1582]: time="2025-09-10T06:56:27.736958763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5986898b57-mfkdt,Uid:2ccd7dc8-b3cc-4530-a40f-ef06ec4a8e86,Namespace:calico-system,Attempt:0,}" Sep 10 06:56:27.738050 containerd[1582]: time="2025-09-10T06:56:27.738007973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6578f45564-7zsdk,Uid:15799f56-9106-476a-802d-a16e19297b9b,Namespace:calico-system,Attempt:0,}" Sep 10 06:56:27.865786 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 10 06:56:27.867275 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 10 06:56:27.940225 containerd[1582]: time="2025-09-10T06:56:27.940107521Z" level=error msg="Failed to destroy network for sandbox \"efbf0c8d2e260a00a689c826487bdde1f33b930b6d258ae0e894acbdb5a24afe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 06:56:27.944987 systemd[1]: run-netns-cni\x2d826ae6d7\x2d633c\x2d47b1\x2d1223\x2d330de09483b8.mount: Deactivated successfully. Sep 10 06:56:27.961226 containerd[1582]: time="2025-09-10T06:56:27.961111438Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5986898b57-mfkdt,Uid:2ccd7dc8-b3cc-4530-a40f-ef06ec4a8e86,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"efbf0c8d2e260a00a689c826487bdde1f33b930b6d258ae0e894acbdb5a24afe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 06:56:27.962671 kubelet[2913]: E0910 06:56:27.962447 2913 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efbf0c8d2e260a00a689c826487bdde1f33b930b6d258ae0e894acbdb5a24afe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 06:56:27.964634 kubelet[2913]: E0910 06:56:27.964376 2913 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efbf0c8d2e260a00a689c826487bdde1f33b930b6d258ae0e894acbdb5a24afe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5986898b57-mfkdt" Sep 10 06:56:27.965729 kubelet[2913]: E0910 06:56:27.964443 2913 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efbf0c8d2e260a00a689c826487bdde1f33b930b6d258ae0e894acbdb5a24afe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5986898b57-mfkdt" Sep 10 06:56:27.968295 kubelet[2913]: E0910 
06:56:27.966269 2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5986898b57-mfkdt_calico-system(2ccd7dc8-b3cc-4530-a40f-ef06ec4a8e86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5986898b57-mfkdt_calico-system(2ccd7dc8-b3cc-4530-a40f-ef06ec4a8e86)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"efbf0c8d2e260a00a689c826487bdde1f33b930b6d258ae0e894acbdb5a24afe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5986898b57-mfkdt" podUID="2ccd7dc8-b3cc-4530-a40f-ef06ec4a8e86" Sep 10 06:56:27.978468 containerd[1582]: time="2025-09-10T06:56:27.978382529Z" level=error msg="Failed to destroy network for sandbox \"5cd34a95e8408cb72e682334ec10b7bad5994fa76da434da1a20e29170aa51c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 06:56:27.982271 containerd[1582]: time="2025-09-10T06:56:27.982019292Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6578f45564-7zsdk,Uid:15799f56-9106-476a-802d-a16e19297b9b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cd34a95e8408cb72e682334ec10b7bad5994fa76da434da1a20e29170aa51c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 06:56:27.983086 systemd[1]: run-netns-cni\x2d56189230\x2dd669\x2da12d\x2d9fe5\x2dc5b3dcc7c5e0.mount: Deactivated successfully. 
Sep 10 06:56:27.984599 kubelet[2913]: E0910 06:56:27.983676 2913 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cd34a95e8408cb72e682334ec10b7bad5994fa76da434da1a20e29170aa51c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 06:56:27.984599 kubelet[2913]: E0910 06:56:27.983809 2913 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cd34a95e8408cb72e682334ec10b7bad5994fa76da434da1a20e29170aa51c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6578f45564-7zsdk" Sep 10 06:56:27.984599 kubelet[2913]: E0910 06:56:27.984397 2913 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cd34a95e8408cb72e682334ec10b7bad5994fa76da434da1a20e29170aa51c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6578f45564-7zsdk" Sep 10 06:56:27.985719 kubelet[2913]: E0910 06:56:27.984842 2913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6578f45564-7zsdk_calico-system(15799f56-9106-476a-802d-a16e19297b9b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6578f45564-7zsdk_calico-system(15799f56-9106-476a-802d-a16e19297b9b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5cd34a95e8408cb72e682334ec10b7bad5994fa76da434da1a20e29170aa51c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6578f45564-7zsdk" podUID="15799f56-9106-476a-802d-a16e19297b9b" Sep 10 06:56:28.177399 kubelet[2913]: I0910 06:56:28.176564 2913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pf8p7" podStartSLOduration=3.104428528 podStartE2EDuration="27.176531244s" podCreationTimestamp="2025-09-10 06:56:01 +0000 UTC" firstStartedPulling="2025-09-10 06:56:02.962579697 +0000 UTC m=+27.496633093" lastFinishedPulling="2025-09-10 06:56:27.034682416 +0000 UTC m=+51.568735809" observedRunningTime="2025-09-10 06:56:28.174083248 +0000 UTC m=+52.708136660" watchObservedRunningTime="2025-09-10 06:56:28.176531244 +0000 UTC m=+52.710584635" Sep 10 06:56:28.312971 kubelet[2913]: I0910 06:56:28.312658 2913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15799f56-9106-476a-802d-a16e19297b9b-whisker-ca-bundle\") pod \"15799f56-9106-476a-802d-a16e19297b9b\" (UID: \"15799f56-9106-476a-802d-a16e19297b9b\") " Sep 10 06:56:28.312971 kubelet[2913]: I0910 06:56:28.312725 2913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs49g\" (UniqueName: \"kubernetes.io/projected/15799f56-9106-476a-802d-a16e19297b9b-kube-api-access-rs49g\") pod \"15799f56-9106-476a-802d-a16e19297b9b\" (UID: 
\"15799f56-9106-476a-802d-a16e19297b9b\") " Sep 10 06:56:28.312971 kubelet[2913]: I0910 06:56:28.312783 2913 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/15799f56-9106-476a-802d-a16e19297b9b-whisker-backend-key-pair\") pod \"15799f56-9106-476a-802d-a16e19297b9b\" (UID: \"15799f56-9106-476a-802d-a16e19297b9b\") " Sep 10 06:56:28.319910 kubelet[2913]: I0910 06:56:28.319803 2913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15799f56-9106-476a-802d-a16e19297b9b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "15799f56-9106-476a-802d-a16e19297b9b" (UID: "15799f56-9106-476a-802d-a16e19297b9b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 10 06:56:28.324400 kubelet[2913]: I0910 06:56:28.324359 2913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15799f56-9106-476a-802d-a16e19297b9b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "15799f56-9106-476a-802d-a16e19297b9b" (UID: "15799f56-9106-476a-802d-a16e19297b9b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 10 06:56:28.327096 kubelet[2913]: I0910 06:56:28.326490 2913 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15799f56-9106-476a-802d-a16e19297b9b-kube-api-access-rs49g" (OuterVolumeSpecName: "kube-api-access-rs49g") pod "15799f56-9106-476a-802d-a16e19297b9b" (UID: "15799f56-9106-476a-802d-a16e19297b9b"). InnerVolumeSpecName "kube-api-access-rs49g". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 06:56:28.327304 systemd[1]: var-lib-kubelet-pods-15799f56\x2d9106\x2d476a\x2d802d\x2da16e19297b9b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drs49g.mount: Deactivated successfully. Sep 10 06:56:28.327507 systemd[1]: var-lib-kubelet-pods-15799f56\x2d9106\x2d476a\x2d802d\x2da16e19297b9b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 10 06:56:28.414089 kubelet[2913]: I0910 06:56:28.414032 2913 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/15799f56-9106-476a-802d-a16e19297b9b-whisker-backend-key-pair\") on node \"srv-fpwqg.gb1.brightbox.com\" DevicePath \"\"" Sep 10 06:56:28.414089 kubelet[2913]: I0910 06:56:28.414080 2913 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15799f56-9106-476a-802d-a16e19297b9b-whisker-ca-bundle\") on node \"srv-fpwqg.gb1.brightbox.com\" DevicePath \"\"" Sep 10 06:56:28.414391 kubelet[2913]: I0910 06:56:28.414098 2913 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rs49g\" (UniqueName: \"kubernetes.io/projected/15799f56-9106-476a-802d-a16e19297b9b-kube-api-access-rs49g\") on node \"srv-fpwqg.gb1.brightbox.com\" DevicePath \"\"" Sep 10 06:56:28.517174 containerd[1582]: time="2025-09-10T06:56:28.517083682Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d1244ea01d514cae452433027cc3680b021b77d3156d4dd8d0dd109af62f223\" id:\"243da1846e46598d3ea9cda3ceee543376c01029113b3454f8adbfd3c9e8168f\" pid:3988 exit_status:1 exited_at:{seconds:1757487388 nanos:506751778}" Sep 10 06:56:28.731745 containerd[1582]: time="2025-09-10T06:56:28.731536989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-b94rd,Uid:dc916d2a-3aec-44aa-a1d1-e2d88a05a93b,Namespace:calico-system,Attempt:0,}" Sep 10 06:56:29.156920 systemd[1]: Removed slice kubepods-besteffort-pod15799f56_9106_476a_802d_a16e19297b9b.slice - libcontainer container kubepods-besteffort-pod15799f56_9106_476a_802d_a16e19297b9b.slice. Sep 10 06:56:29.251427 systemd-networkd[1510]: calic9a3f4ada76: Link UP Sep 10 06:56:29.253045 systemd-networkd[1510]: calic9a3f4ada76: Gained carrier Sep 10 06:56:29.309286 containerd[1582]: 2025-09-10 06:56:28.813 [INFO][4010] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 06:56:29.309286 containerd[1582]: 2025-09-10 06:56:28.864 [INFO][4010] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--fpwqg.gb1.brightbox.com-k8s-goldmane--54d579b49d--b94rd-eth0 goldmane-54d579b49d- calico-system dc916d2a-3aec-44aa-a1d1-e2d88a05a93b 855 0 2025-09-10 06:56:01 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-fpwqg.gb1.brightbox.com goldmane-54d579b49d-b94rd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic9a3f4ada76 [] [] }} ContainerID="5db296425b5f271db778925fa49f27622b4b21e06775ff7b639f24acd1345f9e" Namespace="calico-system" Pod="goldmane-54d579b49d-b94rd" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-goldmane--54d579b49d--b94rd-" Sep 10 06:56:29.309286 containerd[1582]: 2025-09-10 06:56:28.864 [INFO][4010] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5db296425b5f271db778925fa49f27622b4b21e06775ff7b639f24acd1345f9e" Namespace="calico-system" Pod="goldmane-54d579b49d-b94rd" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-goldmane--54d579b49d--b94rd-eth0" Sep 10 06:56:29.309286 containerd[1582]: 2025-09-10 06:56:29.087 [INFO][4026] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5db296425b5f271db778925fa49f27622b4b21e06775ff7b639f24acd1345f9e" 
HandleID="k8s-pod-network.5db296425b5f271db778925fa49f27622b4b21e06775ff7b639f24acd1345f9e" Workload="srv--fpwqg.gb1.brightbox.com-k8s-goldmane--54d579b49d--b94rd-eth0" Sep 10 06:56:29.312209 containerd[1582]: 2025-09-10 06:56:29.090 [INFO][4026] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5db296425b5f271db778925fa49f27622b4b21e06775ff7b639f24acd1345f9e" HandleID="k8s-pod-network.5db296425b5f271db778925fa49f27622b4b21e06775ff7b639f24acd1345f9e" Workload="srv--fpwqg.gb1.brightbox.com-k8s-goldmane--54d579b49d--b94rd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f9450), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-fpwqg.gb1.brightbox.com", "pod":"goldmane-54d579b49d-b94rd", "timestamp":"2025-09-10 06:56:29.087429953 +0000 UTC"}, Hostname:"srv-fpwqg.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 06:56:29.312209 containerd[1582]: 2025-09-10 06:56:29.093 [INFO][4026] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 06:56:29.312209 containerd[1582]: 2025-09-10 06:56:29.097 [INFO][4026] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 06:56:29.312209 containerd[1582]: 2025-09-10 06:56:29.097 [INFO][4026] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-fpwqg.gb1.brightbox.com' Sep 10 06:56:29.312209 containerd[1582]: 2025-09-10 06:56:29.120 [INFO][4026] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5db296425b5f271db778925fa49f27622b4b21e06775ff7b639f24acd1345f9e" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:29.312209 containerd[1582]: 2025-09-10 06:56:29.133 [INFO][4026] ipam/ipam.go 394: Looking up existing affinities for host host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:29.312209 containerd[1582]: 2025-09-10 06:56:29.140 [INFO][4026] ipam/ipam.go 511: Trying affinity for 192.168.79.192/26 host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:29.312209 containerd[1582]: 2025-09-10 06:56:29.143 [INFO][4026] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.192/26 host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:29.312209 containerd[1582]: 2025-09-10 06:56:29.152 [INFO][4026] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.192/26 host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:29.313962 containerd[1582]: 2025-09-10 06:56:29.152 [INFO][4026] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.79.192/26 handle="k8s-pod-network.5db296425b5f271db778925fa49f27622b4b21e06775ff7b639f24acd1345f9e" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:29.313962 containerd[1582]: 2025-09-10 06:56:29.164 [INFO][4026] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5db296425b5f271db778925fa49f27622b4b21e06775ff7b639f24acd1345f9e Sep 10 06:56:29.313962 containerd[1582]: 2025-09-10 06:56:29.183 [INFO][4026] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.79.192/26 handle="k8s-pod-network.5db296425b5f271db778925fa49f27622b4b21e06775ff7b639f24acd1345f9e" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:29.313962 containerd[1582]: 2025-09-10 06:56:29.209 [INFO][4026] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.79.193/26] block=192.168.79.192/26 handle="k8s-pod-network.5db296425b5f271db778925fa49f27622b4b21e06775ff7b639f24acd1345f9e" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:29.313962 
containerd[1582]: 2025-09-10 06:56:29.210 [INFO][4026] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.193/26] handle="k8s-pod-network.5db296425b5f271db778925fa49f27622b4b21e06775ff7b639f24acd1345f9e" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:29.313962 containerd[1582]: 2025-09-10 06:56:29.213 [INFO][4026] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 06:56:29.313962 containerd[1582]: 2025-09-10 06:56:29.213 [INFO][4026] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.193/26] IPv6=[] ContainerID="5db296425b5f271db778925fa49f27622b4b21e06775ff7b639f24acd1345f9e" HandleID="k8s-pod-network.5db296425b5f271db778925fa49f27622b4b21e06775ff7b639f24acd1345f9e" Workload="srv--fpwqg.gb1.brightbox.com-k8s-goldmane--54d579b49d--b94rd-eth0" Sep 10 06:56:29.314294 containerd[1582]: 2025-09-10 06:56:29.219 [INFO][4010] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5db296425b5f271db778925fa49f27622b4b21e06775ff7b639f24acd1345f9e" Namespace="calico-system" Pod="goldmane-54d579b49d-b94rd" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-goldmane--54d579b49d--b94rd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--fpwqg.gb1.brightbox.com-k8s-goldmane--54d579b49d--b94rd-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"dc916d2a-3aec-44aa-a1d1-e2d88a05a93b", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 6, 56, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-fpwqg.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-54d579b49d-b94rd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.79.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic9a3f4ada76", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 06:56:29.314404 containerd[1582]: 2025-09-10 06:56:29.220 [INFO][4010] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.193/32] ContainerID="5db296425b5f271db778925fa49f27622b4b21e06775ff7b639f24acd1345f9e" Namespace="calico-system" Pod="goldmane-54d579b49d-b94rd" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-goldmane--54d579b49d--b94rd-eth0" Sep 10 06:56:29.314404 containerd[1582]: 2025-09-10 06:56:29.220 [INFO][4010] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9a3f4ada76 ContainerID="5db296425b5f271db778925fa49f27622b4b21e06775ff7b639f24acd1345f9e" Namespace="calico-system" Pod="goldmane-54d579b49d-b94rd" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-goldmane--54d579b49d--b94rd-eth0" Sep 10 06:56:29.314404 containerd[1582]: 2025-09-10 06:56:29.252 [INFO][4010] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5db296425b5f271db778925fa49f27622b4b21e06775ff7b639f24acd1345f9e" 
Namespace="calico-system" Pod="goldmane-54d579b49d-b94rd" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-goldmane--54d579b49d--b94rd-eth0" Sep 10 06:56:29.314506 containerd[1582]: 2025-09-10 06:56:29.258 [INFO][4010] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5db296425b5f271db778925fa49f27622b4b21e06775ff7b639f24acd1345f9e" Namespace="calico-system" Pod="goldmane-54d579b49d-b94rd" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-goldmane--54d579b49d--b94rd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--fpwqg.gb1.brightbox.com-k8s-goldmane--54d579b49d--b94rd-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"dc916d2a-3aec-44aa-a1d1-e2d88a05a93b", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 6, 56, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-fpwqg.gb1.brightbox.com", ContainerID:"5db296425b5f271db778925fa49f27622b4b21e06775ff7b639f24acd1345f9e", Pod:"goldmane-54d579b49d-b94rd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.79.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic9a3f4ada76", MAC:"76:ce:14:76:8c:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 06:56:29.314604 containerd[1582]: 2025-09-10 06:56:29.302 [INFO][4010] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5db296425b5f271db778925fa49f27622b4b21e06775ff7b639f24acd1345f9e" Namespace="calico-system" Pod="goldmane-54d579b49d-b94rd" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-goldmane--54d579b49d--b94rd-eth0" Sep 10 06:56:29.378308 systemd[1]: Created slice kubepods-besteffort-pod94eee6d9_4326_42ea_bdcb_f15f04919434.slice - libcontainer container kubepods-besteffort-pod94eee6d9_4326_42ea_bdcb_f15f04919434.slice. Sep 10 06:56:29.428316 containerd[1582]: time="2025-09-10T06:56:29.426886678Z" level=info msg="connecting to shim 5db296425b5f271db778925fa49f27622b4b21e06775ff7b639f24acd1345f9e" address="unix:///run/containerd/s/79ed34fa78aad084f8c29457c0cbbcfaed0a409a6387457c9544b378d4e85f39" namespace=k8s.io protocol=ttrpc version=3 Sep 10 06:56:29.523077 systemd[1]: Started cri-containerd-5db296425b5f271db778925fa49f27622b4b21e06775ff7b639f24acd1345f9e.scope - libcontainer container 5db296425b5f271db778925fa49f27622b4b21e06775ff7b639f24acd1345f9e. 
Sep 10 06:56:29.529261 kubelet[2913]: I0910 06:56:29.527081 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/94eee6d9-4326-42ea-bdcb-f15f04919434-whisker-ca-bundle\") pod \"whisker-6bb584cc4c-ptsbs\" (UID: \"94eee6d9-4326-42ea-bdcb-f15f04919434\") " pod="calico-system/whisker-6bb584cc4c-ptsbs" Sep 10 06:56:29.529261 kubelet[2913]: I0910 06:56:29.527156 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt98b\" (UniqueName: \"kubernetes.io/projected/94eee6d9-4326-42ea-bdcb-f15f04919434-kube-api-access-qt98b\") pod \"whisker-6bb584cc4c-ptsbs\" (UID: \"94eee6d9-4326-42ea-bdcb-f15f04919434\") " pod="calico-system/whisker-6bb584cc4c-ptsbs" Sep 10 06:56:29.530555 kubelet[2913]: I0910 06:56:29.529953 2913 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/94eee6d9-4326-42ea-bdcb-f15f04919434-whisker-backend-key-pair\") pod \"whisker-6bb584cc4c-ptsbs\" (UID: \"94eee6d9-4326-42ea-bdcb-f15f04919434\") " pod="calico-system/whisker-6bb584cc4c-ptsbs" Sep 10 06:56:29.699953 containerd[1582]: time="2025-09-10T06:56:29.699670845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bb584cc4c-ptsbs,Uid:94eee6d9-4326-42ea-bdcb-f15f04919434,Namespace:calico-system,Attempt:0,}" Sep 10 06:56:29.738211 containerd[1582]: time="2025-09-10T06:56:29.737889580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dx6ll,Uid:5783b913-fad6-49b5-a875-5706a6c8ba35,Namespace:kube-system,Attempt:0,}" Sep 10 06:56:29.739678 containerd[1582]: time="2025-09-10T06:56:29.739591621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-scp8z,Uid:4b7949e4-64e9-4bb8-bda2-569ab0826025,Namespace:kube-system,Attempt:0,}" Sep 10 06:56:29.740931 kubelet[2913]: I0910 06:56:29.740888 2913 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15799f56-9106-476a-802d-a16e19297b9b" path="/var/lib/kubelet/pods/15799f56-9106-476a-802d-a16e19297b9b/volumes" Sep 10 06:56:29.805594 containerd[1582]: time="2025-09-10T06:56:29.805453429Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d1244ea01d514cae452433027cc3680b021b77d3156d4dd8d0dd109af62f223\" id:\"3c522c5859ec28555c44459f6b582f732c017ff2ef91ad87dbd28d7595a7f6c8\" pid:4050 exit_status:1 exited_at:{seconds:1757487389 nanos:804751148}" Sep 10 06:56:29.860219 containerd[1582]: time="2025-09-10T06:56:29.860095436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-b94rd,Uid:dc916d2a-3aec-44aa-a1d1-e2d88a05a93b,Namespace:calico-system,Attempt:0,} returns sandbox id \"5db296425b5f271db778925fa49f27622b4b21e06775ff7b639f24acd1345f9e\"" Sep 10 06:56:29.868214 containerd[1582]: time="2025-09-10T06:56:29.868040174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 10 06:56:30.233057 systemd-networkd[1510]: cali89465a2298e: Link UP Sep 10 06:56:30.234060 systemd-networkd[1510]: cali89465a2298e: Gained carrier Sep 10 06:56:30.272855 containerd[1582]: 2025-09-10 06:56:29.832 [INFO][4128] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 06:56:30.272855 containerd[1582]: 2025-09-10 06:56:29.871 [INFO][4128] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{srv--fpwqg.gb1.brightbox.com-k8s-coredns--668d6bf9bc--scp8z-eth0 coredns-668d6bf9bc- kube-system 4b7949e4-64e9-4bb8-bda2-569ab0826025 852 0 2025-09-10 06:55:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-fpwqg.gb1.brightbox.com coredns-668d6bf9bc-scp8z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali89465a2298e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="052a8b691dddf14f8315b46792daf1d88d73bed42905dba320c9c6c3993c682f" Namespace="kube-system" Pod="coredns-668d6bf9bc-scp8z" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-coredns--668d6bf9bc--scp8z-" Sep 10 06:56:30.272855 containerd[1582]: 2025-09-10 06:56:29.871 [INFO][4128] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="052a8b691dddf14f8315b46792daf1d88d73bed42905dba320c9c6c3993c682f" Namespace="kube-system" Pod="coredns-668d6bf9bc-scp8z" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-coredns--668d6bf9bc--scp8z-eth0" Sep 10 06:56:30.272855 containerd[1582]: 2025-09-10 06:56:30.091 [INFO][4149] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="052a8b691dddf14f8315b46792daf1d88d73bed42905dba320c9c6c3993c682f" HandleID="k8s-pod-network.052a8b691dddf14f8315b46792daf1d88d73bed42905dba320c9c6c3993c682f" Workload="srv--fpwqg.gb1.brightbox.com-k8s-coredns--668d6bf9bc--scp8z-eth0" Sep 10 06:56:30.273329 containerd[1582]: 2025-09-10 06:56:30.095 [INFO][4149] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="052a8b691dddf14f8315b46792daf1d88d73bed42905dba320c9c6c3993c682f" HandleID="k8s-pod-network.052a8b691dddf14f8315b46792daf1d88d73bed42905dba320c9c6c3993c682f" Workload="srv--fpwqg.gb1.brightbox.com-k8s-coredns--668d6bf9bc--scp8z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000322ae0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-fpwqg.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-scp8z", "timestamp":"2025-09-10 06:56:30.089299543 +0000 UTC"}, Hostname:"srv-fpwqg.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 06:56:30.273329 containerd[1582]: 2025-09-10 06:56:30.096 [INFO][4149] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 06:56:30.273329 containerd[1582]: 2025-09-10 06:56:30.096 [INFO][4149] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 06:56:30.273329 containerd[1582]: 2025-09-10 06:56:30.096 [INFO][4149] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-fpwqg.gb1.brightbox.com' Sep 10 06:56:30.273329 containerd[1582]: 2025-09-10 06:56:30.119 [INFO][4149] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.052a8b691dddf14f8315b46792daf1d88d73bed42905dba320c9c6c3993c682f" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:30.273329 containerd[1582]: 2025-09-10 06:56:30.137 [INFO][4149] ipam/ipam.go 394: Looking up existing affinities for host host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:30.273329 containerd[1582]: 2025-09-10 06:56:30.155 [INFO][4149] ipam/ipam.go 511: Trying affinity for 192.168.79.192/26 host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:30.273329 containerd[1582]: 2025-09-10 06:56:30.164 [INFO][4149] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.192/26 host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:30.273329 containerd[1582]: 2025-09-10 06:56:30.174 [INFO][4149] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.192/26 host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:30.276274 containerd[1582]: 2025-09-10 06:56:30.175 [INFO][4149] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.79.192/26 handle="k8s-pod-network.052a8b691dddf14f8315b46792daf1d88d73bed42905dba320c9c6c3993c682f" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:30.276274 containerd[1582]: 2025-09-10 06:56:30.182 [INFO][4149] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.052a8b691dddf14f8315b46792daf1d88d73bed42905dba320c9c6c3993c682f Sep 10 06:56:30.276274 containerd[1582]: 2025-09-10 06:56:30.193 [INFO][4149] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.79.192/26 handle="k8s-pod-network.052a8b691dddf14f8315b46792daf1d88d73bed42905dba320c9c6c3993c682f" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:30.276274 containerd[1582]: 2025-09-10 06:56:30.211 [INFO][4149] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.79.194/26] block=192.168.79.192/26 handle="k8s-pod-network.052a8b691dddf14f8315b46792daf1d88d73bed42905dba320c9c6c3993c682f" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:30.276274 containerd[1582]: 2025-09-10 06:56:30.211 [INFO][4149] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.194/26] handle="k8s-pod-network.052a8b691dddf14f8315b46792daf1d88d73bed42905dba320c9c6c3993c682f" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:30.276274 containerd[1582]: 2025-09-10 06:56:30.212 [INFO][4149] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 06:56:30.276274 containerd[1582]: 2025-09-10 06:56:30.212 [INFO][4149] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.194/26] IPv6=[] ContainerID="052a8b691dddf14f8315b46792daf1d88d73bed42905dba320c9c6c3993c682f" HandleID="k8s-pod-network.052a8b691dddf14f8315b46792daf1d88d73bed42905dba320c9c6c3993c682f" Workload="srv--fpwqg.gb1.brightbox.com-k8s-coredns--668d6bf9bc--scp8z-eth0" Sep 10 06:56:30.276577 containerd[1582]: 2025-09-10 06:56:30.227 [INFO][4128] cni-plugin/k8s.go 418: Populated endpoint ContainerID="052a8b691dddf14f8315b46792daf1d88d73bed42905dba320c9c6c3993c682f" Namespace="kube-system" Pod="coredns-668d6bf9bc-scp8z" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-coredns--668d6bf9bc--scp8z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--fpwqg.gb1.brightbox.com-k8s-coredns--668d6bf9bc--scp8z-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4b7949e4-64e9-4bb8-bda2-569ab0826025", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 6, 55, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-fpwqg.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-scp8z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali89465a2298e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 06:56:30.276577 containerd[1582]: 2025-09-10 06:56:30.227 [INFO][4128] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.194/32] ContainerID="052a8b691dddf14f8315b46792daf1d88d73bed42905dba320c9c6c3993c682f" Namespace="kube-system" Pod="coredns-668d6bf9bc-scp8z" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-coredns--668d6bf9bc--scp8z-eth0" Sep 10 06:56:30.276577 containerd[1582]: 2025-09-10 06:56:30.227 [INFO][4128] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89465a2298e ContainerID="052a8b691dddf14f8315b46792daf1d88d73bed42905dba320c9c6c3993c682f" Namespace="kube-system" Pod="coredns-668d6bf9bc-scp8z" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-coredns--668d6bf9bc--scp8z-eth0" Sep 10 06:56:30.276577 containerd[1582]: 2025-09-10 06:56:30.231 [INFO][4128] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="052a8b691dddf14f8315b46792daf1d88d73bed42905dba320c9c6c3993c682f" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-scp8z" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-coredns--668d6bf9bc--scp8z-eth0" Sep 10 06:56:30.276577 containerd[1582]: 2025-09-10 06:56:30.232 [INFO][4128] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="052a8b691dddf14f8315b46792daf1d88d73bed42905dba320c9c6c3993c682f" Namespace="kube-system" Pod="coredns-668d6bf9bc-scp8z" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-coredns--668d6bf9bc--scp8z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--fpwqg.gb1.brightbox.com-k8s-coredns--668d6bf9bc--scp8z-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4b7949e4-64e9-4bb8-bda2-569ab0826025", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 6, 55, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-fpwqg.gb1.brightbox.com", ContainerID:"052a8b691dddf14f8315b46792daf1d88d73bed42905dba320c9c6c3993c682f", Pod:"coredns-668d6bf9bc-scp8z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali89465a2298e", MAC:"3e:5a:33:58:83:5c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 06:56:30.276577 containerd[1582]: 2025-09-10 06:56:30.266 [INFO][4128] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="052a8b691dddf14f8315b46792daf1d88d73bed42905dba320c9c6c3993c682f" Namespace="kube-system" Pod="coredns-668d6bf9bc-scp8z" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-coredns--668d6bf9bc--scp8z-eth0" Sep 10 06:56:30.354241 containerd[1582]: time="2025-09-10T06:56:30.354040314Z" level=info msg="connecting to shim 052a8b691dddf14f8315b46792daf1d88d73bed42905dba320c9c6c3993c682f" address="unix:///run/containerd/s/07974f2c227280fa0d060fb305dad92c9ec3bc3a0e8ae30afb67c93728e2bdcf" namespace=k8s.io protocol=ttrpc version=3 Sep 10 06:56:30.377094 systemd-networkd[1510]: cali4c32b0a6927: Link UP Sep 10 06:56:30.378353 systemd-networkd[1510]: cali4c32b0a6927: Gained carrier Sep 10 06:56:30.445214 containerd[1582]: 2025-09-10 06:56:29.848 [INFO][4109] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 06:56:30.445214 containerd[1582]: 2025-09-10 06:56:29.900 [INFO][4109] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{srv--fpwqg.gb1.brightbox.com-k8s-whisker--6bb584cc4c--ptsbs-eth0 whisker-6bb584cc4c- calico-system 94eee6d9-4326-42ea-bdcb-f15f04919434 936 0 2025-09-10 06:56:29 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6bb584cc4c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-fpwqg.gb1.brightbox.com whisker-6bb584cc4c-ptsbs eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4c32b0a6927 [] [] }} ContainerID="8a7acaf483a02b09406d9c2ecbe43dee7da3538ced619672fcc2535258edd2e7" Namespace="calico-system" Pod="whisker-6bb584cc4c-ptsbs" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-whisker--6bb584cc4c--ptsbs-" Sep 10 06:56:30.445214 containerd[1582]: 2025-09-10 06:56:29.900 [INFO][4109] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a7acaf483a02b09406d9c2ecbe43dee7da3538ced619672fcc2535258edd2e7" Namespace="calico-system" Pod="whisker-6bb584cc4c-ptsbs" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-whisker--6bb584cc4c--ptsbs-eth0" Sep 10 06:56:30.445214 containerd[1582]: 2025-09-10 06:56:30.101 [INFO][4156] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a7acaf483a02b09406d9c2ecbe43dee7da3538ced619672fcc2535258edd2e7" HandleID="k8s-pod-network.8a7acaf483a02b09406d9c2ecbe43dee7da3538ced619672fcc2535258edd2e7" Workload="srv--fpwqg.gb1.brightbox.com-k8s-whisker--6bb584cc4c--ptsbs-eth0" Sep 10 06:56:30.445214 containerd[1582]: 2025-09-10 06:56:30.105 [INFO][4156] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8a7acaf483a02b09406d9c2ecbe43dee7da3538ced619672fcc2535258edd2e7" HandleID="k8s-pod-network.8a7acaf483a02b09406d9c2ecbe43dee7da3538ced619672fcc2535258edd2e7" Workload="srv--fpwqg.gb1.brightbox.com-k8s-whisker--6bb584cc4c--ptsbs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004faa0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-fpwqg.gb1.brightbox.com", "pod":"whisker-6bb584cc4c-ptsbs", "timestamp":"2025-09-10 06:56:30.101812752 +0000 UTC"}, Hostname:"srv-fpwqg.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 06:56:30.445214 containerd[1582]: 2025-09-10 06:56:30.106 [INFO][4156] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 06:56:30.445214 containerd[1582]: 2025-09-10 06:56:30.211 [INFO][4156] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 06:56:30.445214 containerd[1582]: 2025-09-10 06:56:30.212 [INFO][4156] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-fpwqg.gb1.brightbox.com' Sep 10 06:56:30.445214 containerd[1582]: 2025-09-10 06:56:30.232 [INFO][4156] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8a7acaf483a02b09406d9c2ecbe43dee7da3538ced619672fcc2535258edd2e7" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:30.445214 containerd[1582]: 2025-09-10 06:56:30.257 [INFO][4156] ipam/ipam.go 394: Looking up existing affinities for host host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:30.445214 containerd[1582]: 2025-09-10 06:56:30.288 [INFO][4156] ipam/ipam.go 511: Trying affinity for 192.168.79.192/26 host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:30.445214 containerd[1582]: 2025-09-10 06:56:30.296 [INFO][4156] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.192/26 host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:30.445214 containerd[1582]: 2025-09-10 06:56:30.302 [INFO][4156] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.192/26 host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:30.445214 containerd[1582]: 2025-09-10 06:56:30.303 [INFO][4156] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.79.192/26 handle="k8s-pod-network.8a7acaf483a02b09406d9c2ecbe43dee7da3538ced619672fcc2535258edd2e7" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:30.445214 containerd[1582]: 2025-09-10 06:56:30.318 [INFO][4156] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8a7acaf483a02b09406d9c2ecbe43dee7da3538ced619672fcc2535258edd2e7 Sep 10 06:56:30.445214 containerd[1582]: 2025-09-10 06:56:30.333 [INFO][4156] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.79.192/26 handle="k8s-pod-network.8a7acaf483a02b09406d9c2ecbe43dee7da3538ced619672fcc2535258edd2e7" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:30.445214 containerd[1582]: 2025-09-10 06:56:30.350 [INFO][4156] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.79.195/26] block=192.168.79.192/26 handle="k8s-pod-network.8a7acaf483a02b09406d9c2ecbe43dee7da3538ced619672fcc2535258edd2e7" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:30.445214 containerd[1582]: 2025-09-10 06:56:30.353 [INFO][4156] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.195/26] handle="k8s-pod-network.8a7acaf483a02b09406d9c2ecbe43dee7da3538ced619672fcc2535258edd2e7" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:30.445214 containerd[1582]: 2025-09-10 06:56:30.353 [INFO][4156] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 06:56:30.445214 containerd[1582]: 2025-09-10 06:56:30.353 [INFO][4156] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.195/26] IPv6=[] ContainerID="8a7acaf483a02b09406d9c2ecbe43dee7da3538ced619672fcc2535258edd2e7" HandleID="k8s-pod-network.8a7acaf483a02b09406d9c2ecbe43dee7da3538ced619672fcc2535258edd2e7" Workload="srv--fpwqg.gb1.brightbox.com-k8s-whisker--6bb584cc4c--ptsbs-eth0" Sep 10 06:56:30.448480 containerd[1582]: 2025-09-10 06:56:30.364 [INFO][4109] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a7acaf483a02b09406d9c2ecbe43dee7da3538ced619672fcc2535258edd2e7" Namespace="calico-system" Pod="whisker-6bb584cc4c-ptsbs" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-whisker--6bb584cc4c--ptsbs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--fpwqg.gb1.brightbox.com-k8s-whisker--6bb584cc4c--ptsbs-eth0", GenerateName:"whisker-6bb584cc4c-", Namespace:"calico-system", SelfLink:"", UID:"94eee6d9-4326-42ea-bdcb-f15f04919434", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 6, 56, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bb584cc4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-fpwqg.gb1.brightbox.com", ContainerID:"", Pod:"whisker-6bb584cc4c-ptsbs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.79.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4c32b0a6927", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 06:56:30.448480 containerd[1582]: 2025-09-10 06:56:30.365 [INFO][4109] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.195/32] ContainerID="8a7acaf483a02b09406d9c2ecbe43dee7da3538ced619672fcc2535258edd2e7" Namespace="calico-system" Pod="whisker-6bb584cc4c-ptsbs" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-whisker--6bb584cc4c--ptsbs-eth0" Sep 10 06:56:30.448480 containerd[1582]: 2025-09-10 06:56:30.365 [INFO][4109] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c32b0a6927 ContainerID="8a7acaf483a02b09406d9c2ecbe43dee7da3538ced619672fcc2535258edd2e7" Namespace="calico-system" Pod="whisker-6bb584cc4c-ptsbs" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-whisker--6bb584cc4c--ptsbs-eth0" Sep 10 06:56:30.448480 containerd[1582]: 2025-09-10 06:56:30.384 [INFO][4109] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a7acaf483a02b09406d9c2ecbe43dee7da3538ced619672fcc2535258edd2e7" Namespace="calico-system" Pod="whisker-6bb584cc4c-ptsbs" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-whisker--6bb584cc4c--ptsbs-eth0" Sep 10 06:56:30.448480 containerd[1582]: 2025-09-10 06:56:30.395 [INFO][4109] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8a7acaf483a02b09406d9c2ecbe43dee7da3538ced619672fcc2535258edd2e7" 
Namespace="calico-system" Pod="whisker-6bb584cc4c-ptsbs" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-whisker--6bb584cc4c--ptsbs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--fpwqg.gb1.brightbox.com-k8s-whisker--6bb584cc4c--ptsbs-eth0", GenerateName:"whisker-6bb584cc4c-", Namespace:"calico-system", SelfLink:"", UID:"94eee6d9-4326-42ea-bdcb-f15f04919434", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 6, 56, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bb584cc4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-fpwqg.gb1.brightbox.com", ContainerID:"8a7acaf483a02b09406d9c2ecbe43dee7da3538ced619672fcc2535258edd2e7", Pod:"whisker-6bb584cc4c-ptsbs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.79.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4c32b0a6927", MAC:"ce:e8:45:e3:14:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 06:56:30.448480 containerd[1582]: 2025-09-10 06:56:30.436 [INFO][4109] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8a7acaf483a02b09406d9c2ecbe43dee7da3538ced619672fcc2535258edd2e7" Namespace="calico-system" Pod="whisker-6bb584cc4c-ptsbs" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-whisker--6bb584cc4c--ptsbs-eth0" Sep 10 06:56:30.486459 systemd[1]: Started cri-containerd-052a8b691dddf14f8315b46792daf1d88d73bed42905dba320c9c6c3993c682f.scope - libcontainer container 052a8b691dddf14f8315b46792daf1d88d73bed42905dba320c9c6c3993c682f. 
Sep 10 06:56:30.567624 systemd-networkd[1510]: calib92c5065181: Link UP Sep 10 06:56:30.571475 systemd-networkd[1510]: calib92c5065181: Gained carrier Sep 10 06:56:30.591921 containerd[1582]: time="2025-09-10T06:56:30.591860549Z" level=info msg="connecting to shim 8a7acaf483a02b09406d9c2ecbe43dee7da3538ced619672fcc2535258edd2e7" address="unix:///run/containerd/s/17b13c2084359db857ee1a78a60b6da527b4c4daf786d09a509720915ca895a7" namespace=k8s.io protocol=ttrpc version=3 Sep 10 06:56:30.621341 containerd[1582]: 2025-09-10 06:56:29.889 [INFO][4119] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 06:56:30.621341 containerd[1582]: 2025-09-10 06:56:29.933 [INFO][4119] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--fpwqg.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dx6ll-eth0 coredns-668d6bf9bc- kube-system 5783b913-fad6-49b5-a875-5706a6c8ba35 845 0 2025-09-10 06:55:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-fpwqg.gb1.brightbox.com coredns-668d6bf9bc-dx6ll eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib92c5065181 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="038032b659ddeb40d412afeddf4169ba287cbc629eee97b5bc870079ddb55f94" Namespace="kube-system" Pod="coredns-668d6bf9bc-dx6ll" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dx6ll-" Sep 10 06:56:30.621341 containerd[1582]: 2025-09-10 06:56:29.936 [INFO][4119] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="038032b659ddeb40d412afeddf4169ba287cbc629eee97b5bc870079ddb55f94" Namespace="kube-system" Pod="coredns-668d6bf9bc-dx6ll" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dx6ll-eth0" Sep 10 06:56:30.621341 containerd[1582]: 2025-09-10 06:56:30.124 [INFO][4173] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="038032b659ddeb40d412afeddf4169ba287cbc629eee97b5bc870079ddb55f94" HandleID="k8s-pod-network.038032b659ddeb40d412afeddf4169ba287cbc629eee97b5bc870079ddb55f94" Workload="srv--fpwqg.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dx6ll-eth0" Sep 10 06:56:30.621341 containerd[1582]: 2025-09-10 06:56:30.125 [INFO][4173] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="038032b659ddeb40d412afeddf4169ba287cbc629eee97b5bc870079ddb55f94" HandleID="k8s-pod-network.038032b659ddeb40d412afeddf4169ba287cbc629eee97b5bc870079ddb55f94" Workload="srv--fpwqg.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dx6ll-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003697c0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-fpwqg.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-dx6ll", "timestamp":"2025-09-10 06:56:30.124159578 +0000 UTC"}, Hostname:"srv-fpwqg.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 06:56:30.621341 containerd[1582]: 2025-09-10 06:56:30.126 [INFO][4173] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 06:56:30.621341 containerd[1582]: 2025-09-10 06:56:30.354 [INFO][4173] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 06:56:30.621341 containerd[1582]: 2025-09-10 06:56:30.355 [INFO][4173] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-fpwqg.gb1.brightbox.com' Sep 10 06:56:30.621341 containerd[1582]: 2025-09-10 06:56:30.402 [INFO][4173] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.038032b659ddeb40d412afeddf4169ba287cbc629eee97b5bc870079ddb55f94" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:30.621341 containerd[1582]: 2025-09-10 06:56:30.417 [INFO][4173] ipam/ipam.go 394: Looking up existing affinities for host host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:30.621341 containerd[1582]: 2025-09-10 06:56:30.437 [INFO][4173] ipam/ipam.go 511: Trying affinity for 192.168.79.192/26 host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:30.621341 containerd[1582]: 2025-09-10 06:56:30.451 [INFO][4173] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.192/26 host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:30.621341 containerd[1582]: 2025-09-10 06:56:30.466 [INFO][4173] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.192/26 host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:30.621341 containerd[1582]: 2025-09-10 06:56:30.466 [INFO][4173] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.79.192/26 handle="k8s-pod-network.038032b659ddeb40d412afeddf4169ba287cbc629eee97b5bc870079ddb55f94" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:30.621341 containerd[1582]: 2025-09-10 06:56:30.478 [INFO][4173] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.038032b659ddeb40d412afeddf4169ba287cbc629eee97b5bc870079ddb55f94 Sep 10 06:56:30.621341 containerd[1582]: 2025-09-10 06:56:30.492 [INFO][4173] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.79.192/26 handle="k8s-pod-network.038032b659ddeb40d412afeddf4169ba287cbc629eee97b5bc870079ddb55f94" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:30.621341 containerd[1582]: 2025-09-10 06:56:30.537 [INFO][4173] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.79.196/26] block=192.168.79.192/26 handle="k8s-pod-network.038032b659ddeb40d412afeddf4169ba287cbc629eee97b5bc870079ddb55f94" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:30.621341 containerd[1582]: 2025-09-10 06:56:30.540 [INFO][4173] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.196/26] handle="k8s-pod-network.038032b659ddeb40d412afeddf4169ba287cbc629eee97b5bc870079ddb55f94" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:30.621341 containerd[1582]: 2025-09-10 06:56:30.541 [INFO][4173] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 06:56:30.621341 containerd[1582]: 2025-09-10 06:56:30.542 [INFO][4173] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.196/26] IPv6=[] ContainerID="038032b659ddeb40d412afeddf4169ba287cbc629eee97b5bc870079ddb55f94" HandleID="k8s-pod-network.038032b659ddeb40d412afeddf4169ba287cbc629eee97b5bc870079ddb55f94" Workload="srv--fpwqg.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dx6ll-eth0" Sep 10 06:56:30.623766 containerd[1582]: 2025-09-10 06:56:30.556 [INFO][4119] cni-plugin/k8s.go 418: Populated endpoint ContainerID="038032b659ddeb40d412afeddf4169ba287cbc629eee97b5bc870079ddb55f94" Namespace="kube-system" Pod="coredns-668d6bf9bc-dx6ll" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dx6ll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--fpwqg.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dx6ll-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5783b913-fad6-49b5-a875-5706a6c8ba35", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 6, 55, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-fpwqg.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-dx6ll", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib92c5065181", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 06:56:30.623766 containerd[1582]: 2025-09-10 06:56:30.558 [INFO][4119] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.196/32] ContainerID="038032b659ddeb40d412afeddf4169ba287cbc629eee97b5bc870079ddb55f94" Namespace="kube-system" Pod="coredns-668d6bf9bc-dx6ll" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dx6ll-eth0" Sep 10 06:56:30.623766 containerd[1582]: 2025-09-10 06:56:30.558 [INFO][4119] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib92c5065181 ContainerID="038032b659ddeb40d412afeddf4169ba287cbc629eee97b5bc870079ddb55f94" Namespace="kube-system" Pod="coredns-668d6bf9bc-dx6ll" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dx6ll-eth0" Sep 10 06:56:30.623766 containerd[1582]: 2025-09-10 06:56:30.567 [INFO][4119] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="038032b659ddeb40d412afeddf4169ba287cbc629eee97b5bc870079ddb55f94" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-dx6ll" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dx6ll-eth0" Sep 10 06:56:30.623766 containerd[1582]: 2025-09-10 06:56:30.567 [INFO][4119] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="038032b659ddeb40d412afeddf4169ba287cbc629eee97b5bc870079ddb55f94" Namespace="kube-system" Pod="coredns-668d6bf9bc-dx6ll" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dx6ll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--fpwqg.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dx6ll-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5783b913-fad6-49b5-a875-5706a6c8ba35", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 6, 55, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-fpwqg.gb1.brightbox.com", ContainerID:"038032b659ddeb40d412afeddf4169ba287cbc629eee97b5bc870079ddb55f94", Pod:"coredns-668d6bf9bc-dx6ll", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.79.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib92c5065181", MAC:"2a:b2:94:fe:95:32", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 06:56:30.623766 containerd[1582]: 2025-09-10 06:56:30.598 [INFO][4119] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="038032b659ddeb40d412afeddf4169ba287cbc629eee97b5bc870079ddb55f94" Namespace="kube-system" Pod="coredns-668d6bf9bc-dx6ll" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-coredns--668d6bf9bc--dx6ll-eth0" Sep 10 06:56:30.651135 systemd[1]: Started cri-containerd-8a7acaf483a02b09406d9c2ecbe43dee7da3538ced619672fcc2535258edd2e7.scope - libcontainer container 8a7acaf483a02b09406d9c2ecbe43dee7da3538ced619672fcc2535258edd2e7. 
Sep 10 06:56:30.708546 containerd[1582]: time="2025-09-10T06:56:30.708468421Z" level=info msg="connecting to shim 038032b659ddeb40d412afeddf4169ba287cbc629eee97b5bc870079ddb55f94" address="unix:///run/containerd/s/1f129ab7b82d7f44c7879ce282bfe7e9e705e5e58eac10a0ce39ccfa6e8dd329" namespace=k8s.io protocol=ttrpc version=3 Sep 10 06:56:30.719341 systemd-networkd[1510]: calic9a3f4ada76: Gained IPv6LL Sep 10 06:56:30.753046 containerd[1582]: time="2025-09-10T06:56:30.752466265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7fbdcc55-nqblr,Uid:0b31fdb8-1f9c-4766-89d1-d243e0a8ca9b,Namespace:calico-apiserver,Attempt:0,}" Sep 10 06:56:30.756057 containerd[1582]: time="2025-09-10T06:56:30.755880221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7fbdcc55-tw5d8,Uid:b5cb7d1c-6f0d-4131-b03d-fe53b52de241,Namespace:calico-apiserver,Attempt:0,}" Sep 10 06:56:30.757215 containerd[1582]: time="2025-09-10T06:56:30.756314670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cqtk9,Uid:1c0c2c0d-f637-47eb-bc16-5e32512b9b12,Namespace:calico-system,Attempt:0,}" Sep 10 06:56:30.775969 containerd[1582]: time="2025-09-10T06:56:30.775890972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-scp8z,Uid:4b7949e4-64e9-4bb8-bda2-569ab0826025,Namespace:kube-system,Attempt:0,} returns sandbox id \"052a8b691dddf14f8315b46792daf1d88d73bed42905dba320c9c6c3993c682f\"" Sep 10 06:56:30.798944 containerd[1582]: time="2025-09-10T06:56:30.798671847Z" level=info msg="CreateContainer within sandbox \"052a8b691dddf14f8315b46792daf1d88d73bed42905dba320c9c6c3993c682f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 06:56:30.852439 systemd[1]: Started cri-containerd-038032b659ddeb40d412afeddf4169ba287cbc629eee97b5bc870079ddb55f94.scope - libcontainer container 038032b659ddeb40d412afeddf4169ba287cbc629eee97b5bc870079ddb55f94. 
Sep 10 06:56:30.921220 containerd[1582]: time="2025-09-10T06:56:30.919514604Z" level=info msg="Container 283e1929b8e2b2f02d361abc3180aa4fa8ef5d70387014ff11a8bbbb77d860ee: CDI devices from CRI Config.CDIDevices: []" Sep 10 06:56:30.960802 containerd[1582]: time="2025-09-10T06:56:30.960624359Z" level=info msg="CreateContainer within sandbox \"052a8b691dddf14f8315b46792daf1d88d73bed42905dba320c9c6c3993c682f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"283e1929b8e2b2f02d361abc3180aa4fa8ef5d70387014ff11a8bbbb77d860ee\"" Sep 10 06:56:30.964384 containerd[1582]: time="2025-09-10T06:56:30.964347701Z" level=info msg="StartContainer for \"283e1929b8e2b2f02d361abc3180aa4fa8ef5d70387014ff11a8bbbb77d860ee\"" Sep 10 06:56:30.967787 containerd[1582]: time="2025-09-10T06:56:30.967696171Z" level=info msg="connecting to shim 283e1929b8e2b2f02d361abc3180aa4fa8ef5d70387014ff11a8bbbb77d860ee" address="unix:///run/containerd/s/07974f2c227280fa0d060fb305dad92c9ec3bc3a0e8ae30afb67c93728e2bdcf" protocol=ttrpc version=3 Sep 10 06:56:31.065482 containerd[1582]: time="2025-09-10T06:56:31.064060046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dx6ll,Uid:5783b913-fad6-49b5-a875-5706a6c8ba35,Namespace:kube-system,Attempt:0,} returns sandbox id \"038032b659ddeb40d412afeddf4169ba287cbc629eee97b5bc870079ddb55f94\"" Sep 10 06:56:31.079335 containerd[1582]: time="2025-09-10T06:56:31.079230640Z" level=info msg="CreateContainer within sandbox \"038032b659ddeb40d412afeddf4169ba287cbc629eee97b5bc870079ddb55f94\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 06:56:31.119955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3568410223.mount: Deactivated successfully. Sep 10 06:56:31.132147 containerd[1582]: time="2025-09-10T06:56:31.131380639Z" level=info msg="Container bb361bbb7a18d432554f6d5048cee497a3631359c814687710f2c4e590101f43: CDI devices from CRI Config.CDIDevices: []" Sep 10 06:56:31.149819 containerd[1582]: time="2025-09-10T06:56:31.149742619Z" level=info msg="CreateContainer within sandbox \"038032b659ddeb40d412afeddf4169ba287cbc629eee97b5bc870079ddb55f94\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bb361bbb7a18d432554f6d5048cee497a3631359c814687710f2c4e590101f43\"" Sep 10 06:56:31.150932 containerd[1582]: time="2025-09-10T06:56:31.150900765Z" level=info msg="StartContainer for \"bb361bbb7a18d432554f6d5048cee497a3631359c814687710f2c4e590101f43\"" Sep 10 06:56:31.162939 containerd[1582]: time="2025-09-10T06:56:31.162882822Z" level=info msg="connecting to shim bb361bbb7a18d432554f6d5048cee497a3631359c814687710f2c4e590101f43" address="unix:///run/containerd/s/1f129ab7b82d7f44c7879ce282bfe7e9e705e5e58eac10a0ce39ccfa6e8dd329" protocol=ttrpc version=3 Sep 10 06:56:31.195416 systemd[1]: Started cri-containerd-283e1929b8e2b2f02d361abc3180aa4fa8ef5d70387014ff11a8bbbb77d860ee.scope - libcontainer container 283e1929b8e2b2f02d361abc3180aa4fa8ef5d70387014ff11a8bbbb77d860ee. Sep 10 06:56:31.360166 containerd[1582]: time="2025-09-10T06:56:31.359857976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bb584cc4c-ptsbs,Uid:94eee6d9-4326-42ea-bdcb-f15f04919434,Namespace:calico-system,Attempt:0,} returns sandbox id \"8a7acaf483a02b09406d9c2ecbe43dee7da3538ced619672fcc2535258edd2e7\"" Sep 10 06:56:31.380077 systemd[1]: Started cri-containerd-bb361bbb7a18d432554f6d5048cee497a3631359c814687710f2c4e590101f43.scope - libcontainer container bb361bbb7a18d432554f6d5048cee497a3631359c814687710f2c4e590101f43. 
Sep 10 06:56:31.510468 systemd-networkd[1510]: cali6b402aadcb3: Link UP Sep 10 06:56:31.523297 systemd-networkd[1510]: cali6b402aadcb3: Gained carrier Sep 10 06:56:31.610423 systemd-networkd[1510]: cali89465a2298e: Gained IPv6LL Sep 10 06:56:31.611580 containerd[1582]: 2025-09-10 06:56:30.950 [INFO][4416] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 06:56:31.611580 containerd[1582]: 2025-09-10 06:56:31.002 [INFO][4416] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--fpwqg.gb1.brightbox.com-k8s-csi--node--driver--cqtk9-eth0 csi-node-driver- calico-system 1c0c2c0d-f637-47eb-bc16-5e32512b9b12 729 0 2025-09-10 06:56:01 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-fpwqg.gb1.brightbox.com csi-node-driver-cqtk9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6b402aadcb3 [] [] }} ContainerID="9318b59df6914a69aec61a10f5a0634d8551423c1342d72e949178d8ab597f00" Namespace="calico-system" Pod="csi-node-driver-cqtk9" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-csi--node--driver--cqtk9-" Sep 10 06:56:31.611580 containerd[1582]: 2025-09-10 06:56:31.003 [INFO][4416] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9318b59df6914a69aec61a10f5a0634d8551423c1342d72e949178d8ab597f00" Namespace="calico-system" Pod="csi-node-driver-cqtk9" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-csi--node--driver--cqtk9-eth0" Sep 10 06:56:31.611580 containerd[1582]: 2025-09-10 06:56:31.173 [INFO][4463] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9318b59df6914a69aec61a10f5a0634d8551423c1342d72e949178d8ab597f00" HandleID="k8s-pod-network.9318b59df6914a69aec61a10f5a0634d8551423c1342d72e949178d8ab597f00" Workload="srv--fpwqg.gb1.brightbox.com-k8s-csi--node--driver--cqtk9-eth0" Sep 10 06:56:31.611580 containerd[1582]: 2025-09-10 06:56:31.174 [INFO][4463] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9318b59df6914a69aec61a10f5a0634d8551423c1342d72e949178d8ab597f00" HandleID="k8s-pod-network.9318b59df6914a69aec61a10f5a0634d8551423c1342d72e949178d8ab597f00" Workload="srv--fpwqg.gb1.brightbox.com-k8s-csi--node--driver--cqtk9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a17e0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-fpwqg.gb1.brightbox.com", "pod":"csi-node-driver-cqtk9", "timestamp":"2025-09-10 06:56:31.173618778 +0000 UTC"}, Hostname:"srv-fpwqg.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 06:56:31.611580 containerd[1582]: 2025-09-10 06:56:31.174 [INFO][4463] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 06:56:31.611580 containerd[1582]: 2025-09-10 06:56:31.174 [INFO][4463] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 06:56:31.611580 containerd[1582]: 2025-09-10 06:56:31.174 [INFO][4463] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-fpwqg.gb1.brightbox.com' Sep 10 06:56:31.611580 containerd[1582]: 2025-09-10 06:56:31.205 [INFO][4463] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9318b59df6914a69aec61a10f5a0634d8551423c1342d72e949178d8ab597f00" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:31.611580 containerd[1582]: 2025-09-10 06:56:31.260 [INFO][4463] ipam/ipam.go 394: Looking up existing affinities for host host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:31.611580 containerd[1582]: 2025-09-10 06:56:31.278 [INFO][4463] ipam/ipam.go 511: Trying affinity for 192.168.79.192/26 host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:31.611580 containerd[1582]: 2025-09-10 06:56:31.295 [INFO][4463] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.192/26 host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:31.611580 containerd[1582]: 2025-09-10 06:56:31.310 [INFO][4463] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.192/26 host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:31.611580 containerd[1582]: 2025-09-10 06:56:31.311 [INFO][4463] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.79.192/26 handle="k8s-pod-network.9318b59df6914a69aec61a10f5a0634d8551423c1342d72e949178d8ab597f00" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:31.611580 containerd[1582]: 2025-09-10 06:56:31.329 [INFO][4463] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9318b59df6914a69aec61a10f5a0634d8551423c1342d72e949178d8ab597f00 Sep 10 06:56:31.611580 containerd[1582]: 2025-09-10 06:56:31.383 [INFO][4463] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.79.192/26 handle="k8s-pod-network.9318b59df6914a69aec61a10f5a0634d8551423c1342d72e949178d8ab597f00" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:31.611580 containerd[1582]: 2025-09-10 06:56:31.418 [INFO][4463] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.79.197/26] block=192.168.79.192/26 handle="k8s-pod-network.9318b59df6914a69aec61a10f5a0634d8551423c1342d72e949178d8ab597f00" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:31.611580 containerd[1582]: 2025-09-10 06:56:31.418 [INFO][4463] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.197/26] handle="k8s-pod-network.9318b59df6914a69aec61a10f5a0634d8551423c1342d72e949178d8ab597f00" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:31.611580 containerd[1582]: 2025-09-10 06:56:31.418 [INFO][4463] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
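The IPAM trace above follows Calico's block-affinity path: the node already holds an affine block 192.168.79.192/26 (192.168.79.192 through 192.168.79.255, 64 addresses), so the plugin claims the next free address from it, here 192.168.79.197, instead of allocating a new block. A short Go check of that arithmetic, using only the standard library:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block and assigned address taken from the IPAM log lines above.
	block := netip.MustParsePrefix("192.168.79.192/26")
	assigned := netip.MustParseAddr("192.168.79.197")

	fmt.Println("block contains assigned address:", block.Contains(assigned)) // true

	// A /26 spans 2^(32-26) = 64 addresses; walk to the last one for reference.
	first := block.Masked().Addr()
	last := first
	for i := 0; i < 63; i++ {
		last = last.Next()
	}
	fmt.Printf("block range: %s - %s (64 addresses)\n", first, last) // .192 - .255
}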
Sep 10 06:56:31.611580 containerd[1582]: 2025-09-10 06:56:31.418 [INFO][4463] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.197/26] IPv6=[] ContainerID="9318b59df6914a69aec61a10f5a0634d8551423c1342d72e949178d8ab597f00" HandleID="k8s-pod-network.9318b59df6914a69aec61a10f5a0634d8551423c1342d72e949178d8ab597f00" Workload="srv--fpwqg.gb1.brightbox.com-k8s-csi--node--driver--cqtk9-eth0" Sep 10 06:56:31.610918 systemd-networkd[1510]: cali4c32b0a6927: Gained IPv6LL Sep 10 06:56:31.612889 containerd[1582]: 2025-09-10 06:56:31.443 [INFO][4416] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9318b59df6914a69aec61a10f5a0634d8551423c1342d72e949178d8ab597f00" Namespace="calico-system" Pod="csi-node-driver-cqtk9" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-csi--node--driver--cqtk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--fpwqg.gb1.brightbox.com-k8s-csi--node--driver--cqtk9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1c0c2c0d-f637-47eb-bc16-5e32512b9b12", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 6, 56, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-fpwqg.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-cqtk9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.79.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6b402aadcb3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 06:56:31.612889 containerd[1582]: 2025-09-10 06:56:31.447 [INFO][4416] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.197/32] ContainerID="9318b59df6914a69aec61a10f5a0634d8551423c1342d72e949178d8ab597f00" Namespace="calico-system" Pod="csi-node-driver-cqtk9" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-csi--node--driver--cqtk9-eth0" Sep 10 06:56:31.612889 containerd[1582]: 2025-09-10 06:56:31.447 [INFO][4416] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b402aadcb3 ContainerID="9318b59df6914a69aec61a10f5a0634d8551423c1342d72e949178d8ab597f00" Namespace="calico-system" Pod="csi-node-driver-cqtk9" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-csi--node--driver--cqtk9-eth0" Sep 10 06:56:31.612889 containerd[1582]: 2025-09-10 06:56:31.524 [INFO][4416] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9318b59df6914a69aec61a10f5a0634d8551423c1342d72e949178d8ab597f00" Namespace="calico-system" Pod="csi-node-driver-cqtk9" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-csi--node--driver--cqtk9-eth0" Sep 10 06:56:31.612889 containerd[1582]: 2025-09-10 06:56:31.536 [INFO][4416] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="9318b59df6914a69aec61a10f5a0634d8551423c1342d72e949178d8ab597f00" Namespace="calico-system" Pod="csi-node-driver-cqtk9" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-csi--node--driver--cqtk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--fpwqg.gb1.brightbox.com-k8s-csi--node--driver--cqtk9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1c0c2c0d-f637-47eb-bc16-5e32512b9b12", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 6, 56, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-fpwqg.gb1.brightbox.com", ContainerID:"9318b59df6914a69aec61a10f5a0634d8551423c1342d72e949178d8ab597f00", Pod:"csi-node-driver-cqtk9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.79.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6b402aadcb3", MAC:"da:bf:84:60:fa:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 06:56:31.612889 containerd[1582]: 2025-09-10 06:56:31.566 [INFO][4416] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9318b59df6914a69aec61a10f5a0634d8551423c1342d72e949178d8ab597f00" Namespace="calico-system" Pod="csi-node-driver-cqtk9" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-csi--node--driver--cqtk9-eth0" Sep 10 06:56:31.617634 containerd[1582]: time="2025-09-10T06:56:31.616782230Z" level=info msg="StartContainer for \"283e1929b8e2b2f02d361abc3180aa4fa8ef5d70387014ff11a8bbbb77d860ee\" returns successfully" Sep 10 06:56:31.718413 containerd[1582]: time="2025-09-10T06:56:31.717865000Z" level=info msg="StartContainer for \"bb361bbb7a18d432554f6d5048cee497a3631359c814687710f2c4e590101f43\" returns successfully" Sep 10 06:56:31.790158 containerd[1582]: time="2025-09-10T06:56:31.789157808Z" level=info msg="connecting to shim 9318b59df6914a69aec61a10f5a0634d8551423c1342d72e949178d8ab597f00" address="unix:///run/containerd/s/fbbf927e4e46adb6f9794fa08effac9ae677f9324eb6a583fe998b38fa7d7714" namespace=k8s.io protocol=ttrpc version=3 Sep 10 06:56:31.840176 systemd-networkd[1510]: cali95ba3c749bb: Link UP Sep 10 06:56:31.847295 systemd-networkd[1510]: cali95ba3c749bb: Gained carrier Sep 10 06:56:31.951052 containerd[1582]: 2025-09-10 06:56:31.232 [INFO][4412] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 06:56:31.951052 containerd[1582]: 2025-09-10 06:56:31.354 [INFO][4412] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--fpwqg.gb1.brightbox.com-k8s-calico--apiserver--6f7fbdcc55--nqblr-eth0 calico-apiserver-6f7fbdcc55- calico-apiserver 
0b31fdb8-1f9c-4766-89d1-d243e0a8ca9b 853 0 2025-09-10 06:55:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f7fbdcc55 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-fpwqg.gb1.brightbox.com calico-apiserver-6f7fbdcc55-nqblr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali95ba3c749bb [] [] }} ContainerID="159fe341040ab3a1f764f85e768ae59773be67b332020f83550d46006818ddfc" Namespace="calico-apiserver" Pod="calico-apiserver-6f7fbdcc55-nqblr" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-calico--apiserver--6f7fbdcc55--nqblr-" Sep 10 06:56:31.951052 containerd[1582]: 2025-09-10 06:56:31.354 [INFO][4412] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="159fe341040ab3a1f764f85e768ae59773be67b332020f83550d46006818ddfc" Namespace="calico-apiserver" Pod="calico-apiserver-6f7fbdcc55-nqblr" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-calico--apiserver--6f7fbdcc55--nqblr-eth0" Sep 10 06:56:31.951052 containerd[1582]: 2025-09-10 06:56:31.514 [INFO][4535] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="159fe341040ab3a1f764f85e768ae59773be67b332020f83550d46006818ddfc" HandleID="k8s-pod-network.159fe341040ab3a1f764f85e768ae59773be67b332020f83550d46006818ddfc" Workload="srv--fpwqg.gb1.brightbox.com-k8s-calico--apiserver--6f7fbdcc55--nqblr-eth0" Sep 10 06:56:31.951052 containerd[1582]: 2025-09-10 06:56:31.514 [INFO][4535] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="159fe341040ab3a1f764f85e768ae59773be67b332020f83550d46006818ddfc" HandleID="k8s-pod-network.159fe341040ab3a1f764f85e768ae59773be67b332020f83550d46006818ddfc" Workload="srv--fpwqg.gb1.brightbox.com-k8s-calico--apiserver--6f7fbdcc55--nqblr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032c140), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-fpwqg.gb1.brightbox.com", "pod":"calico-apiserver-6f7fbdcc55-nqblr", "timestamp":"2025-09-10 06:56:31.514609138 +0000 UTC"}, Hostname:"srv-fpwqg.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 06:56:31.951052 containerd[1582]: 2025-09-10 06:56:31.519 [INFO][4535] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 06:56:31.951052 containerd[1582]: 2025-09-10 06:56:31.519 [INFO][4535] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 06:56:31.951052 containerd[1582]: 2025-09-10 06:56:31.519 [INFO][4535] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-fpwqg.gb1.brightbox.com' Sep 10 06:56:31.951052 containerd[1582]: 2025-09-10 06:56:31.561 [INFO][4535] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.159fe341040ab3a1f764f85e768ae59773be67b332020f83550d46006818ddfc" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:31.951052 containerd[1582]: 2025-09-10 06:56:31.601 [INFO][4535] ipam/ipam.go 394: Looking up existing affinities for host host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:31.951052 containerd[1582]: 2025-09-10 06:56:31.670 [INFO][4535] ipam/ipam.go 511: Trying affinity for 192.168.79.192/26 host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:31.951052 containerd[1582]: 2025-09-10 06:56:31.675 [INFO][4535] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.192/26 host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:31.951052 containerd[1582]: 2025-09-10 06:56:31.686 [INFO][4535] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.192/26 host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:31.951052 containerd[1582]: 2025-09-10 06:56:31.687 [INFO][4535] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.79.192/26 handle="k8s-pod-network.159fe341040ab3a1f764f85e768ae59773be67b332020f83550d46006818ddfc" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:31.951052 containerd[1582]: 2025-09-10 06:56:31.698 [INFO][4535] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.159fe341040ab3a1f764f85e768ae59773be67b332020f83550d46006818ddfc Sep 10 06:56:31.951052 containerd[1582]: 2025-09-10 06:56:31.720 [INFO][4535] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.79.192/26 handle="k8s-pod-network.159fe341040ab3a1f764f85e768ae59773be67b332020f83550d46006818ddfc" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:31.951052 containerd[1582]: 2025-09-10 06:56:31.773 [INFO][4535] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.79.198/26] block=192.168.79.192/26 handle="k8s-pod-network.159fe341040ab3a1f764f85e768ae59773be67b332020f83550d46006818ddfc" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:31.951052 containerd[1582]: 2025-09-10 06:56:31.773 [INFO][4535] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.198/26] handle="k8s-pod-network.159fe341040ab3a1f764f85e768ae59773be67b332020f83550d46006818ddfc" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:31.951052 containerd[1582]: 2025-09-10 06:56:31.773 [INFO][4535] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 06:56:31.951052 containerd[1582]: 2025-09-10 06:56:31.773 [INFO][4535] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.198/26] IPv6=[] ContainerID="159fe341040ab3a1f764f85e768ae59773be67b332020f83550d46006818ddfc" HandleID="k8s-pod-network.159fe341040ab3a1f764f85e768ae59773be67b332020f83550d46006818ddfc" Workload="srv--fpwqg.gb1.brightbox.com-k8s-calico--apiserver--6f7fbdcc55--nqblr-eth0" Sep 10 06:56:31.956103 containerd[1582]: 2025-09-10 06:56:31.780 [INFO][4412] cni-plugin/k8s.go 418: Populated endpoint ContainerID="159fe341040ab3a1f764f85e768ae59773be67b332020f83550d46006818ddfc" Namespace="calico-apiserver" Pod="calico-apiserver-6f7fbdcc55-nqblr" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-calico--apiserver--6f7fbdcc55--nqblr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--fpwqg.gb1.brightbox.com-k8s-calico--apiserver--6f7fbdcc55--nqblr-eth0", GenerateName:"calico-apiserver-6f7fbdcc55-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b31fdb8-1f9c-4766-89d1-d243e0a8ca9b", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 6, 55, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f7fbdcc55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-fpwqg.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-6f7fbdcc55-nqblr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali95ba3c749bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 06:56:31.956103 containerd[1582]: 2025-09-10 06:56:31.780 [INFO][4412] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.198/32] ContainerID="159fe341040ab3a1f764f85e768ae59773be67b332020f83550d46006818ddfc" Namespace="calico-apiserver" Pod="calico-apiserver-6f7fbdcc55-nqblr" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-calico--apiserver--6f7fbdcc55--nqblr-eth0" Sep 10 06:56:31.956103 containerd[1582]: 2025-09-10 06:56:31.780 [INFO][4412] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95ba3c749bb ContainerID="159fe341040ab3a1f764f85e768ae59773be67b332020f83550d46006818ddfc" Namespace="calico-apiserver" Pod="calico-apiserver-6f7fbdcc55-nqblr" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-calico--apiserver--6f7fbdcc55--nqblr-eth0" Sep 10 06:56:31.956103 containerd[1582]: 2025-09-10 06:56:31.848 [INFO][4412] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="159fe341040ab3a1f764f85e768ae59773be67b332020f83550d46006818ddfc" Namespace="calico-apiserver" Pod="calico-apiserver-6f7fbdcc55-nqblr" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-calico--apiserver--6f7fbdcc55--nqblr-eth0" Sep 10 06:56:31.956103 containerd[1582]: 2025-09-10 
06:56:31.850 [INFO][4412] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="159fe341040ab3a1f764f85e768ae59773be67b332020f83550d46006818ddfc" Namespace="calico-apiserver" Pod="calico-apiserver-6f7fbdcc55-nqblr" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-calico--apiserver--6f7fbdcc55--nqblr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--fpwqg.gb1.brightbox.com-k8s-calico--apiserver--6f7fbdcc55--nqblr-eth0", GenerateName:"calico-apiserver-6f7fbdcc55-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b31fdb8-1f9c-4766-89d1-d243e0a8ca9b", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 6, 55, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f7fbdcc55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-fpwqg.gb1.brightbox.com", ContainerID:"159fe341040ab3a1f764f85e768ae59773be67b332020f83550d46006818ddfc", Pod:"calico-apiserver-6f7fbdcc55-nqblr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali95ba3c749bb", MAC:"06:e0:f3:f2:a7:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 06:56:31.956103 containerd[1582]: 2025-09-10 06:56:31.899 [INFO][4412] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="159fe341040ab3a1f764f85e768ae59773be67b332020f83550d46006818ddfc" Namespace="calico-apiserver" Pod="calico-apiserver-6f7fbdcc55-nqblr" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-calico--apiserver--6f7fbdcc55--nqblr-eth0" Sep 10 06:56:31.966685 systemd[1]: Started cri-containerd-9318b59df6914a69aec61a10f5a0634d8551423c1342d72e949178d8ab597f00.scope - libcontainer container 9318b59df6914a69aec61a10f5a0634d8551423c1342d72e949178d8ab597f00. 
Sep 10 06:56:32.121954 systemd-networkd[1510]: calib92c5065181: Gained IPv6LL Sep 10 06:56:32.126292 systemd-networkd[1510]: cali535cdc8ec33: Link UP Sep 10 06:56:32.128922 systemd-networkd[1510]: cali535cdc8ec33: Gained carrier Sep 10 06:56:32.139075 containerd[1582]: time="2025-09-10T06:56:32.138945895Z" level=info msg="connecting to shim 159fe341040ab3a1f764f85e768ae59773be67b332020f83550d46006818ddfc" address="unix:///run/containerd/s/c5da5ab67c6da680267ea09714c993f934f0453b22c16bf4f8a144e996dadc01" namespace=k8s.io protocol=ttrpc version=3 Sep 10 06:56:32.240573 containerd[1582]: 2025-09-10 06:56:31.254 [INFO][4429] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 06:56:32.240573 containerd[1582]: 2025-09-10 06:56:31.374 [INFO][4429] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--fpwqg.gb1.brightbox.com-k8s-calico--apiserver--6f7fbdcc55--tw5d8-eth0 calico-apiserver-6f7fbdcc55- calico-apiserver b5cb7d1c-6f0d-4131-b03d-fe53b52de241 851 0 2025-09-10 06:55:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f7fbdcc55 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-fpwqg.gb1.brightbox.com calico-apiserver-6f7fbdcc55-tw5d8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali535cdc8ec33 [] [] }} ContainerID="882bf226c919bb539875961c2fbf6e9c74f396fc477c4e654cbb29bebfa004ef" Namespace="calico-apiserver" Pod="calico-apiserver-6f7fbdcc55-tw5d8" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-calico--apiserver--6f7fbdcc55--tw5d8-" Sep 10 06:56:32.240573 containerd[1582]: 2025-09-10 06:56:31.374 [INFO][4429] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="882bf226c919bb539875961c2fbf6e9c74f396fc477c4e654cbb29bebfa004ef" Namespace="calico-apiserver" Pod="calico-apiserver-6f7fbdcc55-tw5d8" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-calico--apiserver--6f7fbdcc55--tw5d8-eth0" Sep 10 06:56:32.240573 containerd[1582]: 2025-09-10 06:56:31.844 [INFO][4532] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="882bf226c919bb539875961c2fbf6e9c74f396fc477c4e654cbb29bebfa004ef" HandleID="k8s-pod-network.882bf226c919bb539875961c2fbf6e9c74f396fc477c4e654cbb29bebfa004ef" Workload="srv--fpwqg.gb1.brightbox.com-k8s-calico--apiserver--6f7fbdcc55--tw5d8-eth0" Sep 10 06:56:32.240573 containerd[1582]: 2025-09-10 06:56:31.855 [INFO][4532] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="882bf226c919bb539875961c2fbf6e9c74f396fc477c4e654cbb29bebfa004ef" HandleID="k8s-pod-network.882bf226c919bb539875961c2fbf6e9c74f396fc477c4e654cbb29bebfa004ef" Workload="srv--fpwqg.gb1.brightbox.com-k8s-calico--apiserver--6f7fbdcc55--tw5d8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000394bc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-fpwqg.gb1.brightbox.com", "pod":"calico-apiserver-6f7fbdcc55-tw5d8", "timestamp":"2025-09-10 06:56:31.844462864 +0000 UTC"}, Hostname:"srv-fpwqg.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 06:56:32.240573 containerd[1582]: 2025-09-10 06:56:31.855 [INFO][4532] ipam/ipam_plugin.go 353: About to acquire host-wide 
IPAM lock. Sep 10 06:56:32.240573 containerd[1582]: 2025-09-10 06:56:31.855 [INFO][4532] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 06:56:32.240573 containerd[1582]: 2025-09-10 06:56:31.855 [INFO][4532] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-fpwqg.gb1.brightbox.com' Sep 10 06:56:32.240573 containerd[1582]: 2025-09-10 06:56:31.909 [INFO][4532] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.882bf226c919bb539875961c2fbf6e9c74f396fc477c4e654cbb29bebfa004ef" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:32.240573 containerd[1582]: 2025-09-10 06:56:31.965 [INFO][4532] ipam/ipam.go 394: Looking up existing affinities for host host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:32.240573 containerd[1582]: 2025-09-10 06:56:31.993 [INFO][4532] ipam/ipam.go 511: Trying affinity for 192.168.79.192/26 host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:32.240573 containerd[1582]: 2025-09-10 06:56:32.014 [INFO][4532] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.192/26 host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:32.240573 containerd[1582]: 2025-09-10 06:56:32.026 [INFO][4532] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.192/26 host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:32.240573 containerd[1582]: 2025-09-10 06:56:32.026 [INFO][4532] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.79.192/26 handle="k8s-pod-network.882bf226c919bb539875961c2fbf6e9c74f396fc477c4e654cbb29bebfa004ef" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:32.240573 containerd[1582]: 2025-09-10 06:56:32.040 [INFO][4532] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.882bf226c919bb539875961c2fbf6e9c74f396fc477c4e654cbb29bebfa004ef Sep 10 06:56:32.240573 containerd[1582]: 2025-09-10 06:56:32.054 [INFO][4532] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.79.192/26 handle="k8s-pod-network.882bf226c919bb539875961c2fbf6e9c74f396fc477c4e654cbb29bebfa004ef" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:32.240573 containerd[1582]: 2025-09-10 06:56:32.099 [INFO][4532] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.79.199/26] block=192.168.79.192/26 handle="k8s-pod-network.882bf226c919bb539875961c2fbf6e9c74f396fc477c4e654cbb29bebfa004ef" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:32.240573 containerd[1582]: 2025-09-10 06:56:32.100 [INFO][4532] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.199/26] handle="k8s-pod-network.882bf226c919bb539875961c2fbf6e9c74f396fc477c4e654cbb29bebfa004ef" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:32.240573 containerd[1582]: 2025-09-10 06:56:32.100 [INFO][4532] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 06:56:32.240573 containerd[1582]: 2025-09-10 06:56:32.100 [INFO][4532] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.199/26] IPv6=[] ContainerID="882bf226c919bb539875961c2fbf6e9c74f396fc477c4e654cbb29bebfa004ef" HandleID="k8s-pod-network.882bf226c919bb539875961c2fbf6e9c74f396fc477c4e654cbb29bebfa004ef" Workload="srv--fpwqg.gb1.brightbox.com-k8s-calico--apiserver--6f7fbdcc55--tw5d8-eth0" Sep 10 06:56:32.244648 containerd[1582]: 2025-09-10 06:56:32.112 [INFO][4429] cni-plugin/k8s.go 418: Populated endpoint ContainerID="882bf226c919bb539875961c2fbf6e9c74f396fc477c4e654cbb29bebfa004ef" Namespace="calico-apiserver" Pod="calico-apiserver-6f7fbdcc55-tw5d8" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-calico--apiserver--6f7fbdcc55--tw5d8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--fpwqg.gb1.brightbox.com-k8s-calico--apiserver--6f7fbdcc55--tw5d8-eth0", GenerateName:"calico-apiserver-6f7fbdcc55-", Namespace:"calico-apiserver", SelfLink:"", UID:"b5cb7d1c-6f0d-4131-b03d-fe53b52de241", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 6, 55, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f7fbdcc55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-fpwqg.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-6f7fbdcc55-tw5d8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali535cdc8ec33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 06:56:32.244648 containerd[1582]: 2025-09-10 06:56:32.112 [INFO][4429] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.199/32] ContainerID="882bf226c919bb539875961c2fbf6e9c74f396fc477c4e654cbb29bebfa004ef" Namespace="calico-apiserver" Pod="calico-apiserver-6f7fbdcc55-tw5d8" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-calico--apiserver--6f7fbdcc55--tw5d8-eth0" Sep 10 06:56:32.244648 containerd[1582]: 2025-09-10 06:56:32.113 [INFO][4429] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali535cdc8ec33 ContainerID="882bf226c919bb539875961c2fbf6e9c74f396fc477c4e654cbb29bebfa004ef" Namespace="calico-apiserver" Pod="calico-apiserver-6f7fbdcc55-tw5d8" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-calico--apiserver--6f7fbdcc55--tw5d8-eth0" Sep 10 06:56:32.244648 containerd[1582]: 2025-09-10 06:56:32.130 [INFO][4429] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="882bf226c919bb539875961c2fbf6e9c74f396fc477c4e654cbb29bebfa004ef" Namespace="calico-apiserver" Pod="calico-apiserver-6f7fbdcc55-tw5d8" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-calico--apiserver--6f7fbdcc55--tw5d8-eth0" Sep 10 06:56:32.244648 containerd[1582]: 2025-09-10 
06:56:32.131 [INFO][4429] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="882bf226c919bb539875961c2fbf6e9c74f396fc477c4e654cbb29bebfa004ef" Namespace="calico-apiserver" Pod="calico-apiserver-6f7fbdcc55-tw5d8" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-calico--apiserver--6f7fbdcc55--tw5d8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--fpwqg.gb1.brightbox.com-k8s-calico--apiserver--6f7fbdcc55--tw5d8-eth0", GenerateName:"calico-apiserver-6f7fbdcc55-", Namespace:"calico-apiserver", SelfLink:"", UID:"b5cb7d1c-6f0d-4131-b03d-fe53b52de241", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 6, 55, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f7fbdcc55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-fpwqg.gb1.brightbox.com", ContainerID:"882bf226c919bb539875961c2fbf6e9c74f396fc477c4e654cbb29bebfa004ef", Pod:"calico-apiserver-6f7fbdcc55-tw5d8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.79.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali535cdc8ec33", MAC:"6a:d1:a7:d0:84:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 06:56:32.244648 containerd[1582]: 2025-09-10 06:56:32.231 [INFO][4429] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="882bf226c919bb539875961c2fbf6e9c74f396fc477c4e654cbb29bebfa004ef" Namespace="calico-apiserver" Pod="calico-apiserver-6f7fbdcc55-tw5d8" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-calico--apiserver--6f7fbdcc55--tw5d8-eth0" Sep 10 06:56:32.345979 containerd[1582]: time="2025-09-10T06:56:32.345626960Z" level=info msg="connecting to shim 882bf226c919bb539875961c2fbf6e9c74f396fc477c4e654cbb29bebfa004ef" address="unix:///run/containerd/s/53eabfddca5e849bb145a849e0d1e3ac0e2fff48aec799bd2bc4050a3e1192df" namespace=k8s.io protocol=ttrpc version=3 Sep 10 06:56:32.428400 systemd[1]: Started cri-containerd-882bf226c919bb539875961c2fbf6e9c74f396fc477c4e654cbb29bebfa004ef.scope - libcontainer container 882bf226c919bb539875961c2fbf6e9c74f396fc477c4e654cbb29bebfa004ef. Sep 10 06:56:32.466036 systemd[1]: Started cri-containerd-159fe341040ab3a1f764f85e768ae59773be67b332020f83550d46006818ddfc.scope - libcontainer container 159fe341040ab3a1f764f85e768ae59773be67b332020f83550d46006818ddfc. 
Sep 10 06:56:32.522004 containerd[1582]: time="2025-09-10T06:56:32.519940112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cqtk9,Uid:1c0c2c0d-f637-47eb-bc16-5e32512b9b12,Namespace:calico-system,Attempt:0,} returns sandbox id \"9318b59df6914a69aec61a10f5a0634d8551423c1342d72e949178d8ab597f00\"" Sep 10 06:56:32.671501 kubelet[2913]: I0910 06:56:32.661303 2913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dx6ll" podStartSLOduration=51.629456073 podStartE2EDuration="51.629456073s" podCreationTimestamp="2025-09-10 06:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 06:56:32.62867471 +0000 UTC m=+57.162728123" watchObservedRunningTime="2025-09-10 06:56:32.629456073 +0000 UTC m=+57.163509478" Sep 10 06:56:33.018775 systemd-networkd[1510]: cali6b402aadcb3: Gained IPv6LL Sep 10 06:56:33.035624 containerd[1582]: time="2025-09-10T06:56:33.034137712Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d1244ea01d514cae452433027cc3680b021b77d3156d4dd8d0dd109af62f223\" id:\"b8f3d9f15e610728ff8c676a851cb259e1e1409f234b7415a66bb35cea219fb7\" pid:4256 exit_status:1 exited_at:{seconds:1757487393 nanos:25447524}" Sep 10 06:56:33.171030 containerd[1582]: time="2025-09-10T06:56:33.167851599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7fbdcc55-nqblr,Uid:0b31fdb8-1f9c-4766-89d1-d243e0a8ca9b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"159fe341040ab3a1f764f85e768ae59773be67b332020f83550d46006818ddfc\"" Sep 10 06:56:33.227473 containerd[1582]: time="2025-09-10T06:56:33.226929988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f7fbdcc55-tw5d8,Uid:b5cb7d1c-6f0d-4131-b03d-fe53b52de241,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"882bf226c919bb539875961c2fbf6e9c74f396fc477c4e654cbb29bebfa004ef\"" Sep 10 06:56:33.401696 systemd-networkd[1510]: cali95ba3c749bb: Gained IPv6LL Sep 10 06:56:33.785478 systemd-networkd[1510]: cali535cdc8ec33: Gained IPv6LL Sep 10 06:56:34.225739 systemd-networkd[1510]: vxlan.calico: Link UP Sep 10 06:56:34.225755 systemd-networkd[1510]: vxlan.calico: Gained carrier Sep 10 06:56:35.089638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3228124959.mount: Deactivated successfully. 
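The kubelet line above reports coredns-668d6bf9bc-dx6ll as running 51.629456073s after its creation timestamp; both pulling timestamps are the zero time (typically meaning no image pull was needed), so podStartSLOduration equals the end-to-end duration. The figure can be reproduced directly from the two timestamps in the message:

package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	// Go's default time.String() layout without the fractional part;
	// time.Parse still accepts fractional seconds present in the input.
	const layout = "2006-01-02 15:04:05 -0700 MST"

	// Timestamps copied from the pod_startup_latency_tracker line above.
	created, err := time.Parse(layout, "2025-09-10 06:55:41 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	running, err := time.Parse(layout, "2025-09-10 06:56:32.629456073 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}

	// Prints 51.629456073s, matching podStartE2EDuration in the log.
	fmt.Println(running.Sub(created))
}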
Sep 10 06:56:35.257861 systemd-networkd[1510]: vxlan.calico: Gained IPv6LL Sep 10 06:56:36.202884 containerd[1582]: time="2025-09-10T06:56:36.202820510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:36.214498 containerd[1582]: time="2025-09-10T06:56:36.204934528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 10 06:56:36.214498 containerd[1582]: time="2025-09-10T06:56:36.205706506Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:36.215752 containerd[1582]: time="2025-09-10T06:56:36.215676964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:36.216834 containerd[1582]: time="2025-09-10T06:56:36.216796298Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 6.347526204s" Sep 10 06:56:36.216926 containerd[1582]: time="2025-09-10T06:56:36.216845399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 10 06:56:36.219936 containerd[1582]: time="2025-09-10T06:56:36.219660972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 10 06:56:36.226439 containerd[1582]: time="2025-09-10T06:56:36.226344362Z" level=info msg="CreateContainer within sandbox \"5db296425b5f271db778925fa49f27622b4b21e06775ff7b639f24acd1345f9e\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 10 06:56:36.252433 containerd[1582]: time="2025-09-10T06:56:36.252183306Z" level=info msg="Container 08de627ceeb23f4263d970cb9d971aeb2b22c0ab9a8322ce3d0d31988741b2ed: CDI devices from CRI Config.CDIDevices: []" Sep 10 06:56:36.266479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1217642624.mount: Deactivated successfully. Sep 10 06:56:36.275461 containerd[1582]: time="2025-09-10T06:56:36.275399004Z" level=info msg="CreateContainer within sandbox \"5db296425b5f271db778925fa49f27622b4b21e06775ff7b639f24acd1345f9e\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"08de627ceeb23f4263d970cb9d971aeb2b22c0ab9a8322ce3d0d31988741b2ed\"" Sep 10 06:56:36.276999 containerd[1582]: time="2025-09-10T06:56:36.276938048Z" level=info msg="StartContainer for \"08de627ceeb23f4263d970cb9d971aeb2b22c0ab9a8322ce3d0d31988741b2ed\"" Sep 10 06:56:36.281161 containerd[1582]: time="2025-09-10T06:56:36.281121931Z" level=info msg="connecting to shim 08de627ceeb23f4263d970cb9d971aeb2b22c0ab9a8322ce3d0d31988741b2ed" address="unix:///run/containerd/s/79ed34fa78aad084f8c29457c0cbbcfaed0a409a6387457c9544b378d4e85f39" protocol=ttrpc version=3 Sep 10 06:56:36.343504 systemd[1]: Started cri-containerd-08de627ceeb23f4263d970cb9d971aeb2b22c0ab9a8322ce3d0d31988741b2ed.scope - libcontainer container 08de627ceeb23f4263d970cb9d971aeb2b22c0ab9a8322ce3d0d31988741b2ed. 
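The goldmane pull above reports 66,357,526 bytes read for an image that finished pulling in 6.347526204s, roughly 10 MB/s; the same two fields give a quick effective-throughput figure when comparing registry performance across nodes. A trivial sketch of that computation:

package main

import "fmt"

func main() {
	// Values from the "stop pulling image" and "Pulled image ... in 6.347526204s" lines above.
	const bytesRead = 66357526      // ghcr.io/flatcar/calico/goldmane:v3.30.3
	const pullSeconds = 6.347526204 // reported pull duration

	rate := float64(bytesRead) / pullSeconds
	fmt.Printf("effective pull rate: %.1f MB/s (%.1f MiB/s)\n",
		rate/1e6, rate/(1<<20))
}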
Sep 10 06:56:36.462070 containerd[1582]: time="2025-09-10T06:56:36.460576958Z" level=info msg="StartContainer for \"08de627ceeb23f4263d970cb9d971aeb2b22c0ab9a8322ce3d0d31988741b2ed\" returns successfully" Sep 10 06:56:36.782090 kubelet[2913]: I0910 06:56:36.782009 2913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-scp8z" podStartSLOduration=55.779297584 podStartE2EDuration="55.779297584s" podCreationTimestamp="2025-09-10 06:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 06:56:32.854251978 +0000 UTC m=+57.388305392" watchObservedRunningTime="2025-09-10 06:56:36.779297584 +0000 UTC m=+61.313350995" Sep 10 06:56:36.793444 kubelet[2913]: I0910 06:56:36.793008 2913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-b94rd" podStartSLOduration=29.440170614 podStartE2EDuration="35.792956921s" podCreationTimestamp="2025-09-10 06:56:01 +0000 UTC" firstStartedPulling="2025-09-10 06:56:29.865683361 +0000 UTC m=+54.399736757" lastFinishedPulling="2025-09-10 06:56:36.21846966 +0000 UTC m=+60.752523064" observedRunningTime="2025-09-10 06:56:36.778651051 +0000 UTC m=+61.312704479" watchObservedRunningTime="2025-09-10 06:56:36.792956921 +0000 UTC m=+61.327010323" Sep 10 06:56:36.918434 containerd[1582]: time="2025-09-10T06:56:36.918366256Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08de627ceeb23f4263d970cb9d971aeb2b22c0ab9a8322ce3d0d31988741b2ed\" id:\"062be4639799b29538f14a1ba19dde25835bec80fecf366f2bba8740c7d7999d\" pid:4918 exit_status:1 exited_at:{seconds:1757487396 nanos:917658848}" Sep 10 06:56:37.941023 containerd[1582]: time="2025-09-10T06:56:37.940821709Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08de627ceeb23f4263d970cb9d971aeb2b22c0ab9a8322ce3d0d31988741b2ed\" id:\"00c13af28d209007f017338230b2c0afbb03b2153cb62803fecb9da2f899434c\" pid:4942 exit_status:1 exited_at:{seconds:1757487397 nanos:939497420}" Sep 10 06:56:38.026361 containerd[1582]: time="2025-09-10T06:56:38.026299029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:38.028107 containerd[1582]: time="2025-09-10T06:56:38.028075043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 10 06:56:38.029637 containerd[1582]: time="2025-09-10T06:56:38.029603668Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:38.034216 containerd[1582]: time="2025-09-10T06:56:38.033495343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:38.034651 containerd[1582]: time="2025-09-10T06:56:38.034270040Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.814542221s" Sep 10 06:56:38.034651 containerd[1582]: 
time="2025-09-10T06:56:38.034312452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 10 06:56:38.044654 containerd[1582]: time="2025-09-10T06:56:38.044415546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 10 06:56:38.047169 containerd[1582]: time="2025-09-10T06:56:38.047117216Z" level=info msg="CreateContainer within sandbox \"8a7acaf483a02b09406d9c2ecbe43dee7da3538ced619672fcc2535258edd2e7\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 10 06:56:38.064557 containerd[1582]: time="2025-09-10T06:56:38.063486678Z" level=info msg="Container a7c72a3752cff1e77a690fd480c9a2695a313198f2b59b6042cf642d7a8a1ed6: CDI devices from CRI Config.CDIDevices: []" Sep 10 06:56:38.072868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2209498741.mount: Deactivated successfully. Sep 10 06:56:38.077883 containerd[1582]: time="2025-09-10T06:56:38.077841562Z" level=info msg="CreateContainer within sandbox \"8a7acaf483a02b09406d9c2ecbe43dee7da3538ced619672fcc2535258edd2e7\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a7c72a3752cff1e77a690fd480c9a2695a313198f2b59b6042cf642d7a8a1ed6\"" Sep 10 06:56:38.079551 containerd[1582]: time="2025-09-10T06:56:38.079356374Z" level=info msg="StartContainer for \"a7c72a3752cff1e77a690fd480c9a2695a313198f2b59b6042cf642d7a8a1ed6\"" Sep 10 06:56:38.084037 containerd[1582]: time="2025-09-10T06:56:38.083999371Z" level=info msg="connecting to shim a7c72a3752cff1e77a690fd480c9a2695a313198f2b59b6042cf642d7a8a1ed6" address="unix:///run/containerd/s/17b13c2084359db857ee1a78a60b6da527b4c4daf786d09a509720915ca895a7" protocol=ttrpc version=3 Sep 10 06:56:38.120474 systemd[1]: Started cri-containerd-a7c72a3752cff1e77a690fd480c9a2695a313198f2b59b6042cf642d7a8a1ed6.scope - libcontainer container a7c72a3752cff1e77a690fd480c9a2695a313198f2b59b6042cf642d7a8a1ed6. 
Sep 10 06:56:38.207875 containerd[1582]: time="2025-09-10T06:56:38.207728878Z" level=info msg="StartContainer for \"a7c72a3752cff1e77a690fd480c9a2695a313198f2b59b6042cf642d7a8a1ed6\" returns successfully" Sep 10 06:56:38.882856 containerd[1582]: time="2025-09-10T06:56:38.882679865Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08de627ceeb23f4263d970cb9d971aeb2b22c0ab9a8322ce3d0d31988741b2ed\" id:\"f69a727c655103533b3b51656e8b4ecf94181bc5994b26e067f588c1bdad3ded\" pid:4999 exit_status:1 exited_at:{seconds:1757487398 nanos:882354061}" Sep 10 06:56:40.084145 containerd[1582]: time="2025-09-10T06:56:40.084079845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:40.085368 containerd[1582]: time="2025-09-10T06:56:40.084996721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 10 06:56:40.086151 containerd[1582]: time="2025-09-10T06:56:40.086113625Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:40.089109 containerd[1582]: time="2025-09-10T06:56:40.088994632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:40.090200 containerd[1582]: time="2025-09-10T06:56:40.090116828Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.045652452s" Sep 10 06:56:40.090671 containerd[1582]: time="2025-09-10T06:56:40.090382005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 10 06:56:40.092019 containerd[1582]: time="2025-09-10T06:56:40.091895846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 10 06:56:40.094080 containerd[1582]: time="2025-09-10T06:56:40.094027911Z" level=info msg="CreateContainer within sandbox \"9318b59df6914a69aec61a10f5a0634d8551423c1342d72e949178d8ab597f00\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 10 06:56:40.111510 containerd[1582]: time="2025-09-10T06:56:40.111438110Z" level=info msg="Container 10b98ca8927508b79830d4f97ebebd212d6678c57809f062554ee5fec7dcc0cf: CDI devices from CRI Config.CDIDevices: []" Sep 10 06:56:40.129605 containerd[1582]: time="2025-09-10T06:56:40.129538382Z" level=info msg="CreateContainer within sandbox \"9318b59df6914a69aec61a10f5a0634d8551423c1342d72e949178d8ab597f00\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"10b98ca8927508b79830d4f97ebebd212d6678c57809f062554ee5fec7dcc0cf\"" Sep 10 06:56:40.130811 containerd[1582]: time="2025-09-10T06:56:40.130769668Z" level=info msg="StartContainer for \"10b98ca8927508b79830d4f97ebebd212d6678c57809f062554ee5fec7dcc0cf\"" Sep 10 06:56:40.132634 containerd[1582]: time="2025-09-10T06:56:40.132565518Z" level=info msg="connecting to shim 10b98ca8927508b79830d4f97ebebd212d6678c57809f062554ee5fec7dcc0cf" 
address="unix:///run/containerd/s/fbbf927e4e46adb6f9794fa08effac9ae677f9324eb6a583fe998b38fa7d7714" protocol=ttrpc version=3 Sep 10 06:56:40.180481 systemd[1]: Started cri-containerd-10b98ca8927508b79830d4f97ebebd212d6678c57809f062554ee5fec7dcc0cf.scope - libcontainer container 10b98ca8927508b79830d4f97ebebd212d6678c57809f062554ee5fec7dcc0cf. Sep 10 06:56:40.256784 containerd[1582]: time="2025-09-10T06:56:40.256708954Z" level=info msg="StartContainer for \"10b98ca8927508b79830d4f97ebebd212d6678c57809f062554ee5fec7dcc0cf\" returns successfully" Sep 10 06:56:41.741368 containerd[1582]: time="2025-09-10T06:56:41.741257547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5986898b57-mfkdt,Uid:2ccd7dc8-b3cc-4530-a40f-ef06ec4a8e86,Namespace:calico-system,Attempt:0,}" Sep 10 06:56:42.197806 systemd-networkd[1510]: cali99c0b6d64be: Link UP Sep 10 06:56:42.200970 systemd-networkd[1510]: cali99c0b6d64be: Gained carrier Sep 10 06:56:42.234698 containerd[1582]: 2025-09-10 06:56:42.034 [INFO][5052] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--fpwqg.gb1.brightbox.com-k8s-calico--kube--controllers--5986898b57--mfkdt-eth0 calico-kube-controllers-5986898b57- calico-system 2ccd7dc8-b3cc-4530-a40f-ef06ec4a8e86 856 0 2025-09-10 06:56:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5986898b57 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-fpwqg.gb1.brightbox.com calico-kube-controllers-5986898b57-mfkdt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali99c0b6d64be [] [] }} ContainerID="641886843f52a9f7d4d804969262396d18b106be1cbda237c152d160ec70137e" Namespace="calico-system" Pod="calico-kube-controllers-5986898b57-mfkdt" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-calico--kube--controllers--5986898b57--mfkdt-" Sep 10 06:56:42.234698 containerd[1582]: 2025-09-10 06:56:42.035 [INFO][5052] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="641886843f52a9f7d4d804969262396d18b106be1cbda237c152d160ec70137e" Namespace="calico-system" Pod="calico-kube-controllers-5986898b57-mfkdt" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-calico--kube--controllers--5986898b57--mfkdt-eth0" Sep 10 06:56:42.234698 containerd[1582]: 2025-09-10 06:56:42.111 [INFO][5063] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="641886843f52a9f7d4d804969262396d18b106be1cbda237c152d160ec70137e" HandleID="k8s-pod-network.641886843f52a9f7d4d804969262396d18b106be1cbda237c152d160ec70137e" Workload="srv--fpwqg.gb1.brightbox.com-k8s-calico--kube--controllers--5986898b57--mfkdt-eth0" Sep 10 06:56:42.234698 containerd[1582]: 2025-09-10 06:56:42.111 [INFO][5063] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="641886843f52a9f7d4d804969262396d18b106be1cbda237c152d160ec70137e" HandleID="k8s-pod-network.641886843f52a9f7d4d804969262396d18b106be1cbda237c152d160ec70137e" Workload="srv--fpwqg.gb1.brightbox.com-k8s-calico--kube--controllers--5986898b57--mfkdt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f9b0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-fpwqg.gb1.brightbox.com", "pod":"calico-kube-controllers-5986898b57-mfkdt", "timestamp":"2025-09-10 06:56:42.11108983 +0000 UTC"}, 
Hostname:"srv-fpwqg.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 06:56:42.234698 containerd[1582]: 2025-09-10 06:56:42.111 [INFO][5063] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 06:56:42.234698 containerd[1582]: 2025-09-10 06:56:42.111 [INFO][5063] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 06:56:42.234698 containerd[1582]: 2025-09-10 06:56:42.111 [INFO][5063] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-fpwqg.gb1.brightbox.com' Sep 10 06:56:42.234698 containerd[1582]: 2025-09-10 06:56:42.125 [INFO][5063] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.641886843f52a9f7d4d804969262396d18b106be1cbda237c152d160ec70137e" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:42.234698 containerd[1582]: 2025-09-10 06:56:42.133 [INFO][5063] ipam/ipam.go 394: Looking up existing affinities for host host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:42.234698 containerd[1582]: 2025-09-10 06:56:42.140 [INFO][5063] ipam/ipam.go 511: Trying affinity for 192.168.79.192/26 host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:42.234698 containerd[1582]: 2025-09-10 06:56:42.142 [INFO][5063] ipam/ipam.go 158: Attempting to load block cidr=192.168.79.192/26 host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:42.234698 containerd[1582]: 2025-09-10 06:56:42.147 [INFO][5063] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.79.192/26 host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:42.234698 containerd[1582]: 2025-09-10 06:56:42.148 [INFO][5063] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.79.192/26 handle="k8s-pod-network.641886843f52a9f7d4d804969262396d18b106be1cbda237c152d160ec70137e" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:42.234698 containerd[1582]: 2025-09-10 06:56:42.151 [INFO][5063] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.641886843f52a9f7d4d804969262396d18b106be1cbda237c152d160ec70137e Sep 10 06:56:42.234698 containerd[1582]: 2025-09-10 06:56:42.160 [INFO][5063] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.79.192/26 handle="k8s-pod-network.641886843f52a9f7d4d804969262396d18b106be1cbda237c152d160ec70137e" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:42.234698 containerd[1582]: 2025-09-10 06:56:42.174 [INFO][5063] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.79.200/26] block=192.168.79.192/26 handle="k8s-pod-network.641886843f52a9f7d4d804969262396d18b106be1cbda237c152d160ec70137e" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:42.234698 containerd[1582]: 2025-09-10 06:56:42.174 [INFO][5063] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.79.200/26] handle="k8s-pod-network.641886843f52a9f7d4d804969262396d18b106be1cbda237c152d160ec70137e" host="srv-fpwqg.gb1.brightbox.com" Sep 10 06:56:42.234698 containerd[1582]: 2025-09-10 06:56:42.174 [INFO][5063] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 06:56:42.234698 containerd[1582]: 2025-09-10 06:56:42.174 [INFO][5063] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.79.200/26] IPv6=[] ContainerID="641886843f52a9f7d4d804969262396d18b106be1cbda237c152d160ec70137e" HandleID="k8s-pod-network.641886843f52a9f7d4d804969262396d18b106be1cbda237c152d160ec70137e" Workload="srv--fpwqg.gb1.brightbox.com-k8s-calico--kube--controllers--5986898b57--mfkdt-eth0" Sep 10 06:56:42.245538 containerd[1582]: 2025-09-10 06:56:42.190 [INFO][5052] cni-plugin/k8s.go 418: Populated endpoint ContainerID="641886843f52a9f7d4d804969262396d18b106be1cbda237c152d160ec70137e" Namespace="calico-system" Pod="calico-kube-controllers-5986898b57-mfkdt" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-calico--kube--controllers--5986898b57--mfkdt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--fpwqg.gb1.brightbox.com-k8s-calico--kube--controllers--5986898b57--mfkdt-eth0", GenerateName:"calico-kube-controllers-5986898b57-", Namespace:"calico-system", SelfLink:"", UID:"2ccd7dc8-b3cc-4530-a40f-ef06ec4a8e86", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 6, 56, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5986898b57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-fpwqg.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-5986898b57-mfkdt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.79.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali99c0b6d64be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 06:56:42.245538 containerd[1582]: 2025-09-10 06:56:42.190 [INFO][5052] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.79.200/32] ContainerID="641886843f52a9f7d4d804969262396d18b106be1cbda237c152d160ec70137e" Namespace="calico-system" Pod="calico-kube-controllers-5986898b57-mfkdt" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-calico--kube--controllers--5986898b57--mfkdt-eth0" Sep 10 06:56:42.245538 containerd[1582]: 2025-09-10 06:56:42.191 [INFO][5052] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali99c0b6d64be ContainerID="641886843f52a9f7d4d804969262396d18b106be1cbda237c152d160ec70137e" Namespace="calico-system" Pod="calico-kube-controllers-5986898b57-mfkdt" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-calico--kube--controllers--5986898b57--mfkdt-eth0" Sep 10 06:56:42.245538 containerd[1582]: 2025-09-10 06:56:42.202 [INFO][5052] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="641886843f52a9f7d4d804969262396d18b106be1cbda237c152d160ec70137e" Namespace="calico-system" Pod="calico-kube-controllers-5986898b57-mfkdt" 
WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-calico--kube--controllers--5986898b57--mfkdt-eth0" Sep 10 06:56:42.245538 containerd[1582]: 2025-09-10 06:56:42.203 [INFO][5052] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="641886843f52a9f7d4d804969262396d18b106be1cbda237c152d160ec70137e" Namespace="calico-system" Pod="calico-kube-controllers-5986898b57-mfkdt" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-calico--kube--controllers--5986898b57--mfkdt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--fpwqg.gb1.brightbox.com-k8s-calico--kube--controllers--5986898b57--mfkdt-eth0", GenerateName:"calico-kube-controllers-5986898b57-", Namespace:"calico-system", SelfLink:"", UID:"2ccd7dc8-b3cc-4530-a40f-ef06ec4a8e86", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 6, 56, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5986898b57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-fpwqg.gb1.brightbox.com", ContainerID:"641886843f52a9f7d4d804969262396d18b106be1cbda237c152d160ec70137e", Pod:"calico-kube-controllers-5986898b57-mfkdt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.79.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali99c0b6d64be", MAC:"02:d7:77:5b:9c:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 06:56:42.245538 containerd[1582]: 2025-09-10 06:56:42.231 [INFO][5052] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="641886843f52a9f7d4d804969262396d18b106be1cbda237c152d160ec70137e" Namespace="calico-system" Pod="calico-kube-controllers-5986898b57-mfkdt" WorkloadEndpoint="srv--fpwqg.gb1.brightbox.com-k8s-calico--kube--controllers--5986898b57--mfkdt-eth0" Sep 10 06:56:42.459607 containerd[1582]: time="2025-09-10T06:56:42.459269230Z" level=info msg="connecting to shim 641886843f52a9f7d4d804969262396d18b106be1cbda237c152d160ec70137e" address="unix:///run/containerd/s/9bbc20bfe376ccec67e6e3e2c360b99a746f2b96b03a94b82765a8c4922277a4" namespace=k8s.io protocol=ttrpc version=3 Sep 10 06:56:42.541456 systemd[1]: Started cri-containerd-641886843f52a9f7d4d804969262396d18b106be1cbda237c152d160ec70137e.scope - libcontainer container 641886843f52a9f7d4d804969262396d18b106be1cbda237c152d160ec70137e. 
Sep 10 06:56:42.661830 containerd[1582]: time="2025-09-10T06:56:42.661721712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5986898b57-mfkdt,Uid:2ccd7dc8-b3cc-4530-a40f-ef06ec4a8e86,Namespace:calico-system,Attempt:0,} returns sandbox id \"641886843f52a9f7d4d804969262396d18b106be1cbda237c152d160ec70137e\"" Sep 10 06:56:43.513429 systemd-networkd[1510]: cali99c0b6d64be: Gained IPv6LL Sep 10 06:56:45.554265 containerd[1582]: time="2025-09-10T06:56:45.553501420Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:45.556417 containerd[1582]: time="2025-09-10T06:56:45.556370152Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 10 06:56:45.557904 containerd[1582]: time="2025-09-10T06:56:45.557784562Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:45.562020 containerd[1582]: time="2025-09-10T06:56:45.561179403Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:45.563374 containerd[1582]: time="2025-09-10T06:56:45.561927942Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 5.469908269s" Sep 10 06:56:45.563374 containerd[1582]: time="2025-09-10T06:56:45.562509705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 10 06:56:45.566376 containerd[1582]: time="2025-09-10T06:56:45.566277876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 10 06:56:45.568640 containerd[1582]: time="2025-09-10T06:56:45.568154784Z" level=info msg="CreateContainer within sandbox \"159fe341040ab3a1f764f85e768ae59773be67b332020f83550d46006818ddfc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 10 06:56:45.595911 containerd[1582]: time="2025-09-10T06:56:45.595852136Z" level=info msg="Container 01a44f27dfb83de7ec1cd505a643b8a6f80b6bcef431dcb5559a4104c01e9200: CDI devices from CRI Config.CDIDevices: []" Sep 10 06:56:45.606131 containerd[1582]: time="2025-09-10T06:56:45.606052528Z" level=info msg="CreateContainer within sandbox \"159fe341040ab3a1f764f85e768ae59773be67b332020f83550d46006818ddfc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"01a44f27dfb83de7ec1cd505a643b8a6f80b6bcef431dcb5559a4104c01e9200\"" Sep 10 06:56:45.607435 containerd[1582]: time="2025-09-10T06:56:45.607388796Z" level=info msg="StartContainer for \"01a44f27dfb83de7ec1cd505a643b8a6f80b6bcef431dcb5559a4104c01e9200\"" Sep 10 06:56:45.612748 containerd[1582]: time="2025-09-10T06:56:45.612643406Z" level=info msg="connecting to shim 01a44f27dfb83de7ec1cd505a643b8a6f80b6bcef431dcb5559a4104c01e9200" address="unix:///run/containerd/s/c5da5ab67c6da680267ea09714c993f934f0453b22c16bf4f8a144e996dadc01" protocol=ttrpc version=3 Sep 
10 06:56:45.659573 systemd[1]: Started cri-containerd-01a44f27dfb83de7ec1cd505a643b8a6f80b6bcef431dcb5559a4104c01e9200.scope - libcontainer container 01a44f27dfb83de7ec1cd505a643b8a6f80b6bcef431dcb5559a4104c01e9200. Sep 10 06:56:45.784496 containerd[1582]: time="2025-09-10T06:56:45.784440914Z" level=info msg="StartContainer for \"01a44f27dfb83de7ec1cd505a643b8a6f80b6bcef431dcb5559a4104c01e9200\" returns successfully" Sep 10 06:56:45.993257 kubelet[2913]: I0910 06:56:45.992914 2913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6f7fbdcc55-nqblr" podStartSLOduration=38.604035131 podStartE2EDuration="50.99286565s" podCreationTimestamp="2025-09-10 06:55:55 +0000 UTC" firstStartedPulling="2025-09-10 06:56:33.175311583 +0000 UTC m=+57.709364980" lastFinishedPulling="2025-09-10 06:56:45.564142102 +0000 UTC m=+70.098195499" observedRunningTime="2025-09-10 06:56:45.989920286 +0000 UTC m=+70.523973702" watchObservedRunningTime="2025-09-10 06:56:45.99286565 +0000 UTC m=+70.526919052" Sep 10 06:56:46.153258 containerd[1582]: time="2025-09-10T06:56:46.152785873Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:46.154769 containerd[1582]: time="2025-09-10T06:56:46.154430437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 10 06:56:46.158042 containerd[1582]: time="2025-09-10T06:56:46.157985598Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 591.6477ms" Sep 10 06:56:46.158042 containerd[1582]: time="2025-09-10T06:56:46.158032893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 10 06:56:46.159965 containerd[1582]: time="2025-09-10T06:56:46.159395349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 10 06:56:46.163207 containerd[1582]: time="2025-09-10T06:56:46.163139635Z" level=info msg="CreateContainer within sandbox \"882bf226c919bb539875961c2fbf6e9c74f396fc477c4e654cbb29bebfa004ef\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 10 06:56:46.205656 containerd[1582]: time="2025-09-10T06:56:46.205453987Z" level=info msg="Container 723c9faf3969586f806c74e6a27e3baecfd646ecd7a7df1363a9bfecc7de313d: CDI devices from CRI Config.CDIDevices: []" Sep 10 06:56:46.220462 containerd[1582]: time="2025-09-10T06:56:46.220304204Z" level=info msg="CreateContainer within sandbox \"882bf226c919bb539875961c2fbf6e9c74f396fc477c4e654cbb29bebfa004ef\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"723c9faf3969586f806c74e6a27e3baecfd646ecd7a7df1363a9bfecc7de313d\"" Sep 10 06:56:46.221498 containerd[1582]: time="2025-09-10T06:56:46.221461544Z" level=info msg="StartContainer for \"723c9faf3969586f806c74e6a27e3baecfd646ecd7a7df1363a9bfecc7de313d\"" Sep 10 06:56:46.225057 containerd[1582]: time="2025-09-10T06:56:46.225020003Z" level=info msg="connecting to shim 723c9faf3969586f806c74e6a27e3baecfd646ecd7a7df1363a9bfecc7de313d" 
address="unix:///run/containerd/s/53eabfddca5e849bb145a849e0d1e3ac0e2fff48aec799bd2bc4050a3e1192df" protocol=ttrpc version=3 Sep 10 06:56:46.271452 systemd[1]: Started cri-containerd-723c9faf3969586f806c74e6a27e3baecfd646ecd7a7df1363a9bfecc7de313d.scope - libcontainer container 723c9faf3969586f806c74e6a27e3baecfd646ecd7a7df1363a9bfecc7de313d. Sep 10 06:56:46.400570 containerd[1582]: time="2025-09-10T06:56:46.400463732Z" level=info msg="StartContainer for \"723c9faf3969586f806c74e6a27e3baecfd646ecd7a7df1363a9bfecc7de313d\" returns successfully" Sep 10 06:56:47.916860 kubelet[2913]: I0910 06:56:47.916765 2913 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 06:56:47.933232 kubelet[2913]: I0910 06:56:47.932524 2913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6f7fbdcc55-tw5d8" podStartSLOduration=40.019362709 podStartE2EDuration="52.932482739s" podCreationTimestamp="2025-09-10 06:55:55 +0000 UTC" firstStartedPulling="2025-09-10 06:56:33.246041719 +0000 UTC m=+57.780095116" lastFinishedPulling="2025-09-10 06:56:46.159161742 +0000 UTC m=+70.693215146" observedRunningTime="2025-09-10 06:56:46.957536231 +0000 UTC m=+71.491589636" watchObservedRunningTime="2025-09-10 06:56:47.932482739 +0000 UTC m=+72.466536148" Sep 10 06:56:50.097590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1132115526.mount: Deactivated successfully. Sep 10 06:56:50.115419 containerd[1582]: time="2025-09-10T06:56:50.115284666Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:50.119215 containerd[1582]: time="2025-09-10T06:56:50.119151898Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 10 06:56:50.124663 containerd[1582]: time="2025-09-10T06:56:50.123171752Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:50.127979 containerd[1582]: time="2025-09-10T06:56:50.127943800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:50.130182 containerd[1582]: time="2025-09-10T06:56:50.130141232Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.970681215s" Sep 10 06:56:50.130297 containerd[1582]: time="2025-09-10T06:56:50.130207904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 10 06:56:50.132748 containerd[1582]: time="2025-09-10T06:56:50.132703606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 10 06:56:50.138299 containerd[1582]: time="2025-09-10T06:56:50.138161701Z" level=info msg="CreateContainer within sandbox \"8a7acaf483a02b09406d9c2ecbe43dee7da3538ced619672fcc2535258edd2e7\" for container 
&ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 10 06:56:50.167393 containerd[1582]: time="2025-09-10T06:56:50.167339067Z" level=info msg="Container 6a4da6aeffa077ac692e13469ff03707c31dbd52bfa15cc5965adceb8ce255da: CDI devices from CRI Config.CDIDevices: []" Sep 10 06:56:50.195790 containerd[1582]: time="2025-09-10T06:56:50.195739665Z" level=info msg="CreateContainer within sandbox \"8a7acaf483a02b09406d9c2ecbe43dee7da3538ced619672fcc2535258edd2e7\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"6a4da6aeffa077ac692e13469ff03707c31dbd52bfa15cc5965adceb8ce255da\"" Sep 10 06:56:50.207724 containerd[1582]: time="2025-09-10T06:56:50.207561322Z" level=info msg="StartContainer for \"6a4da6aeffa077ac692e13469ff03707c31dbd52bfa15cc5965adceb8ce255da\"" Sep 10 06:56:50.211875 containerd[1582]: time="2025-09-10T06:56:50.211393599Z" level=info msg="connecting to shim 6a4da6aeffa077ac692e13469ff03707c31dbd52bfa15cc5965adceb8ce255da" address="unix:///run/containerd/s/17b13c2084359db857ee1a78a60b6da527b4c4daf786d09a509720915ca895a7" protocol=ttrpc version=3 Sep 10 06:56:50.298455 systemd[1]: Started cri-containerd-6a4da6aeffa077ac692e13469ff03707c31dbd52bfa15cc5965adceb8ce255da.scope - libcontainer container 6a4da6aeffa077ac692e13469ff03707c31dbd52bfa15cc5965adceb8ce255da. Sep 10 06:56:50.393924 containerd[1582]: time="2025-09-10T06:56:50.393437742Z" level=info msg="StartContainer for \"6a4da6aeffa077ac692e13469ff03707c31dbd52bfa15cc5965adceb8ce255da\" returns successfully" Sep 10 06:56:50.944817 kubelet[2913]: I0910 06:56:50.944720 2913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6bb584cc4c-ptsbs" podStartSLOduration=3.188746052 podStartE2EDuration="21.944537158s" podCreationTimestamp="2025-09-10 06:56:29 +0000 UTC" firstStartedPulling="2025-09-10 06:56:31.376030002 +0000 UTC m=+55.910083398" lastFinishedPulling="2025-09-10 06:56:50.131821097 +0000 UTC m=+74.665874504" observedRunningTime="2025-09-10 06:56:50.943412361 +0000 UTC m=+75.477465775" watchObservedRunningTime="2025-09-10 06:56:50.944537158 +0000 UTC m=+75.478590567" Sep 10 06:56:51.982358 containerd[1582]: time="2025-09-10T06:56:51.982229036Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08de627ceeb23f4263d970cb9d971aeb2b22c0ab9a8322ce3d0d31988741b2ed\" id:\"6fb2cd00d4c6f5aa7f7ec701646e893927636817e152d87d04ddb1bf6c0cde96\" pid:5275 exited_at:{seconds:1757487411 nanos:955415060}" Sep 10 06:56:52.592941 containerd[1582]: time="2025-09-10T06:56:52.592772163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:52.595839 containerd[1582]: time="2025-09-10T06:56:52.595796184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 10 06:56:52.598681 containerd[1582]: time="2025-09-10T06:56:52.598575085Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:52.601220 containerd[1582]: time="2025-09-10T06:56:52.601129932Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:52.602356 containerd[1582]: 
time="2025-09-10T06:56:52.602169483Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.469425953s" Sep 10 06:56:52.602356 containerd[1582]: time="2025-09-10T06:56:52.602237536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 10 06:56:52.699704 containerd[1582]: time="2025-09-10T06:56:52.699487925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 10 06:56:52.716559 containerd[1582]: time="2025-09-10T06:56:52.715265538Z" level=info msg="CreateContainer within sandbox \"9318b59df6914a69aec61a10f5a0634d8551423c1342d72e949178d8ab597f00\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 10 06:56:52.767755 containerd[1582]: time="2025-09-10T06:56:52.767697263Z" level=info msg="Container f06174e2c8268c7b1bc2757ac70f05dd3d19b0110a848b13fe85576ddffc5308: CDI devices from CRI Config.CDIDevices: []" Sep 10 06:56:52.785853 containerd[1582]: time="2025-09-10T06:56:52.785734358Z" level=info msg="CreateContainer within sandbox \"9318b59df6914a69aec61a10f5a0634d8551423c1342d72e949178d8ab597f00\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f06174e2c8268c7b1bc2757ac70f05dd3d19b0110a848b13fe85576ddffc5308\"" Sep 10 06:56:52.786811 containerd[1582]: time="2025-09-10T06:56:52.786777392Z" level=info msg="StartContainer for \"f06174e2c8268c7b1bc2757ac70f05dd3d19b0110a848b13fe85576ddffc5308\"" Sep 10 06:56:52.790174 containerd[1582]: time="2025-09-10T06:56:52.790137212Z" level=info msg="connecting to shim f06174e2c8268c7b1bc2757ac70f05dd3d19b0110a848b13fe85576ddffc5308" address="unix:///run/containerd/s/fbbf927e4e46adb6f9794fa08effac9ae677f9324eb6a583fe998b38fa7d7714" protocol=ttrpc version=3 Sep 10 06:56:52.841708 systemd[1]: Started cri-containerd-f06174e2c8268c7b1bc2757ac70f05dd3d19b0110a848b13fe85576ddffc5308.scope - libcontainer container f06174e2c8268c7b1bc2757ac70f05dd3d19b0110a848b13fe85576ddffc5308. 
Sep 10 06:56:52.957293 containerd[1582]: time="2025-09-10T06:56:52.956957688Z" level=info msg="StartContainer for \"f06174e2c8268c7b1bc2757ac70f05dd3d19b0110a848b13fe85576ddffc5308\" returns successfully" Sep 10 06:56:54.035376 kubelet[2913]: I0910 06:56:54.035144 2913 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 10 06:56:54.038992 kubelet[2913]: I0910 06:56:54.035541 2913 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 10 06:56:59.200459 containerd[1582]: time="2025-09-10T06:56:59.200370414Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:59.203639 containerd[1582]: time="2025-09-10T06:56:59.203584114Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:59.204401 containerd[1582]: time="2025-09-10T06:56:59.204323722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 10 06:56:59.210841 containerd[1582]: time="2025-09-10T06:56:59.210778874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 06:56:59.213351 containerd[1582]: time="2025-09-10T06:56:59.213297450Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 6.513733903s" Sep 10 06:56:59.213439 containerd[1582]: time="2025-09-10T06:56:59.213354606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 10 06:56:59.359447 containerd[1582]: time="2025-09-10T06:56:59.358499858Z" level=info msg="CreateContainer within sandbox \"641886843f52a9f7d4d804969262396d18b106be1cbda237c152d160ec70137e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 10 06:56:59.416289 containerd[1582]: time="2025-09-10T06:56:59.414470894Z" level=info msg="Container f414b702af938f4daf2a03795b57359997c240c12bcb3af8494ab8862a19d423: CDI devices from CRI Config.CDIDevices: []" Sep 10 06:56:59.449763 containerd[1582]: time="2025-09-10T06:56:59.449704992Z" level=info msg="CreateContainer within sandbox \"641886843f52a9f7d4d804969262396d18b106be1cbda237c152d160ec70137e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f414b702af938f4daf2a03795b57359997c240c12bcb3af8494ab8862a19d423\"" Sep 10 06:56:59.453249 containerd[1582]: time="2025-09-10T06:56:59.452526914Z" level=info msg="StartContainer for \"f414b702af938f4daf2a03795b57359997c240c12bcb3af8494ab8862a19d423\"" Sep 10 06:56:59.476793 containerd[1582]: time="2025-09-10T06:56:59.476329184Z" level=info msg="connecting to shim 
f414b702af938f4daf2a03795b57359997c240c12bcb3af8494ab8862a19d423" address="unix:///run/containerd/s/9bbc20bfe376ccec67e6e3e2c360b99a746f2b96b03a94b82765a8c4922277a4" protocol=ttrpc version=3 Sep 10 06:56:59.604621 systemd[1]: Started cri-containerd-f414b702af938f4daf2a03795b57359997c240c12bcb3af8494ab8862a19d423.scope - libcontainer container f414b702af938f4daf2a03795b57359997c240c12bcb3af8494ab8862a19d423. Sep 10 06:56:59.728453 containerd[1582]: time="2025-09-10T06:56:59.727159164Z" level=info msg="StartContainer for \"f414b702af938f4daf2a03795b57359997c240c12bcb3af8494ab8862a19d423\" returns successfully" Sep 10 06:57:00.049210 kubelet[2913]: I0910 06:57:00.045886 2913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5986898b57-mfkdt" podStartSLOduration=41.488411036 podStartE2EDuration="58.040814686s" podCreationTimestamp="2025-09-10 06:56:02 +0000 UTC" firstStartedPulling="2025-09-10 06:56:42.663568733 +0000 UTC m=+67.197622122" lastFinishedPulling="2025-09-10 06:56:59.21597237 +0000 UTC m=+83.750025772" observedRunningTime="2025-09-10 06:57:00.03989596 +0000 UTC m=+84.573949369" watchObservedRunningTime="2025-09-10 06:57:00.040814686 +0000 UTC m=+84.574868098" Sep 10 06:57:00.051065 kubelet[2913]: I0910 06:57:00.050944 2913 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cqtk9" podStartSLOduration=38.975250596 podStartE2EDuration="59.050801884s" podCreationTimestamp="2025-09-10 06:56:01 +0000 UTC" firstStartedPulling="2025-09-10 06:56:32.612629146 +0000 UTC m=+57.146682541" lastFinishedPulling="2025-09-10 06:56:52.688180421 +0000 UTC m=+77.222233829" observedRunningTime="2025-09-10 06:56:54.015715644 +0000 UTC m=+78.549769055" watchObservedRunningTime="2025-09-10 06:57:00.050801884 +0000 UTC m=+84.584855304" Sep 10 06:57:00.259844 containerd[1582]: time="2025-09-10T06:57:00.259774816Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f414b702af938f4daf2a03795b57359997c240c12bcb3af8494ab8862a19d423\" id:\"1a73a7a613b50666b61014eb5bf6cdeaba22834e518a6f11343d25e3f0ea6877\" pid:5390 exited_at:{seconds:1757487420 nanos:229596919}" Sep 10 06:57:00.709605 containerd[1582]: time="2025-09-10T06:57:00.709307347Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d1244ea01d514cae452433027cc3680b021b77d3156d4dd8d0dd109af62f223\" id:\"eddd3498cb4fc0fc98884b5a75c29ff66ddfbf14ecdb38101f09da313d2eb6f0\" pid:5411 exited_at:{seconds:1757487420 nanos:708443246}" Sep 10 06:57:01.504231 kubelet[2913]: I0910 06:57:01.503785 2913 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 06:57:09.236765 containerd[1582]: time="2025-09-10T06:57:09.236701389Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08de627ceeb23f4263d970cb9d971aeb2b22c0ab9a8322ce3d0d31988741b2ed\" id:\"1bdc4c92f840bd34e7afb677fdc070af3a1750713ed524a356cc9e70b6b014c9\" pid:5450 exited_at:{seconds:1757487429 nanos:236334490}" Sep 10 06:57:09.244236 containerd[1582]: time="2025-09-10T06:57:09.244159485Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f414b702af938f4daf2a03795b57359997c240c12bcb3af8494ab8862a19d423\" id:\"f8b355cc8e64b0c3cfdcda79a2a26184dc7d726e74e0735407457b52a742576a\" pid:5472 exited_at:{seconds:1757487429 nanos:240782222}" Sep 10 06:57:12.145983 systemd[1]: Started sshd@10-10.244.28.170:22-196.251.118.184:60136.service - OpenSSH per-connection server daemon (196.251.118.184:60136). 
Sep 10 06:57:12.973706 sshd[5484]: kex_exchange_identification: read: Connection reset by peer Sep 10 06:57:12.973706 sshd[5484]: Connection reset by 196.251.118.184 port 60136 Sep 10 06:57:12.975058 systemd[1]: sshd@10-10.244.28.170:22-196.251.118.184:60136.service: Deactivated successfully. Sep 10 06:57:13.032283 systemd[1]: Started sshd@11-10.244.28.170:22-196.251.118.184:60150.service - OpenSSH per-connection server daemon (196.251.118.184:60150). Sep 10 06:57:16.357132 sshd[5491]: Invalid user toor from 196.251.118.184 port 60150 Sep 10 06:57:17.227000 sshd[5491]: Connection closed by invalid user toor 196.251.118.184 port 60150 [preauth] Sep 10 06:57:17.234012 systemd[1]: sshd@11-10.244.28.170:22-196.251.118.184:60150.service: Deactivated successfully. Sep 10 06:57:21.325651 systemd[1]: Started sshd@12-10.244.28.170:22-139.178.89.65:54046.service - OpenSSH per-connection server daemon (139.178.89.65:54046). Sep 10 06:57:22.264086 sshd[5506]: Accepted publickey for core from 139.178.89.65 port 54046 ssh2: RSA SHA256:kou9ixDn310EeDDEH1fKRbCILRbyVlsoqvVtj8L1aJA Sep 10 06:57:22.269062 sshd-session[5506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 06:57:22.306507 systemd-logind[1560]: New session 12 of user core. Sep 10 06:57:22.313840 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 10 06:57:23.599103 sshd[5510]: Connection closed by 139.178.89.65 port 54046 Sep 10 06:57:23.605447 sshd-session[5506]: pam_unix(sshd:session): session closed for user core Sep 10 06:57:23.626854 systemd[1]: sshd@12-10.244.28.170:22-139.178.89.65:54046.service: Deactivated successfully. Sep 10 06:57:23.631162 systemd[1]: session-12.scope: Deactivated successfully. Sep 10 06:57:23.644376 systemd-logind[1560]: Session 12 logged out. Waiting for processes to exit. Sep 10 06:57:23.646766 systemd-logind[1560]: Removed session 12. Sep 10 06:57:28.748510 systemd[1]: Started sshd@13-10.244.28.170:22-139.178.89.65:54060.service - OpenSSH per-connection server daemon (139.178.89.65:54060). Sep 10 06:57:29.756440 sshd[5523]: Accepted publickey for core from 139.178.89.65 port 54060 ssh2: RSA SHA256:kou9ixDn310EeDDEH1fKRbCILRbyVlsoqvVtj8L1aJA Sep 10 06:57:29.759281 sshd-session[5523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 06:57:29.773908 systemd-logind[1560]: New session 13 of user core. Sep 10 06:57:29.777527 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 10 06:57:30.329183 containerd[1582]: time="2025-09-10T06:57:30.329080925Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f414b702af938f4daf2a03795b57359997c240c12bcb3af8494ab8862a19d423\" id:\"5320de85df2a482296663bc373dfde225f02a606584c6ecd8f04e204b72ddbc7\" pid:5542 exited_at:{seconds:1757487450 nanos:238528035}" Sep 10 06:57:30.689548 sshd[5527]: Connection closed by 139.178.89.65 port 54060 Sep 10 06:57:30.691672 sshd-session[5523]: pam_unix(sshd:session): session closed for user core Sep 10 06:57:30.705321 systemd[1]: sshd@13-10.244.28.170:22-139.178.89.65:54060.service: Deactivated successfully. Sep 10 06:57:30.711598 systemd[1]: session-13.scope: Deactivated successfully. Sep 10 06:57:30.716180 systemd-logind[1560]: Session 13 logged out. Waiting for processes to exit. Sep 10 06:57:30.720841 systemd-logind[1560]: Removed session 13. 
Sep 10 06:57:30.781527 containerd[1582]: time="2025-09-10T06:57:30.781457735Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d1244ea01d514cae452433027cc3680b021b77d3156d4dd8d0dd109af62f223\" id:\"0ffa168b5785e97607bd9a88c0c1ce39bfa125ee7f7a865a403cd6a751b99cad\" pid:5560 exited_at:{seconds:1757487450 nanos:779478148}" Sep 10 06:57:35.850378 systemd[1]: Started sshd@14-10.244.28.170:22-139.178.89.65:37200.service - OpenSSH per-connection server daemon (139.178.89.65:37200). Sep 10 06:57:36.794177 sshd[5591]: Accepted publickey for core from 139.178.89.65 port 37200 ssh2: RSA SHA256:kou9ixDn310EeDDEH1fKRbCILRbyVlsoqvVtj8L1aJA Sep 10 06:57:36.798379 sshd-session[5591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 06:57:36.808519 systemd-logind[1560]: New session 14 of user core. Sep 10 06:57:36.816495 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 10 06:57:37.618950 sshd[5594]: Connection closed by 139.178.89.65 port 37200 Sep 10 06:57:37.622450 sshd-session[5591]: pam_unix(sshd:session): session closed for user core Sep 10 06:57:37.639604 systemd[1]: sshd@14-10.244.28.170:22-139.178.89.65:37200.service: Deactivated successfully. Sep 10 06:57:37.645321 systemd[1]: session-14.scope: Deactivated successfully. Sep 10 06:57:37.648071 systemd-logind[1560]: Session 14 logged out. Waiting for processes to exit. Sep 10 06:57:37.650257 systemd-logind[1560]: Removed session 14. Sep 10 06:57:37.781543 systemd[1]: Started sshd@15-10.244.28.170:22-139.178.89.65:37206.service - OpenSSH per-connection server daemon (139.178.89.65:37206). Sep 10 06:57:38.737787 sshd[5607]: Accepted publickey for core from 139.178.89.65 port 37206 ssh2: RSA SHA256:kou9ixDn310EeDDEH1fKRbCILRbyVlsoqvVtj8L1aJA Sep 10 06:57:38.741172 sshd-session[5607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 06:57:38.750016 systemd-logind[1560]: New session 15 of user core. Sep 10 06:57:38.759304 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 10 06:57:39.411171 containerd[1582]: time="2025-09-10T06:57:39.410894002Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08de627ceeb23f4263d970cb9d971aeb2b22c0ab9a8322ce3d0d31988741b2ed\" id:\"b6f69ea442707a1de0c60c883912d556b338a95ceed0687121284fb710d343d3\" pid:5624 exited_at:{seconds:1757487459 nanos:410004729}" Sep 10 06:57:39.674329 sshd[5611]: Connection closed by 139.178.89.65 port 37206 Sep 10 06:57:39.681568 sshd-session[5607]: pam_unix(sshd:session): session closed for user core Sep 10 06:57:39.703114 systemd[1]: sshd@15-10.244.28.170:22-139.178.89.65:37206.service: Deactivated successfully. Sep 10 06:57:39.703672 systemd-logind[1560]: Session 15 logged out. Waiting for processes to exit. Sep 10 06:57:39.708582 systemd[1]: session-15.scope: Deactivated successfully. Sep 10 06:57:39.715962 systemd-logind[1560]: Removed session 15. Sep 10 06:57:39.835684 systemd[1]: Started sshd@16-10.244.28.170:22-139.178.89.65:37214.service - OpenSSH per-connection server daemon (139.178.89.65:37214). Sep 10 06:57:40.800426 sshd[5643]: Accepted publickey for core from 139.178.89.65 port 37214 ssh2: RSA SHA256:kou9ixDn310EeDDEH1fKRbCILRbyVlsoqvVtj8L1aJA Sep 10 06:57:40.803158 sshd-session[5643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 06:57:40.822158 systemd-logind[1560]: New session 16 of user core. Sep 10 06:57:40.826527 systemd[1]: Started session-16.scope - Session 16 of User core. 
Sep 10 06:57:41.606930 sshd[5646]: Connection closed by 139.178.89.65 port 37214 Sep 10 06:57:41.608543 sshd-session[5643]: pam_unix(sshd:session): session closed for user core Sep 10 06:57:41.615373 systemd-logind[1560]: Session 16 logged out. Waiting for processes to exit. Sep 10 06:57:41.616303 systemd[1]: sshd@16-10.244.28.170:22-139.178.89.65:37214.service: Deactivated successfully. Sep 10 06:57:41.620370 systemd[1]: session-16.scope: Deactivated successfully. Sep 10 06:57:41.626127 systemd-logind[1560]: Removed session 16. Sep 10 06:57:46.765354 systemd[1]: Started sshd@17-10.244.28.170:22-139.178.89.65:58388.service - OpenSSH per-connection server daemon (139.178.89.65:58388). Sep 10 06:57:47.694925 sshd[5660]: Accepted publickey for core from 139.178.89.65 port 58388 ssh2: RSA SHA256:kou9ixDn310EeDDEH1fKRbCILRbyVlsoqvVtj8L1aJA Sep 10 06:57:47.700614 sshd-session[5660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 06:57:47.712579 systemd-logind[1560]: New session 17 of user core. Sep 10 06:57:47.724502 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 10 06:57:48.526956 sshd[5663]: Connection closed by 139.178.89.65 port 58388 Sep 10 06:57:48.528507 sshd-session[5660]: pam_unix(sshd:session): session closed for user core Sep 10 06:57:48.541332 systemd[1]: sshd@17-10.244.28.170:22-139.178.89.65:58388.service: Deactivated successfully. Sep 10 06:57:48.548122 systemd[1]: session-17.scope: Deactivated successfully. Sep 10 06:57:48.550780 systemd-logind[1560]: Session 17 logged out. Waiting for processes to exit. Sep 10 06:57:48.556139 systemd-logind[1560]: Removed session 17. Sep 10 06:57:51.881475 containerd[1582]: time="2025-09-10T06:57:51.881413604Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08de627ceeb23f4263d970cb9d971aeb2b22c0ab9a8322ce3d0d31988741b2ed\" id:\"25d5fe15de2dbc26d559517b78d9a442248c2a529cd7d9523f2789ec42adf78a\" pid:5691 exited_at:{seconds:1757487471 nanos:880502242}" Sep 10 06:57:53.678232 systemd[1]: Started sshd@18-10.244.28.170:22-139.178.89.65:59790.service - OpenSSH per-connection server daemon (139.178.89.65:59790). Sep 10 06:57:54.609015 sshd[5702]: Accepted publickey for core from 139.178.89.65 port 59790 ssh2: RSA SHA256:kou9ixDn310EeDDEH1fKRbCILRbyVlsoqvVtj8L1aJA Sep 10 06:57:54.611766 sshd-session[5702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 06:57:54.619555 systemd-logind[1560]: New session 18 of user core. Sep 10 06:57:54.627468 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 10 06:57:55.455131 sshd[5705]: Connection closed by 139.178.89.65 port 59790 Sep 10 06:57:55.457878 sshd-session[5702]: pam_unix(sshd:session): session closed for user core Sep 10 06:57:55.469527 systemd[1]: sshd@18-10.244.28.170:22-139.178.89.65:59790.service: Deactivated successfully. Sep 10 06:57:55.473072 systemd[1]: session-18.scope: Deactivated successfully. Sep 10 06:57:55.476641 systemd-logind[1560]: Session 18 logged out. Waiting for processes to exit. Sep 10 06:57:55.480044 systemd-logind[1560]: Removed session 18. 
Sep 10 06:58:00.129429 containerd[1582]: time="2025-09-10T06:58:00.129276078Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f414b702af938f4daf2a03795b57359997c240c12bcb3af8494ab8862a19d423\" id:\"4e2a5d1b1283d6f1d3499147e9386880c37e681b5ae72acfcb657704bbfe6f14\" pid:5734 exited_at:{seconds:1757487480 nanos:128472298}" Sep 10 06:58:00.532609 containerd[1582]: time="2025-09-10T06:58:00.532511437Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d1244ea01d514cae452433027cc3680b021b77d3156d4dd8d0dd109af62f223\" id:\"23937c4743d146c9f8c737736aab7f668cbd24184b202640ed8af6ef6520dcab\" pid:5755 exited_at:{seconds:1757487480 nanos:531531326}" Sep 10 06:58:00.619111 systemd[1]: Started sshd@19-10.244.28.170:22-139.178.89.65:48816.service - OpenSSH per-connection server daemon (139.178.89.65:48816). Sep 10 06:58:01.643220 sshd[5768]: Accepted publickey for core from 139.178.89.65 port 48816 ssh2: RSA SHA256:kou9ixDn310EeDDEH1fKRbCILRbyVlsoqvVtj8L1aJA Sep 10 06:58:01.654360 sshd-session[5768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 06:58:01.666940 systemd-logind[1560]: New session 19 of user core. Sep 10 06:58:01.676622 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 10 06:58:02.757411 sshd[5771]: Connection closed by 139.178.89.65 port 48816 Sep 10 06:58:02.759740 sshd-session[5768]: pam_unix(sshd:session): session closed for user core Sep 10 06:58:02.765840 systemd[1]: sshd@19-10.244.28.170:22-139.178.89.65:48816.service: Deactivated successfully. Sep 10 06:58:02.772667 systemd[1]: session-19.scope: Deactivated successfully. Sep 10 06:58:02.776270 systemd-logind[1560]: Session 19 logged out. Waiting for processes to exit. Sep 10 06:58:02.780844 systemd-logind[1560]: Removed session 19. Sep 10 06:58:02.921541 systemd[1]: Started sshd@20-10.244.28.170:22-139.178.89.65:48830.service - OpenSSH per-connection server daemon (139.178.89.65:48830). Sep 10 06:58:03.853219 sshd[5786]: Accepted publickey for core from 139.178.89.65 port 48830 ssh2: RSA SHA256:kou9ixDn310EeDDEH1fKRbCILRbyVlsoqvVtj8L1aJA Sep 10 06:58:03.855023 sshd-session[5786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 06:58:03.863737 systemd-logind[1560]: New session 20 of user core. Sep 10 06:58:03.868428 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 10 06:58:04.949548 sshd[5789]: Connection closed by 139.178.89.65 port 48830 Sep 10 06:58:04.953472 sshd-session[5786]: pam_unix(sshd:session): session closed for user core Sep 10 06:58:04.968539 systemd-logind[1560]: Session 20 logged out. Waiting for processes to exit. Sep 10 06:58:04.970068 systemd[1]: sshd@20-10.244.28.170:22-139.178.89.65:48830.service: Deactivated successfully. Sep 10 06:58:04.977810 systemd[1]: session-20.scope: Deactivated successfully. Sep 10 06:58:04.982873 systemd-logind[1560]: Removed session 20. Sep 10 06:58:05.103862 systemd[1]: Started sshd@21-10.244.28.170:22-139.178.89.65:48840.service - OpenSSH per-connection server daemon (139.178.89.65:48840). Sep 10 06:58:06.073217 sshd[5799]: Accepted publickey for core from 139.178.89.65 port 48840 ssh2: RSA SHA256:kou9ixDn310EeDDEH1fKRbCILRbyVlsoqvVtj8L1aJA Sep 10 06:58:06.076080 sshd-session[5799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 06:58:06.096081 systemd-logind[1560]: New session 21 of user core. Sep 10 06:58:06.104673 systemd[1]: Started session-21.scope - Session 21 of User core. 
Sep 10 06:58:07.934449 sshd[5803]: Connection closed by 139.178.89.65 port 48840 Sep 10 06:58:07.935601 sshd-session[5799]: pam_unix(sshd:session): session closed for user core Sep 10 06:58:07.951090 systemd[1]: sshd@21-10.244.28.170:22-139.178.89.65:48840.service: Deactivated successfully. Sep 10 06:58:07.958242 systemd[1]: session-21.scope: Deactivated successfully. Sep 10 06:58:07.959898 systemd-logind[1560]: Session 21 logged out. Waiting for processes to exit. Sep 10 06:58:07.965424 systemd-logind[1560]: Removed session 21. Sep 10 06:58:08.098796 systemd[1]: Started sshd@22-10.244.28.170:22-139.178.89.65:48852.service - OpenSSH per-connection server daemon (139.178.89.65:48852). Sep 10 06:58:09.067139 sshd[5839]: Accepted publickey for core from 139.178.89.65 port 48852 ssh2: RSA SHA256:kou9ixDn310EeDDEH1fKRbCILRbyVlsoqvVtj8L1aJA Sep 10 06:58:09.070625 sshd-session[5839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 06:58:09.081015 systemd-logind[1560]: New session 22 of user core. Sep 10 06:58:09.091305 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 10 06:58:09.194739 containerd[1582]: time="2025-09-10T06:58:09.194546024Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f414b702af938f4daf2a03795b57359997c240c12bcb3af8494ab8862a19d423\" id:\"aa3de253d7381c39b70fd6da6b00a52032f3c274261e7f93e4fe392e51100c8b\" pid:5872 exited_at:{seconds:1757487489 nanos:193999506}" Sep 10 06:58:09.282097 containerd[1582]: time="2025-09-10T06:58:09.281913554Z" level=info msg="TaskExit event in podsandbox handler container_id:\"08de627ceeb23f4263d970cb9d971aeb2b22c0ab9a8322ce3d0d31988741b2ed\" id:\"af5095e5885ba4f0db3738e17d485185d8ffb66538f4bf9f2034b41c36a5f6b2\" pid:5854 exited_at:{seconds:1757487489 nanos:281510009}" Sep 10 06:58:10.700474 sshd[5878]: Connection closed by 139.178.89.65 port 48852 Sep 10 06:58:10.715667 sshd-session[5839]: pam_unix(sshd:session): session closed for user core Sep 10 06:58:10.731800 systemd[1]: sshd@22-10.244.28.170:22-139.178.89.65:48852.service: Deactivated successfully. Sep 10 06:58:10.738397 systemd[1]: session-22.scope: Deactivated successfully. Sep 10 06:58:10.741356 systemd-logind[1560]: Session 22 logged out. Waiting for processes to exit. Sep 10 06:58:10.746389 systemd-logind[1560]: Removed session 22. Sep 10 06:58:10.866015 systemd[1]: Started sshd@23-10.244.28.170:22-139.178.89.65:55420.service - OpenSSH per-connection server daemon (139.178.89.65:55420). Sep 10 06:58:11.851333 sshd[5895]: Accepted publickey for core from 139.178.89.65 port 55420 ssh2: RSA SHA256:kou9ixDn310EeDDEH1fKRbCILRbyVlsoqvVtj8L1aJA Sep 10 06:58:11.855119 sshd-session[5895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 06:58:11.867344 systemd-logind[1560]: New session 23 of user core. Sep 10 06:58:11.874713 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 10 06:58:12.700392 sshd[5898]: Connection closed by 139.178.89.65 port 55420 Sep 10 06:58:12.703118 sshd-session[5895]: pam_unix(sshd:session): session closed for user core Sep 10 06:58:12.714372 systemd-logind[1560]: Session 23 logged out. Waiting for processes to exit. Sep 10 06:58:12.716167 systemd[1]: sshd@23-10.244.28.170:22-139.178.89.65:55420.service: Deactivated successfully. Sep 10 06:58:12.722742 systemd[1]: session-23.scope: Deactivated successfully. Sep 10 06:58:12.729517 systemd-logind[1560]: Removed session 23. 
Sep 10 06:58:17.866247 systemd[1]: Started sshd@24-10.244.28.170:22-139.178.89.65:55428.service - OpenSSH per-connection server daemon (139.178.89.65:55428). Sep 10 06:58:18.819239 sshd[5915]: Accepted publickey for core from 139.178.89.65 port 55428 ssh2: RSA SHA256:kou9ixDn310EeDDEH1fKRbCILRbyVlsoqvVtj8L1aJA Sep 10 06:58:18.821125 sshd-session[5915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 06:58:18.830254 systemd-logind[1560]: New session 24 of user core. Sep 10 06:58:18.837529 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 10 06:58:19.807240 sshd[5918]: Connection closed by 139.178.89.65 port 55428 Sep 10 06:58:19.808214 sshd-session[5915]: pam_unix(sshd:session): session closed for user core Sep 10 06:58:19.816120 systemd[1]: sshd@24-10.244.28.170:22-139.178.89.65:55428.service: Deactivated successfully. Sep 10 06:58:19.819473 systemd[1]: session-24.scope: Deactivated successfully. Sep 10 06:58:19.820438 systemd-logind[1560]: Session 24 logged out. Waiting for processes to exit. Sep 10 06:58:19.827108 systemd-logind[1560]: Removed session 24. Sep 10 06:58:24.969360 systemd[1]: Started sshd@25-10.244.28.170:22-139.178.89.65:59068.service - OpenSSH per-connection server daemon (139.178.89.65:59068). Sep 10 06:58:25.936049 sshd[5930]: Accepted publickey for core from 139.178.89.65 port 59068 ssh2: RSA SHA256:kou9ixDn310EeDDEH1fKRbCILRbyVlsoqvVtj8L1aJA Sep 10 06:58:25.939748 sshd-session[5930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 06:58:25.950704 systemd-logind[1560]: New session 25 of user core. Sep 10 06:58:25.957414 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 10 06:58:26.855722 sshd[5933]: Connection closed by 139.178.89.65 port 59068 Sep 10 06:58:26.856972 sshd-session[5930]: pam_unix(sshd:session): session closed for user core Sep 10 06:58:26.864764 systemd[1]: sshd@25-10.244.28.170:22-139.178.89.65:59068.service: Deactivated successfully. Sep 10 06:58:26.871433 systemd[1]: session-25.scope: Deactivated successfully. Sep 10 06:58:26.873255 systemd-logind[1560]: Session 25 logged out. Waiting for processes to exit. Sep 10 06:58:26.877747 systemd-logind[1560]: Removed session 25. Sep 10 06:58:30.246180 containerd[1582]: time="2025-09-10T06:58:30.235917738Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f414b702af938f4daf2a03795b57359997c240c12bcb3af8494ab8862a19d423\" id:\"da903c9a4719aa0f31a61be2995477cdf997b6f8d60f5149826b7a67add3b640\" pid:5957 exited_at:{seconds:1757487510 nanos:151749755}" Sep 10 06:58:30.577047 containerd[1582]: time="2025-09-10T06:58:30.576933057Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2d1244ea01d514cae452433027cc3680b021b77d3156d4dd8d0dd109af62f223\" id:\"616f62d12f734242eadcd7eb7045cc03e99a3ee601826063f3dd6f5b6447b684\" pid:5979 exited_at:{seconds:1757487510 nanos:575712640}" Sep 10 06:58:32.009109 systemd[1]: Started sshd@26-10.244.28.170:22-139.178.89.65:60098.service - OpenSSH per-connection server daemon (139.178.89.65:60098). Sep 10 06:58:33.042710 sshd[5994]: Accepted publickey for core from 139.178.89.65 port 60098 ssh2: RSA SHA256:kou9ixDn310EeDDEH1fKRbCILRbyVlsoqvVtj8L1aJA Sep 10 06:58:33.045575 sshd-session[5994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 06:58:33.055488 systemd-logind[1560]: New session 26 of user core. Sep 10 06:58:33.063428 systemd[1]: Started session-26.scope - Session 26 of User core. 
Sep 10 06:58:34.136897 sshd[5998]: Connection closed by 139.178.89.65 port 60098 Sep 10 06:58:34.138517 sshd-session[5994]: pam_unix(sshd:session): session closed for user core Sep 10 06:58:34.151255 systemd-logind[1560]: Session 26 logged out. Waiting for processes to exit. Sep 10 06:58:34.152294 systemd[1]: sshd@26-10.244.28.170:22-139.178.89.65:60098.service: Deactivated successfully. Sep 10 06:58:34.157325 systemd[1]: session-26.scope: Deactivated successfully. Sep 10 06:58:34.162093 systemd-logind[1560]: Removed session 26.