Oct 13 06:50:27.393258 kernel: Linux version 6.12.51-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Oct 13 03:31:29 -00 2025 Oct 13 06:50:27.393294 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4919840803704517a91afcb9d57d99e9935244ff049349c54216d9a31bc1da5d Oct 13 06:50:27.393311 kernel: BIOS-provided physical RAM map: Oct 13 06:50:27.393321 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 13 06:50:27.393331 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 13 06:50:27.393340 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 13 06:50:27.393352 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Oct 13 06:50:27.393366 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Oct 13 06:50:27.393379 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 13 06:50:27.393389 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Oct 13 06:50:27.393399 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 13 06:50:27.393408 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 13 06:50:27.393418 kernel: NX (Execute Disable) protection: active Oct 13 06:50:27.393428 kernel: APIC: Static calls initialized Oct 13 06:50:27.393443 kernel: SMBIOS 2.8 present. Oct 13 06:50:27.393454 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Oct 13 06:50:27.393466 kernel: DMI: Memory slots populated: 1/1 Oct 13 06:50:27.393477 kernel: Hypervisor detected: KVM Oct 13 06:50:27.393488 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 13 06:50:27.393499 kernel: kvm-clock: using sched offset of 4489562210 cycles Oct 13 06:50:27.393511 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 13 06:50:27.393523 kernel: tsc: Detected 2294.608 MHz processor Oct 13 06:50:27.393537 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 13 06:50:27.393550 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 13 06:50:27.393561 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Oct 13 06:50:27.393573 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Oct 13 06:50:27.393584 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 13 06:50:27.393595 kernel: Using GB pages for direct mapping Oct 13 06:50:27.393607 kernel: ACPI: Early table checksum verification disabled Oct 13 06:50:27.393621 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Oct 13 06:50:27.393633 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 06:50:27.393644 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 06:50:27.393656 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 06:50:27.393667 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Oct 13 06:50:27.393678 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 06:50:27.393690 
kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 06:50:27.393701 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 06:50:27.394183 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 13 06:50:27.394196 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Oct 13 06:50:27.394209 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Oct 13 06:50:27.394227 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Oct 13 06:50:27.394239 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Oct 13 06:50:27.394254 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Oct 13 06:50:27.394266 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Oct 13 06:50:27.394278 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Oct 13 06:50:27.394290 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Oct 13 06:50:27.394302 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Oct 13 06:50:27.394314 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Oct 13 06:50:27.394330 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff] Oct 13 06:50:27.394342 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff] Oct 13 06:50:27.394354 kernel: Zone ranges: Oct 13 06:50:27.394366 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 13 06:50:27.394378 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Oct 13 06:50:27.394390 kernel: Normal empty Oct 13 06:50:27.394402 kernel: Device empty Oct 13 06:50:27.394414 kernel: Movable zone start for each node Oct 13 06:50:27.394428 kernel: Early memory node ranges Oct 13 06:50:27.394441 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 13 06:50:27.394452 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Oct 13 06:50:27.394464 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Oct 13 06:50:27.394476 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 13 06:50:27.394489 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 13 06:50:27.394501 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Oct 13 06:50:27.394516 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 13 06:50:27.394528 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 13 06:50:27.394544 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 13 06:50:27.394556 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 13 06:50:27.394568 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 13 06:50:27.394580 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 13 06:50:27.394592 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 13 06:50:27.394607 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 13 06:50:27.394619 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 13 06:50:27.394631 kernel: TSC deadline timer available Oct 13 06:50:27.394643 kernel: CPU topo: Max. logical packages: 16 Oct 13 06:50:27.394655 kernel: CPU topo: Max. logical dies: 16 Oct 13 06:50:27.394667 kernel: CPU topo: Max. dies per package: 1 Oct 13 06:50:27.394679 kernel: CPU topo: Max. threads per core: 1 Oct 13 06:50:27.394691 kernel: CPU topo: Num. 
cores per package: 1 Oct 13 06:50:27.394705 kernel: CPU topo: Num. threads per package: 1 Oct 13 06:50:27.394717 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs Oct 13 06:50:27.394729 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 13 06:50:27.394741 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Oct 13 06:50:27.394753 kernel: Booting paravirtualized kernel on KVM Oct 13 06:50:27.394765 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 13 06:50:27.394777 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Oct 13 06:50:27.394792 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Oct 13 06:50:27.394804 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Oct 13 06:50:27.394816 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Oct 13 06:50:27.394828 kernel: kvm-guest: PV spinlocks enabled Oct 13 06:50:27.394840 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 13 06:50:27.394854 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4919840803704517a91afcb9d57d99e9935244ff049349c54216d9a31bc1da5d Oct 13 06:50:27.394867 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 13 06:50:27.394882 kernel: random: crng init done Oct 13 06:50:27.394894 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 13 06:50:27.394906 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 13 06:50:27.394918 kernel: Fallback order for Node 0: 0 Oct 13 06:50:27.394930 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154 Oct 13 06:50:27.394942 kernel: Policy zone: DMA32 Oct 13 06:50:27.394961 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 13 06:50:27.394976 kernel: software IO TLB: area num 16. Oct 13 06:50:27.394988 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Oct 13 06:50:27.395000 kernel: ftrace: allocating 40210 entries in 158 pages Oct 13 06:50:27.395012 kernel: ftrace: allocated 158 pages with 5 groups Oct 13 06:50:27.395024 kernel: Dynamic Preempt: voluntary Oct 13 06:50:27.395036 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 13 06:50:27.395050 kernel: rcu: RCU event tracing is enabled. Oct 13 06:50:27.395064 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Oct 13 06:50:27.395077 kernel: Trampoline variant of Tasks RCU enabled. Oct 13 06:50:27.395089 kernel: Rude variant of Tasks RCU enabled. Oct 13 06:50:27.395102 kernel: Tracing variant of Tasks RCU enabled. Oct 13 06:50:27.395114 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 13 06:50:27.395126 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Oct 13 06:50:27.395895 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Oct 13 06:50:27.395912 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. 
Oct 13 06:50:27.395928 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Oct 13 06:50:27.395941 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Oct 13 06:50:27.395960 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 13 06:50:27.395972 kernel: Console: colour VGA+ 80x25 Oct 13 06:50:27.395994 kernel: printk: legacy console [tty0] enabled Oct 13 06:50:27.396010 kernel: printk: legacy console [ttyS0] enabled Oct 13 06:50:27.396027 kernel: ACPI: Core revision 20240827 Oct 13 06:50:27.396041 kernel: APIC: Switch to symmetric I/O mode setup Oct 13 06:50:27.396054 kernel: x2apic enabled Oct 13 06:50:27.396070 kernel: APIC: Switched APIC routing to: physical x2apic Oct 13 06:50:27.396083 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Oct 13 06:50:27.396096 kernel: Calibrating delay loop (skipped) preset value.. 4589.21 BogoMIPS (lpj=2294608) Oct 13 06:50:27.396109 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 13 06:50:27.396125 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Oct 13 06:50:27.396146 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Oct 13 06:50:27.396159 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 13 06:50:27.396172 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Oct 13 06:50:27.396184 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Oct 13 06:50:27.396197 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Oct 13 06:50:27.396209 kernel: RETBleed: Mitigation: Enhanced IBRS Oct 13 06:50:27.396222 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 13 06:50:27.396234 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 13 06:50:27.396249 kernel: TAA: Mitigation: Clear CPU buffers Oct 13 06:50:27.396262 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Oct 13 06:50:27.396274 kernel: GDS: Unknown: Dependent on hypervisor status Oct 13 06:50:27.396287 kernel: active return thunk: its_return_thunk Oct 13 06:50:27.396300 kernel: ITS: Mitigation: Aligned branch/return thunks Oct 13 06:50:27.396312 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 13 06:50:27.396325 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 13 06:50:27.396337 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 13 06:50:27.396350 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Oct 13 06:50:27.396362 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Oct 13 06:50:27.396378 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Oct 13 06:50:27.396390 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Oct 13 06:50:27.396403 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 13 06:50:27.396415 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Oct 13 06:50:27.396427 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Oct 13 06:50:27.396440 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Oct 13 06:50:27.396452 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8 Oct 13 06:50:27.396465 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format. 
Oct 13 06:50:27.396477 kernel: Freeing SMP alternatives memory: 32K Oct 13 06:50:27.396490 kernel: pid_max: default: 32768 minimum: 301 Oct 13 06:50:27.396502 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Oct 13 06:50:27.396517 kernel: landlock: Up and running. Oct 13 06:50:27.396529 kernel: SELinux: Initializing. Oct 13 06:50:27.396542 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 13 06:50:27.396555 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 13 06:50:27.396567 kernel: smpboot: CPU0: Intel Xeon Processor (Cascadelake) (family: 0x6, model: 0x55, stepping: 0x6) Oct 13 06:50:27.396580 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Oct 13 06:50:27.396593 kernel: signal: max sigframe size: 3632 Oct 13 06:50:27.396606 kernel: rcu: Hierarchical SRCU implementation. Oct 13 06:50:27.396619 kernel: rcu: Max phase no-delay instances is 400. Oct 13 06:50:27.396634 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level Oct 13 06:50:27.396647 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Oct 13 06:50:27.396660 kernel: smp: Bringing up secondary CPUs ... Oct 13 06:50:27.396673 kernel: smpboot: x86: Booting SMP configuration: Oct 13 06:50:27.396685 kernel: .... node #0, CPUs: #1 Oct 13 06:50:27.396698 kernel: smp: Brought up 1 node, 2 CPUs Oct 13 06:50:27.396711 kernel: smpboot: Total of 2 processors activated (9178.43 BogoMIPS) Oct 13 06:50:27.396727 kernel: Memory: 1926416K/2096616K available (14336K kernel code, 2450K rwdata, 10012K rodata, 24532K init, 1684K bss, 164204K reserved, 0K cma-reserved) Oct 13 06:50:27.396740 kernel: devtmpfs: initialized Oct 13 06:50:27.396753 kernel: x86/mm: Memory block size: 128MB Oct 13 06:50:27.396766 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 13 06:50:27.396783 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Oct 13 06:50:27.396796 kernel: pinctrl core: initialized pinctrl subsystem Oct 13 06:50:27.396809 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 13 06:50:27.396824 kernel: audit: initializing netlink subsys (disabled) Oct 13 06:50:27.396837 kernel: audit: type=2000 audit(1760338224.037:1): state=initialized audit_enabled=0 res=1 Oct 13 06:50:27.396850 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 13 06:50:27.396863 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 13 06:50:27.396876 kernel: cpuidle: using governor menu Oct 13 06:50:27.396889 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 13 06:50:27.396901 kernel: dca service started, version 1.12.1 Oct 13 06:50:27.396914 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Oct 13 06:50:27.396930 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Oct 13 06:50:27.396943 kernel: PCI: Using configuration type 1 for base access Oct 13 06:50:27.396962 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 13 06:50:27.396975 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 13 06:50:27.396987 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 13 06:50:27.397000 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 13 06:50:27.397013 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 13 06:50:27.397029 kernel: ACPI: Added _OSI(Module Device) Oct 13 06:50:27.397041 kernel: ACPI: Added _OSI(Processor Device) Oct 13 06:50:27.397054 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 13 06:50:27.397067 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 13 06:50:27.397080 kernel: ACPI: Interpreter enabled Oct 13 06:50:27.397093 kernel: ACPI: PM: (supports S0 S5) Oct 13 06:50:27.397105 kernel: ACPI: Using IOAPIC for interrupt routing Oct 13 06:50:27.397121 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 13 06:50:27.397134 kernel: PCI: Using E820 reservations for host bridge windows Oct 13 06:50:27.397155 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 13 06:50:27.397168 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 13 06:50:27.398513 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 13 06:50:27.398648 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 13 06:50:27.398783 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 13 06:50:27.398812 kernel: PCI host bridge to bus 0000:00 Oct 13 06:50:27.398959 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 13 06:50:27.399066 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 13 06:50:27.399191 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 13 06:50:27.399294 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Oct 13 06:50:27.399400 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 13 06:50:27.399513 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Oct 13 06:50:27.399640 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 13 06:50:27.399783 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Oct 13 06:50:27.399918 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint Oct 13 06:50:27.400059 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref] Oct 13 06:50:27.400196 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff] Oct 13 06:50:27.400327 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref] Oct 13 06:50:27.400451 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 13 06:50:27.400585 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Oct 13 06:50:27.400712 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff] Oct 13 06:50:27.400850 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Oct 13 06:50:27.400998 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Oct 13 06:50:27.401128 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Oct 13 06:50:27.402769 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Oct 13 06:50:27.402925 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff] Oct 13 06:50:27.403319 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Oct 13 
06:50:27.403452 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Oct 13 06:50:27.403579 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Oct 13 06:50:27.403712 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Oct 13 06:50:27.403837 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff] Oct 13 06:50:27.403976 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Oct 13 06:50:27.404107 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Oct 13 06:50:27.404242 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Oct 13 06:50:27.404374 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Oct 13 06:50:27.404516 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff] Oct 13 06:50:27.404641 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Oct 13 06:50:27.404764 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Oct 13 06:50:27.404892 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Oct 13 06:50:27.405031 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Oct 13 06:50:27.405453 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff] Oct 13 06:50:27.405602 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Oct 13 06:50:27.409677 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Oct 13 06:50:27.409865 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Oct 13 06:50:27.410036 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Oct 13 06:50:27.410223 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff] Oct 13 06:50:27.410352 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Oct 13 06:50:27.410479 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Oct 13 06:50:27.410606 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Oct 13 06:50:27.410749 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Oct 13 06:50:27.410880 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff] Oct 13 06:50:27.411014 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Oct 13 06:50:27.412607 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Oct 13 06:50:27.412779 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Oct 13 06:50:27.412922 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Oct 13 06:50:27.413077 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff] Oct 13 06:50:27.413242 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Oct 13 06:50:27.413399 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Oct 13 06:50:27.413527 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Oct 13 06:50:27.413680 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Oct 13 06:50:27.413808 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df] Oct 13 06:50:27.413938 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff] Oct 13 06:50:27.414072 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref] Oct 13 06:50:27.414213 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref] Oct 13 06:50:27.414346 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Oct 13 06:50:27.414492 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f] Oct 13 06:50:27.414616 kernel: pci 0000:00:04.0: BAR 1 
[mem 0xfea5a000-0xfea5afff] Oct 13 06:50:27.414746 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref] Oct 13 06:50:27.414880 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Oct 13 06:50:27.415014 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 13 06:50:27.416000 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Oct 13 06:50:27.416152 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff] Oct 13 06:50:27.416288 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff] Oct 13 06:50:27.416425 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Oct 13 06:50:27.416551 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Oct 13 06:50:27.416688 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Oct 13 06:50:27.416817 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit] Oct 13 06:50:27.416944 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Oct 13 06:50:27.417085 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Oct 13 06:50:27.419987 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Oct 13 06:50:27.420208 kernel: pci_bus 0000:02: extended config space not accessible Oct 13 06:50:27.420364 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint Oct 13 06:50:27.420519 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f] Oct 13 06:50:27.420661 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Oct 13 06:50:27.420801 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Oct 13 06:50:27.420932 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit] Oct 13 06:50:27.421092 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Oct 13 06:50:27.421263 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Oct 13 06:50:27.421401 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref] Oct 13 06:50:27.421536 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Oct 13 06:50:27.421666 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Oct 13 06:50:27.422563 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Oct 13 06:50:27.422739 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Oct 13 06:50:27.422871 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Oct 13 06:50:27.423015 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Oct 13 06:50:27.423038 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 13 06:50:27.423050 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 13 06:50:27.423061 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 13 06:50:27.423072 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 13 06:50:27.423083 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 13 06:50:27.423095 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 13 06:50:27.423108 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 13 06:50:27.423119 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 13 06:50:27.423130 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 13 06:50:27.424907 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 13 06:50:27.424925 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 13 06:50:27.424936 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 13 06:50:27.424955 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 
13 06:50:27.424966 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 13 06:50:27.424984 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 13 06:50:27.424995 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 13 06:50:27.425006 kernel: iommu: Default domain type: Translated Oct 13 06:50:27.425017 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 13 06:50:27.425028 kernel: PCI: Using ACPI for IRQ routing Oct 13 06:50:27.425039 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 13 06:50:27.425050 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 13 06:50:27.425064 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Oct 13 06:50:27.425243 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 13 06:50:27.425375 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Oct 13 06:50:27.425501 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 13 06:50:27.425516 kernel: vgaarb: loaded Oct 13 06:50:27.425527 kernel: clocksource: Switched to clocksource kvm-clock Oct 13 06:50:27.425543 kernel: VFS: Disk quotas dquot_6.6.0 Oct 13 06:50:27.425555 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 13 06:50:27.425566 kernel: pnp: PnP ACPI init Oct 13 06:50:27.425738 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Oct 13 06:50:27.425755 kernel: pnp: PnP ACPI: found 5 devices Oct 13 06:50:27.425766 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 13 06:50:27.425777 kernel: NET: Registered PF_INET protocol family Oct 13 06:50:27.425792 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 13 06:50:27.425803 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 13 06:50:27.425814 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 13 06:50:27.425825 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 13 06:50:27.425836 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Oct 13 06:50:27.425847 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 13 06:50:27.425858 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 13 06:50:27.425871 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 13 06:50:27.425882 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 13 06:50:27.425893 kernel: NET: Registered PF_XDP protocol family Oct 13 06:50:27.426033 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Oct 13 06:50:27.426179 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Oct 13 06:50:27.426310 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Oct 13 06:50:27.426444 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Oct 13 06:50:27.426573 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Oct 13 06:50:27.426703 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Oct 13 06:50:27.426871 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Oct 13 06:50:27.427038 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Oct 13 06:50:27.427315 kernel: pci 0000:00:02.0: bridge window [io 
0x1000-0x1fff]: assigned Oct 13 06:50:27.427484 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned Oct 13 06:50:27.427620 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned Oct 13 06:50:27.427748 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned Oct 13 06:50:27.427880 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned Oct 13 06:50:27.428019 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned Oct 13 06:50:27.428159 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned Oct 13 06:50:27.428327 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned Oct 13 06:50:27.428454 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Oct 13 06:50:27.428591 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Oct 13 06:50:27.428721 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Oct 13 06:50:27.428848 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Oct 13 06:50:27.428981 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Oct 13 06:50:27.429135 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Oct 13 06:50:27.431429 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Oct 13 06:50:27.431571 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Oct 13 06:50:27.431702 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Oct 13 06:50:27.431831 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Oct 13 06:50:27.431978 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Oct 13 06:50:27.432105 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Oct 13 06:50:27.432265 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Oct 13 06:50:27.432390 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Oct 13 06:50:27.432519 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Oct 13 06:50:27.432644 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Oct 13 06:50:27.432770 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Oct 13 06:50:27.432896 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Oct 13 06:50:27.433040 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Oct 13 06:50:27.433215 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Oct 13 06:50:27.433346 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Oct 13 06:50:27.433472 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Oct 13 06:50:27.433599 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Oct 13 06:50:27.433731 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Oct 13 06:50:27.433861 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Oct 13 06:50:27.434000 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Oct 13 06:50:27.434136 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Oct 13 06:50:27.434289 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Oct 13 06:50:27.434464 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Oct 13 06:50:27.434589 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Oct 13 06:50:27.434714 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Oct 13 06:50:27.434885 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Oct 13 06:50:27.435028 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Oct 13 06:50:27.436211 kernel: pci 
0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Oct 13 06:50:27.436378 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 13 06:50:27.436498 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 13 06:50:27.436615 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 13 06:50:27.436730 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Oct 13 06:50:27.436850 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Oct 13 06:50:27.436976 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Oct 13 06:50:27.437111 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Oct 13 06:50:27.437247 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Oct 13 06:50:27.437365 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Oct 13 06:50:27.437499 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Oct 13 06:50:27.437630 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Oct 13 06:50:27.437759 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Oct 13 06:50:27.437876 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Oct 13 06:50:27.438015 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Oct 13 06:50:27.438134 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Oct 13 06:50:27.439404 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Oct 13 06:50:27.439538 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Oct 13 06:50:27.439658 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Oct 13 06:50:27.439786 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Oct 13 06:50:27.439912 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Oct 13 06:50:27.440042 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Oct 13 06:50:27.440714 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Oct 13 06:50:27.440862 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Oct 13 06:50:27.440997 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Oct 13 06:50:27.441116 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Oct 13 06:50:27.443599 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Oct 13 06:50:27.443752 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Oct 13 06:50:27.443875 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Oct 13 06:50:27.444013 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Oct 13 06:50:27.444134 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Oct 13 06:50:27.444280 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Oct 13 06:50:27.444296 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 13 06:50:27.444315 kernel: PCI: CLS 0 bytes, default 64 Oct 13 06:50:27.444326 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Oct 13 06:50:27.444339 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Oct 13 06:50:27.444351 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 13 06:50:27.444363 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x21134f58f0d, max_idle_ns: 440795217993 ns Oct 13 06:50:27.444374 kernel: Initialise system trusted keyrings Oct 13 06:50:27.444386 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 13 06:50:27.444402 
kernel: Key type asymmetric registered Oct 13 06:50:27.444414 kernel: Asymmetric key parser 'x509' registered Oct 13 06:50:27.444426 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 13 06:50:27.444438 kernel: io scheduler mq-deadline registered Oct 13 06:50:27.444449 kernel: io scheduler kyber registered Oct 13 06:50:27.444460 kernel: io scheduler bfq registered Oct 13 06:50:27.444605 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Oct 13 06:50:27.444741 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Oct 13 06:50:27.444869 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 13 06:50:27.445011 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Oct 13 06:50:27.445206 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Oct 13 06:50:27.445337 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 13 06:50:27.445471 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Oct 13 06:50:27.445597 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Oct 13 06:50:27.445721 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 13 06:50:27.445849 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Oct 13 06:50:27.445985 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Oct 13 06:50:27.446115 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 13 06:50:27.446265 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Oct 13 06:50:27.446395 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Oct 13 06:50:27.446521 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 13 06:50:27.446648 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Oct 13 06:50:27.446776 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Oct 13 06:50:27.446900 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 13 06:50:27.447039 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Oct 13 06:50:27.448229 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Oct 13 06:50:27.448387 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 13 06:50:27.448526 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Oct 13 06:50:27.448654 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Oct 13 06:50:27.448782 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 13 06:50:27.448797 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 13 06:50:27.448811 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 13 06:50:27.448822 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 13 06:50:27.448837 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 13 06:50:27.448849 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 13 06:50:27.448860 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 13 
06:50:27.448872 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 13 06:50:27.448884 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 13 06:50:27.449026 kernel: rtc_cmos 00:03: RTC can wake from S4 Oct 13 06:50:27.449042 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 13 06:50:27.449174 kernel: rtc_cmos 00:03: registered as rtc0 Oct 13 06:50:27.449294 kernel: rtc_cmos 00:03: setting system clock to 2025-10-13T06:50:25 UTC (1760338225) Oct 13 06:50:27.449412 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Oct 13 06:50:27.449427 kernel: intel_pstate: CPU model not supported Oct 13 06:50:27.449439 kernel: NET: Registered PF_INET6 protocol family Oct 13 06:50:27.449450 kernel: Segment Routing with IPv6 Oct 13 06:50:27.449462 kernel: In-situ OAM (IOAM) with IPv6 Oct 13 06:50:27.449476 kernel: NET: Registered PF_PACKET protocol family Oct 13 06:50:27.449488 kernel: Key type dns_resolver registered Oct 13 06:50:27.449499 kernel: IPI shorthand broadcast: enabled Oct 13 06:50:27.449510 kernel: sched_clock: Marking stable (1409002350, 121516229)->(1762597678, -232079099) Oct 13 06:50:27.449522 kernel: registered taskstats version 1 Oct 13 06:50:27.449534 kernel: Loading compiled-in X.509 certificates Oct 13 06:50:27.449545 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.51-flatcar: 9f1258ccc510afd4f2a37f4774c4b2e958d823b7' Oct 13 06:50:27.449559 kernel: Demotion targets for Node 0: null Oct 13 06:50:27.449570 kernel: Key type .fscrypt registered Oct 13 06:50:27.449581 kernel: Key type fscrypt-provisioning registered Oct 13 06:50:27.449593 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 13 06:50:27.449605 kernel: ima: Allocated hash algorithm: sha1 Oct 13 06:50:27.449616 kernel: ima: No architecture policies found Oct 13 06:50:27.449628 kernel: clk: Disabling unused clocks Oct 13 06:50:27.449642 kernel: Freeing unused kernel image (initmem) memory: 24532K Oct 13 06:50:27.449654 kernel: Write protecting the kernel read-only data: 24576k Oct 13 06:50:27.449665 kernel: Freeing unused kernel image (rodata/data gap) memory: 228K Oct 13 06:50:27.449677 kernel: Run /init as init process Oct 13 06:50:27.449688 kernel: with arguments: Oct 13 06:50:27.449700 kernel: /init Oct 13 06:50:27.449713 kernel: with environment: Oct 13 06:50:27.449727 kernel: HOME=/ Oct 13 06:50:27.449739 kernel: TERM=linux Oct 13 06:50:27.449750 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 13 06:50:27.449761 kernel: ACPI: bus type USB registered Oct 13 06:50:27.449773 kernel: usbcore: registered new interface driver usbfs Oct 13 06:50:27.449784 kernel: usbcore: registered new interface driver hub Oct 13 06:50:27.449795 kernel: usbcore: registered new device driver usb Oct 13 06:50:27.449932 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Oct 13 06:50:27.450069 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Oct 13 06:50:27.450791 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Oct 13 06:50:27.450937 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Oct 13 06:50:27.451077 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Oct 13 06:50:27.451207 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Oct 13 06:50:27.451455 kernel: hub 1-0:1.0: USB hub found Oct 13 06:50:27.451595 kernel: hub 1-0:1.0: 4 ports detected Oct 13 06:50:27.451753 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling 
LPM. Oct 13 06:50:27.451898 kernel: hub 2-0:1.0: USB hub found Oct 13 06:50:27.452061 kernel: hub 2-0:1.0: 4 ports detected Oct 13 06:50:27.452076 kernel: SCSI subsystem initialized Oct 13 06:50:27.452092 kernel: libata version 3.00 loaded. Oct 13 06:50:27.452248 kernel: ahci 0000:00:1f.2: version 3.0 Oct 13 06:50:27.452265 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 13 06:50:27.452389 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Oct 13 06:50:27.452513 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Oct 13 06:50:27.452640 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 13 06:50:27.452810 kernel: scsi host0: ahci Oct 13 06:50:27.452962 kernel: scsi host1: ahci Oct 13 06:50:27.453096 kernel: scsi host2: ahci Oct 13 06:50:27.453261 kernel: scsi host3: ahci Oct 13 06:50:27.453400 kernel: scsi host4: ahci Oct 13 06:50:27.453546 kernel: scsi host5: ahci Oct 13 06:50:27.453562 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 35 lpm-pol 1 Oct 13 06:50:27.453575 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 35 lpm-pol 1 Oct 13 06:50:27.453587 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 35 lpm-pol 1 Oct 13 06:50:27.453599 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 35 lpm-pol 1 Oct 13 06:50:27.453611 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 35 lpm-pol 1 Oct 13 06:50:27.453626 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 35 lpm-pol 1 Oct 13 06:50:27.453784 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Oct 13 06:50:27.453801 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 13 06:50:27.453813 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 13 06:50:27.453824 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 13 06:50:27.453835 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 13 06:50:27.453847 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 13 06:50:27.453861 kernel: ata3: SATA link down (SStatus 0 SControl 300) Oct 13 06:50:27.454005 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Oct 13 06:50:27.454130 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Oct 13 06:50:27.454157 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 13 06:50:27.454169 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 13 06:50:27.454181 kernel: GPT:25804799 != 125829119 Oct 13 06:50:27.454196 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 13 06:50:27.454207 kernel: GPT:25804799 != 125829119 Oct 13 06:50:27.454218 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 13 06:50:27.454229 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 13 06:50:27.454241 kernel: usbcore: registered new interface driver usbhid Oct 13 06:50:27.454252 kernel: usbhid: USB HID core driver Oct 13 06:50:27.454265 kernel: Invalid ELF header magic: != \u007fELF Oct 13 06:50:27.454279 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Oct 13 06:50:27.454466 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Oct 13 06:50:27.454483 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Oct 13 06:50:27.454495 kernel: device-mapper: uevent: version 1.0.3 Oct 13 06:50:27.454508 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 13 06:50:27.454520 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 13 06:50:27.454535 kernel: Invalid ELF header magic: != \u007fELF Oct 13 06:50:27.454546 kernel: Invalid ELF header magic: != \u007fELF Oct 13 06:50:27.454557 kernel: raid6: avx512x4 gen() 17718 MB/s Oct 13 06:50:27.454568 kernel: raid6: avx512x2 gen() 17504 MB/s Oct 13 06:50:27.454580 kernel: raid6: avx512x1 gen() 17554 MB/s Oct 13 06:50:27.454591 kernel: raid6: avx2x4 gen() 17452 MB/s Oct 13 06:50:27.454602 kernel: raid6: avx2x2 gen() 17487 MB/s Oct 13 06:50:27.454615 kernel: raid6: avx2x1 gen() 13397 MB/s Oct 13 06:50:27.454627 kernel: raid6: using algorithm avx512x4 gen() 17718 MB/s Oct 13 06:50:27.454638 kernel: raid6: .... xor() 7537 MB/s, rmw enabled Oct 13 06:50:27.454649 kernel: raid6: using avx512x2 recovery algorithm Oct 13 06:50:27.454661 kernel: Invalid ELF header magic: != \u007fELF Oct 13 06:50:27.454672 kernel: Invalid ELF header magic: != \u007fELF Oct 13 06:50:27.454683 kernel: Invalid ELF header magic: != \u007fELF Oct 13 06:50:27.454695 kernel: xor: automatically using best checksumming function avx Oct 13 06:50:27.454709 kernel: Invalid ELF header magic: != \u007fELF Oct 13 06:50:27.454719 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 13 06:50:27.454731 kernel: BTRFS: device fsid e87b15e9-127c-40e2-bae7-d0ea05b4f2e3 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (208) Oct 13 06:50:27.454743 kernel: BTRFS info (device dm-0): first mount of filesystem e87b15e9-127c-40e2-bae7-d0ea05b4f2e3 Oct 13 06:50:27.454754 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 13 06:50:27.454766 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 13 06:50:27.454777 kernel: BTRFS info (device dm-0): enabling free space tree Oct 13 06:50:27.454791 kernel: Invalid ELF header magic: != \u007fELF Oct 13 06:50:27.454802 kernel: loop: module loaded Oct 13 06:50:27.454813 kernel: loop0: detected capacity change from 0 to 100048 Oct 13 06:50:27.454824 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 13 06:50:27.454839 systemd[1]: Successfully made /usr/ read-only. Oct 13 06:50:27.454854 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 13 06:50:27.454870 systemd[1]: Detected virtualization kvm. Oct 13 06:50:27.454881 systemd[1]: Detected architecture x86-64. Oct 13 06:50:27.454893 systemd[1]: Running in initrd. Oct 13 06:50:27.454905 systemd[1]: No hostname configured, using default hostname. Oct 13 06:50:27.454917 systemd[1]: Hostname set to . Oct 13 06:50:27.454928 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 13 06:50:27.454941 systemd[1]: Queued start job for default target initrd.target. Oct 13 06:50:27.454974 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 13 06:50:27.454985 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Oct 13 06:50:27.454997 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 06:50:27.455009 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 13 06:50:27.455022 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 06:50:27.455035 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 13 06:50:27.455049 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 13 06:50:27.455062 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 06:50:27.455074 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 13 06:50:27.455086 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 13 06:50:27.455098 systemd[1]: Reached target paths.target - Path Units. Oct 13 06:50:27.455110 systemd[1]: Reached target slices.target - Slice Units. Oct 13 06:50:27.455125 systemd[1]: Reached target swap.target - Swaps. Oct 13 06:50:27.455137 systemd[1]: Reached target timers.target - Timer Units. Oct 13 06:50:27.455169 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 13 06:50:27.455181 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 06:50:27.455193 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 13 06:50:27.455205 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 13 06:50:27.455217 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 13 06:50:27.455232 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 13 06:50:27.455243 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 06:50:27.455255 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 06:50:27.455267 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 13 06:50:27.455279 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 13 06:50:27.455291 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 13 06:50:27.455303 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 13 06:50:27.455318 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 13 06:50:27.455330 systemd[1]: Starting systemd-fsck-usr.service... Oct 13 06:50:27.455343 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 06:50:27.455355 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 13 06:50:27.455367 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 06:50:27.455382 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 13 06:50:27.455394 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 06:50:27.455406 systemd[1]: Finished systemd-fsck-usr.service. Oct 13 06:50:27.455418 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 13 06:50:27.455466 systemd-journald[343]: Collecting audit messages is disabled. 
Oct 13 06:50:27.455496 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 13 06:50:27.455508 kernel: Bridge firewalling registered Oct 13 06:50:27.455520 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 13 06:50:27.455535 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 06:50:27.455547 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 06:50:27.455559 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 06:50:27.455572 systemd-journald[343]: Journal started Oct 13 06:50:27.455597 systemd-journald[343]: Runtime Journal (/run/log/journal/30136c07b1534bcb831a27871b8f718b) is 4.7M, max 38.2M, 33.4M free. Oct 13 06:50:27.406214 systemd-modules-load[346]: Inserted module 'br_netfilter' Oct 13 06:50:27.487975 systemd[1]: Started systemd-journald.service - Journal Service. Oct 13 06:50:27.491068 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 06:50:27.492411 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 06:50:27.500810 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 13 06:50:27.504357 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 06:50:27.505516 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 06:50:27.509271 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 06:50:27.527611 systemd-tmpfiles[371]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 13 06:50:27.533049 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 06:50:27.545304 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 06:50:27.549301 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 13 06:50:27.573295 dracut-cmdline[386]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=4919840803704517a91afcb9d57d99e9935244ff049349c54216d9a31bc1da5d Oct 13 06:50:27.577686 systemd-resolved[370]: Positive Trust Anchors: Oct 13 06:50:27.578256 systemd-resolved[370]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 06:50:27.578262 systemd-resolved[370]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 13 06:50:27.578304 systemd-resolved[370]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 06:50:27.613607 systemd-resolved[370]: Defaulting to hostname 'linux'. Oct 13 06:50:27.615481 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 06:50:27.616472 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 06:50:27.690180 kernel: Loading iSCSI transport class v2.0-870. Oct 13 06:50:27.705210 kernel: iscsi: registered transport (tcp) Oct 13 06:50:27.732359 kernel: iscsi: registered transport (qla4xxx) Oct 13 06:50:27.732466 kernel: QLogic iSCSI HBA Driver Oct 13 06:50:27.764233 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 13 06:50:27.784576 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 06:50:27.787716 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 13 06:50:27.850299 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 13 06:50:27.855623 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 13 06:50:27.857675 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 13 06:50:27.898210 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 13 06:50:27.901323 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 06:50:27.930595 systemd-udevd[623]: Using default interface naming scheme 'v257'. Oct 13 06:50:27.942708 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 06:50:27.945448 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 13 06:50:27.975541 dracut-pre-trigger[693]: rd.md=0: removing MD RAID activation Oct 13 06:50:27.977183 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 06:50:27.979290 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 13 06:50:28.011572 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 13 06:50:28.014298 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 06:50:28.032257 systemd-networkd[733]: lo: Link UP Oct 13 06:50:28.032902 systemd-networkd[733]: lo: Gained carrier Oct 13 06:50:28.033519 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 06:50:28.034770 systemd[1]: Reached target network.target - Network. Oct 13 06:50:28.110249 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 06:50:28.114239 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 13 06:50:28.205389 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Oct 13 06:50:28.233165 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 13 06:50:28.250959 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 13 06:50:28.263992 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 13 06:50:28.265996 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 13 06:50:28.280366 kernel: cryptd: max_cpu_qlen set to 1000 Oct 13 06:50:28.292811 disk-uuid[790]: Primary Header is updated. Oct 13 06:50:28.292811 disk-uuid[790]: Secondary Entries is updated. Oct 13 06:50:28.292811 disk-uuid[790]: Secondary Header is updated. Oct 13 06:50:28.307527 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Oct 13 06:50:28.320189 kernel: AES CTR mode by8 optimization enabled Oct 13 06:50:28.347605 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 06:50:28.347741 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 06:50:28.349492 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 06:50:28.354120 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 06:50:28.363288 systemd-networkd[733]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 06:50:28.363297 systemd-networkd[733]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 06:50:28.364606 systemd-networkd[733]: eth0: Link UP Oct 13 06:50:28.365292 systemd-networkd[733]: eth0: Gained carrier Oct 13 06:50:28.365304 systemd-networkd[733]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 06:50:28.377239 systemd-networkd[733]: eth0: DHCPv4 address 10.244.93.206/30, gateway 10.244.93.205 acquired from 10.244.93.205 Oct 13 06:50:28.431927 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 13 06:50:28.440361 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 06:50:28.445124 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 06:50:28.446616 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 06:50:28.447945 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 06:50:28.450749 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 13 06:50:28.504205 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 13 06:50:29.348596 disk-uuid[792]: Warning: The kernel is still using the old partition table. Oct 13 06:50:29.348596 disk-uuid[792]: The new table will be used at the next reboot or after you Oct 13 06:50:29.348596 disk-uuid[792]: run partprobe(8) or kpartx(8) Oct 13 06:50:29.348596 disk-uuid[792]: The operation has completed successfully. Oct 13 06:50:29.360953 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 13 06:50:29.361096 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 13 06:50:29.364534 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Oct 13 06:50:29.408366 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (878) Oct 13 06:50:29.408478 kernel: BTRFS info (device vda6): first mount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 06:50:29.410465 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 13 06:50:29.414722 kernel: BTRFS info (device vda6): turning on async discard Oct 13 06:50:29.414803 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 06:50:29.425296 kernel: BTRFS info (device vda6): last unmount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 06:50:29.427021 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 13 06:50:29.428566 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 13 06:50:29.634193 ignition[897]: Ignition 2.22.0 Oct 13 06:50:29.634206 ignition[897]: Stage: fetch-offline Oct 13 06:50:29.634255 ignition[897]: no configs at "/usr/lib/ignition/base.d" Oct 13 06:50:29.638006 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 06:50:29.634266 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 13 06:50:29.634960 ignition[897]: parsed url from cmdline: "" Oct 13 06:50:29.640623 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Oct 13 06:50:29.634964 ignition[897]: no config URL provided Oct 13 06:50:29.634969 ignition[897]: reading system config file "/usr/lib/ignition/user.ign" Oct 13 06:50:29.634978 ignition[897]: no config at "/usr/lib/ignition/user.ign" Oct 13 06:50:29.634983 ignition[897]: failed to fetch config: resource requires networking Oct 13 06:50:29.635358 ignition[897]: Ignition finished successfully Oct 13 06:50:29.675340 ignition[903]: Ignition 2.22.0 Oct 13 06:50:29.675354 ignition[903]: Stage: fetch Oct 13 06:50:29.675495 ignition[903]: no configs at "/usr/lib/ignition/base.d" Oct 13 06:50:29.675503 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 13 06:50:29.675601 ignition[903]: parsed url from cmdline: "" Oct 13 06:50:29.675605 ignition[903]: no config URL provided Oct 13 06:50:29.675611 ignition[903]: reading system config file "/usr/lib/ignition/user.ign" Oct 13 06:50:29.675619 ignition[903]: no config at "/usr/lib/ignition/user.ign" Oct 13 06:50:29.675782 ignition[903]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Oct 13 06:50:29.676183 ignition[903]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Oct 13 06:50:29.676228 ignition[903]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Oct 13 06:50:29.691583 ignition[903]: GET result: OK Oct 13 06:50:29.691851 ignition[903]: parsing config with SHA512: c8811f0e10aaee2a0fc355e7d214a885331108f3ce0216dfb5ca3cfff69ad022896dbeb78ca6bd827bc1d6f7b5a7b022c9bd6bbaacf7f15b598f6a83813fa5bb Oct 13 06:50:29.700542 unknown[903]: fetched base config from "system" Oct 13 06:50:29.701082 ignition[903]: fetch: fetch complete Oct 13 06:50:29.700558 unknown[903]: fetched base config from "system" Oct 13 06:50:29.701089 ignition[903]: fetch: fetch passed Oct 13 06:50:29.700565 unknown[903]: fetched user config from "openstack" Oct 13 06:50:29.701185 ignition[903]: Ignition finished successfully Oct 13 06:50:29.702348 systemd-networkd[733]: eth0: Gained IPv6LL Oct 13 06:50:29.711777 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 13 06:50:29.716100 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Oct 13 06:50:29.769033 ignition[910]: Ignition 2.22.0 Oct 13 06:50:29.769060 ignition[910]: Stage: kargs Oct 13 06:50:29.769275 ignition[910]: no configs at "/usr/lib/ignition/base.d" Oct 13 06:50:29.769284 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 13 06:50:29.772572 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 13 06:50:29.770764 ignition[910]: kargs: kargs passed Oct 13 06:50:29.775289 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 13 06:50:29.770814 ignition[910]: Ignition finished successfully Oct 13 06:50:29.810429 ignition[916]: Ignition 2.22.0 Oct 13 06:50:29.810454 ignition[916]: Stage: disks Oct 13 06:50:29.810652 ignition[916]: no configs at "/usr/lib/ignition/base.d" Oct 13 06:50:29.810662 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 13 06:50:29.813574 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 13 06:50:29.811745 ignition[916]: disks: disks passed Oct 13 06:50:29.815300 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 13 06:50:29.811792 ignition[916]: Ignition finished successfully Oct 13 06:50:29.816079 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 13 06:50:29.816942 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 06:50:29.817616 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 06:50:29.818495 systemd[1]: Reached target basic.target - Basic System. Oct 13 06:50:29.820237 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 13 06:50:29.865063 systemd-fsck[925]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Oct 13 06:50:29.869366 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 13 06:50:29.872606 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 13 06:50:30.033223 kernel: EXT4-fs (vda9): mounted filesystem c7d6ef00-6dd1-40b4-91f2-c4c5965e3cac r/w with ordered data mode. Quota mode: none. Oct 13 06:50:30.034318 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 13 06:50:30.035596 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 13 06:50:30.038001 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 13 06:50:30.040053 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 13 06:50:30.043042 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 13 06:50:30.048299 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Oct 13 06:50:30.049360 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 13 06:50:30.050135 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 06:50:30.055162 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (934) Oct 13 06:50:30.055207 kernel: BTRFS info (device vda6): first mount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 06:50:30.057203 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 13 06:50:30.062748 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Oct 13 06:50:30.067261 kernel: BTRFS info (device vda6): turning on async discard Oct 13 06:50:30.067292 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 06:50:30.066059 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 13 06:50:30.070340 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 13 06:50:30.145173 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Oct 13 06:50:30.152716 initrd-setup-root[962]: cut: /sysroot/etc/passwd: No such file or directory Oct 13 06:50:30.162255 initrd-setup-root[969]: cut: /sysroot/etc/group: No such file or directory Oct 13 06:50:30.170986 initrd-setup-root[976]: cut: /sysroot/etc/shadow: No such file or directory Oct 13 06:50:30.179325 initrd-setup-root[983]: cut: /sysroot/etc/gshadow: No such file or directory Oct 13 06:50:30.310114 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 13 06:50:30.314366 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 13 06:50:30.317380 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 13 06:50:30.361164 kernel: BTRFS info (device vda6): last unmount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 06:50:30.367748 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 13 06:50:30.397169 ignition[1053]: INFO : Ignition 2.22.0 Oct 13 06:50:30.397169 ignition[1053]: INFO : Stage: mount Oct 13 06:50:30.397169 ignition[1053]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 06:50:30.397169 ignition[1053]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 13 06:50:30.400100 ignition[1053]: INFO : mount: mount passed Oct 13 06:50:30.400100 ignition[1053]: INFO : Ignition finished successfully Oct 13 06:50:30.398713 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 13 06:50:30.400072 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 13 06:50:31.173320 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Oct 13 06:50:31.213618 systemd-networkd[733]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:1773:24:19ff:fef4:5dce/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:1773:24:19ff:fef4:5dce/64 assigned by NDisc. Oct 13 06:50:31.213641 systemd-networkd[733]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Oct 13 06:50:33.190210 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Oct 13 06:50:37.201205 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Oct 13 06:50:37.210522 coreos-metadata[936]: Oct 13 06:50:37.210 WARN failed to locate config-drive, using the metadata service API instead Oct 13 06:50:37.235190 coreos-metadata[936]: Oct 13 06:50:37.235 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Oct 13 06:50:37.248787 coreos-metadata[936]: Oct 13 06:50:37.248 INFO Fetch successful Oct 13 06:50:37.251195 coreos-metadata[936]: Oct 13 06:50:37.250 INFO wrote hostname srv-ntuey.gb1.brightbox.com to /sysroot/etc/hostname Oct 13 06:50:37.252343 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Oct 13 06:50:37.252623 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Oct 13 06:50:37.259865 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 13 06:50:37.289178 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Oct 13 06:50:37.317233 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1068) Oct 13 06:50:37.321171 kernel: BTRFS info (device vda6): first mount of filesystem 56bbaf92-79f4-4948-a1fd-5992c383eba8 Oct 13 06:50:37.321257 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 13 06:50:37.327652 kernel: BTRFS info (device vda6): turning on async discard Oct 13 06:50:37.327771 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 06:50:37.332545 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 13 06:50:37.389349 ignition[1086]: INFO : Ignition 2.22.0 Oct 13 06:50:37.390315 ignition[1086]: INFO : Stage: files Oct 13 06:50:37.390315 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 06:50:37.390315 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 13 06:50:37.392963 ignition[1086]: DEBUG : files: compiled without relabeling support, skipping Oct 13 06:50:37.396264 ignition[1086]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 13 06:50:37.396264 ignition[1086]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 13 06:50:37.404897 ignition[1086]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 13 06:50:37.405727 ignition[1086]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 13 06:50:37.406648 ignition[1086]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 13 06:50:37.406456 unknown[1086]: wrote ssh authorized keys file for user: core Oct 13 06:50:37.408888 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 13 06:50:37.409957 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Oct 13 06:50:37.680856 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 13 06:50:37.973135 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 13 06:50:37.973135 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 13 06:50:37.984349 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 13 06:50:37.984349 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 13 06:50:37.984349 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 13 06:50:37.984349 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 06:50:37.984349 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 06:50:37.984349 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 06:50:37.984349 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 06:50:37.984349 ignition[1086]: INFO : 
files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 06:50:37.984349 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 06:50:37.984349 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 13 06:50:37.993983 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 13 06:50:37.993983 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 13 06:50:37.993983 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Oct 13 06:50:38.363677 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 13 06:50:39.485553 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 13 06:50:39.489359 ignition[1086]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 13 06:50:39.489359 ignition[1086]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 06:50:39.493552 ignition[1086]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 06:50:39.493552 ignition[1086]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 13 06:50:39.493552 ignition[1086]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Oct 13 06:50:39.493552 ignition[1086]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Oct 13 06:50:39.493552 ignition[1086]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 13 06:50:39.496831 ignition[1086]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 13 06:50:39.496831 ignition[1086]: INFO : files: files passed Oct 13 06:50:39.496831 ignition[1086]: INFO : Ignition finished successfully Oct 13 06:50:39.497674 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 13 06:50:39.500675 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 13 06:50:39.501994 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 13 06:50:39.527553 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 13 06:50:39.527712 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Oct 13 06:50:39.541670 initrd-setup-root-after-ignition[1116]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 13 06:50:39.541670 initrd-setup-root-after-ignition[1116]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 13 06:50:39.544492 initrd-setup-root-after-ignition[1120]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 13 06:50:39.545909 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 13 06:50:39.546646 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 13 06:50:39.549036 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 13 06:50:39.605586 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 13 06:50:39.605769 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 13 06:50:39.607210 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 13 06:50:39.608693 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 13 06:50:39.610895 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 13 06:50:39.613188 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 13 06:50:39.648078 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 13 06:50:39.650700 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 13 06:50:39.680802 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 13 06:50:39.681055 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 13 06:50:39.681946 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 06:50:39.683348 systemd[1]: Stopped target timers.target - Timer Units. Oct 13 06:50:39.684591 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 13 06:50:39.684856 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 13 06:50:39.686022 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 13 06:50:39.686811 systemd[1]: Stopped target basic.target - Basic System. Oct 13 06:50:39.687538 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 13 06:50:39.688229 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 06:50:39.688879 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 13 06:50:39.689641 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Oct 13 06:50:39.690367 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 13 06:50:39.691115 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 06:50:39.691890 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 13 06:50:39.692588 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 13 06:50:39.693391 systemd[1]: Stopped target swap.target - Swaps. Oct 13 06:50:39.694114 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 13 06:50:39.694281 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 13 06:50:39.695108 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 13 06:50:39.695938 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Oct 13 06:50:39.696732 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 13 06:50:39.696924 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 06:50:39.697575 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 13 06:50:39.697715 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 13 06:50:39.698650 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 13 06:50:39.698806 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 13 06:50:39.702422 systemd[1]: ignition-files.service: Deactivated successfully. Oct 13 06:50:39.702524 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 13 06:50:39.705241 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 13 06:50:39.707380 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 13 06:50:39.707792 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 13 06:50:39.707946 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 06:50:39.709538 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 13 06:50:39.709677 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 06:50:39.711300 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 13 06:50:39.711411 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 13 06:50:39.717989 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 13 06:50:39.720705 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 13 06:50:39.732183 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 13 06:50:39.748027 ignition[1140]: INFO : Ignition 2.22.0 Oct 13 06:50:39.748837 ignition[1140]: INFO : Stage: umount Oct 13 06:50:39.750400 ignition[1140]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 06:50:39.750400 ignition[1140]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 13 06:50:39.751860 ignition[1140]: INFO : umount: umount passed Oct 13 06:50:39.752331 ignition[1140]: INFO : Ignition finished successfully Oct 13 06:50:39.755228 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 13 06:50:39.755402 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 13 06:50:39.756784 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 13 06:50:39.756881 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 13 06:50:39.757721 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 13 06:50:39.757772 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 13 06:50:39.758443 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 13 06:50:39.758488 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Oct 13 06:50:39.760002 systemd[1]: Stopped target network.target - Network. Oct 13 06:50:39.760643 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 13 06:50:39.760691 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 06:50:39.761406 systemd[1]: Stopped target paths.target - Path Units. Oct 13 06:50:39.762687 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 13 06:50:39.764067 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Oct 13 06:50:39.765037 systemd[1]: Stopped target slices.target - Slice Units. Oct 13 06:50:39.765436 systemd[1]: Stopped target sockets.target - Socket Units. Oct 13 06:50:39.766182 systemd[1]: iscsid.socket: Deactivated successfully. Oct 13 06:50:39.766227 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 13 06:50:39.767688 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 13 06:50:39.767721 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 06:50:39.768390 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 13 06:50:39.768437 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 13 06:50:39.769128 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 13 06:50:39.769194 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 13 06:50:39.770419 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 13 06:50:39.771736 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 13 06:50:39.784067 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 13 06:50:39.784216 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 13 06:50:39.787438 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 13 06:50:39.787571 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 13 06:50:39.790932 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 13 06:50:39.791716 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 13 06:50:39.791778 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 13 06:50:39.795830 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 13 06:50:39.797318 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 13 06:50:39.797378 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 06:50:39.801852 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 13 06:50:39.801913 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 13 06:50:39.802314 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 13 06:50:39.802355 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 13 06:50:39.803916 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 06:50:39.806108 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 13 06:50:39.809274 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 13 06:50:39.810263 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 13 06:50:39.810361 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 13 06:50:39.817842 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 13 06:50:39.818165 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 06:50:39.819747 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 13 06:50:39.819907 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 13 06:50:39.821703 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 13 06:50:39.821787 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 06:50:39.822661 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 13 06:50:39.822769 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Oct 13 06:50:39.827129 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 13 06:50:39.827261 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 13 06:50:39.830349 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 13 06:50:39.830411 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 06:50:39.835886 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 13 06:50:39.837439 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 13 06:50:39.837568 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 06:50:39.839121 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 13 06:50:39.839259 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 06:50:39.840442 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 13 06:50:39.840541 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 06:50:39.841875 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 13 06:50:39.841964 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 06:50:39.843196 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 06:50:39.843287 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 06:50:39.859491 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 13 06:50:39.862555 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 13 06:50:39.864401 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 13 06:50:39.864522 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 13 06:50:39.866183 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 13 06:50:39.867406 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 13 06:50:39.895087 systemd[1]: Switching root. Oct 13 06:50:39.943457 systemd-journald[343]: Journal stopped Oct 13 06:50:41.027581 systemd-journald[343]: Received SIGTERM from PID 1 (systemd). Oct 13 06:50:41.027754 kernel: SELinux: policy capability network_peer_controls=1 Oct 13 06:50:41.027790 kernel: SELinux: policy capability open_perms=1 Oct 13 06:50:41.027805 kernel: SELinux: policy capability extended_socket_class=1 Oct 13 06:50:41.027824 kernel: SELinux: policy capability always_check_network=0 Oct 13 06:50:41.027848 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 13 06:50:41.027868 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 13 06:50:41.027887 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 13 06:50:41.027901 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 13 06:50:41.027926 kernel: SELinux: policy capability userspace_initial_context=0 Oct 13 06:50:41.027940 kernel: audit: type=1403 audit(1760338240.118:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 13 06:50:41.027972 systemd[1]: Successfully loaded SELinux policy in 80.996ms. Oct 13 06:50:41.027996 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.450ms. 
Oct 13 06:50:41.028013 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 13 06:50:41.028033 systemd[1]: Detected virtualization kvm. Oct 13 06:50:41.028051 systemd[1]: Detected architecture x86-64. Oct 13 06:50:41.028072 systemd[1]: Detected first boot. Oct 13 06:50:41.028092 systemd[1]: Hostname set to <srv-ntuey.gb1.brightbox.com>. Oct 13 06:50:41.028107 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 13 06:50:41.028122 zram_generator::config[1183]: No configuration found. Oct 13 06:50:41.029994 kernel: Guest personality initialized and is inactive Oct 13 06:50:41.030020 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 13 06:50:41.030034 kernel: Initialized host personality Oct 13 06:50:41.030048 kernel: NET: Registered PF_VSOCK protocol family Oct 13 06:50:41.030064 systemd[1]: Populated /etc with preset unit settings. Oct 13 06:50:41.030081 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 13 06:50:41.030102 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 13 06:50:41.030124 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 13 06:50:41.030159 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 13 06:50:41.030181 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 13 06:50:41.030210 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 13 06:50:41.030232 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 13 06:50:41.030266 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 13 06:50:41.030288 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 13 06:50:41.030304 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 13 06:50:41.030329 systemd[1]: Created slice user.slice - User and Session Slice. Oct 13 06:50:41.030350 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 06:50:41.030367 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 06:50:41.030384 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 13 06:50:41.030404 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 13 06:50:41.030422 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 13 06:50:41.030437 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 06:50:41.030453 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 13 06:50:41.030474 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 06:50:41.030499 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 13 06:50:41.030515 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 13 06:50:41.030533 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 13 06:50:41.030548 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 13 06:50:41.030564 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 13 06:50:41.030582 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 06:50:41.030598 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 06:50:41.030622 systemd[1]: Reached target slices.target - Slice Units. Oct 13 06:50:41.030636 systemd[1]: Reached target swap.target - Swaps. Oct 13 06:50:41.030663 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 13 06:50:41.030677 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 13 06:50:41.030693 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 13 06:50:41.030717 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 13 06:50:41.030733 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 13 06:50:41.030759 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 06:50:41.030775 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 13 06:50:41.030790 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 13 06:50:41.030805 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 13 06:50:41.030821 systemd[1]: Mounting media.mount - External Media Directory... Oct 13 06:50:41.030835 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 06:50:41.030852 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 13 06:50:41.030873 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 13 06:50:41.030889 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 13 06:50:41.030908 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 13 06:50:41.030923 systemd[1]: Reached target machines.target - Containers. Oct 13 06:50:41.030938 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 13 06:50:41.030959 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 06:50:41.030976 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 13 06:50:41.031001 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 13 06:50:41.031016 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 06:50:41.031031 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 06:50:41.031049 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 06:50:41.031069 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 13 06:50:41.031084 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 06:50:41.031099 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 13 06:50:41.031124 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Oct 13 06:50:41.031153 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 13 06:50:41.031172 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 13 06:50:41.031187 systemd[1]: Stopped systemd-fsck-usr.service. Oct 13 06:50:41.031204 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 06:50:41.031221 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 06:50:41.031242 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 13 06:50:41.031257 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 13 06:50:41.031276 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 13 06:50:41.031293 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 13 06:50:41.031313 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 06:50:41.031334 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 06:50:41.031349 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 13 06:50:41.031364 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 13 06:50:41.031379 systemd[1]: Mounted media.mount - External Media Directory. Oct 13 06:50:41.031394 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 13 06:50:41.031411 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 13 06:50:41.031433 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 13 06:50:41.031451 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 06:50:41.031499 systemd-journald[1266]: Collecting audit messages is disabled. Oct 13 06:50:41.031537 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 13 06:50:41.031553 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 13 06:50:41.031574 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 06:50:41.031591 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 06:50:41.031613 systemd-journald[1266]: Journal started Oct 13 06:50:41.031654 systemd-journald[1266]: Runtime Journal (/run/log/journal/30136c07b1534bcb831a27871b8f718b) is 4.7M, max 38.2M, 33.4M free. Oct 13 06:50:40.759728 systemd[1]: Queued start job for default target multi-user.target. Oct 13 06:50:40.777089 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 13 06:50:40.778134 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 13 06:50:41.034238 systemd[1]: Started systemd-journald.service - Journal Service. Oct 13 06:50:41.035894 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 06:50:41.037206 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 06:50:41.037949 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 06:50:41.038171 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 06:50:41.039475 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Oct 13 06:50:41.042102 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 06:50:41.048643 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 13 06:50:41.064078 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 13 06:50:41.067058 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Oct 13 06:50:41.069320 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 13 06:50:41.070585 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 13 06:50:41.070621 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 06:50:41.072092 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 13 06:50:41.073677 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 06:50:41.078295 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 13 06:50:41.084384 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 13 06:50:41.087856 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 06:50:41.093644 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 13 06:50:41.094188 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 06:50:41.110839 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 06:50:41.117376 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 13 06:50:41.121706 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 13 06:50:41.129282 kernel: ACPI: bus type drm_connector registered Oct 13 06:50:41.129353 kernel: fuse: init (API version 7.41) Oct 13 06:50:41.131322 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 13 06:50:41.132841 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 13 06:50:41.133606 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 13 06:50:41.136387 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 13 06:50:41.136586 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 13 06:50:41.139003 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 13 06:50:41.145370 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 13 06:50:41.149717 systemd-journald[1266]: Time spent on flushing to /var/log/journal/30136c07b1534bcb831a27871b8f718b is 73.647ms for 1164 entries. Oct 13 06:50:41.149717 systemd-journald[1266]: System Journal (/var/log/journal/30136c07b1534bcb831a27871b8f718b) is 8M, max 588.1M, 580.1M free. Oct 13 06:50:41.256527 systemd-journald[1266]: Received client request to flush runtime journal. 
Oct 13 06:50:41.256598 kernel: loop1: detected capacity change from 0 to 8 Oct 13 06:50:41.256641 kernel: loop2: detected capacity change from 0 to 128048 Oct 13 06:50:41.256664 kernel: loop3: detected capacity change from 0 to 229808 Oct 13 06:50:41.152970 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 06:50:41.161392 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 06:50:41.198670 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 06:50:41.205663 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 13 06:50:41.209964 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 06:50:41.228971 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 13 06:50:41.232728 systemd-tmpfiles[1311]: ACLs are not supported, ignoring. Oct 13 06:50:41.232743 systemd-tmpfiles[1311]: ACLs are not supported, ignoring. Oct 13 06:50:41.241789 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 06:50:41.246908 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 13 06:50:41.258258 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 13 06:50:41.292250 kernel: loop4: detected capacity change from 0 to 110984 Oct 13 06:50:41.300250 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 13 06:50:41.306287 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 06:50:41.311388 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 13 06:50:41.320185 kernel: loop5: detected capacity change from 0 to 8 Oct 13 06:50:41.326198 kernel: loop6: detected capacity change from 0 to 128048 Oct 13 06:50:41.332422 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 13 06:50:41.341167 kernel: loop7: detected capacity change from 0 to 229808 Oct 13 06:50:41.353137 systemd-tmpfiles[1344]: ACLs are not supported, ignoring. Oct 13 06:50:41.354238 systemd-tmpfiles[1344]: ACLs are not supported, ignoring. Oct 13 06:50:41.361180 kernel: loop1: detected capacity change from 0 to 110984 Oct 13 06:50:41.361790 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 06:50:41.373441 (sd-merge)[1345]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-openstack.raw'. Oct 13 06:50:41.385370 (sd-merge)[1345]: Merged extensions into '/usr'. Oct 13 06:50:41.395194 systemd[1]: Reload requested from client PID 1310 ('systemd-sysext') (unit systemd-sysext.service)... Oct 13 06:50:41.395211 systemd[1]: Reloading... Oct 13 06:50:41.538170 zram_generator::config[1381]: No configuration found. Oct 13 06:50:41.566381 systemd-resolved[1343]: Positive Trust Anchors: Oct 13 06:50:41.566403 systemd-resolved[1343]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 06:50:41.566409 systemd-resolved[1343]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 13 06:50:41.566452 systemd-resolved[1343]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 06:50:41.593185 systemd-resolved[1343]: Using system hostname 'srv-ntuey.gb1.brightbox.com'. Oct 13 06:50:41.802797 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 13 06:50:41.802951 systemd[1]: Reloading finished in 407 ms. Oct 13 06:50:41.816160 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 13 06:50:41.827673 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 06:50:41.829410 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 13 06:50:41.834976 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 06:50:41.838584 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 13 06:50:41.849303 systemd[1]: Starting ensure-sysext.service... Oct 13 06:50:41.852415 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 06:50:41.866122 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 13 06:50:41.889927 systemd[1]: Reload requested from client PID 1437 ('systemctl') (unit ensure-sysext.service)... Oct 13 06:50:41.890079 systemd[1]: Reloading... Oct 13 06:50:41.902096 systemd-tmpfiles[1438]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 13 06:50:41.902125 systemd-tmpfiles[1438]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 13 06:50:41.902432 systemd-tmpfiles[1438]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 13 06:50:41.902746 systemd-tmpfiles[1438]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 13 06:50:41.903605 systemd-tmpfiles[1438]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 13 06:50:41.903883 systemd-tmpfiles[1438]: ACLs are not supported, ignoring. Oct 13 06:50:41.903956 systemd-tmpfiles[1438]: ACLs are not supported, ignoring. Oct 13 06:50:41.912629 systemd-tmpfiles[1438]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 06:50:41.912641 systemd-tmpfiles[1438]: Skipping /boot Oct 13 06:50:41.925117 systemd-tmpfiles[1438]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 06:50:41.925130 systemd-tmpfiles[1438]: Skipping /boot Oct 13 06:50:41.974194 zram_generator::config[1469]: No configuration found. Oct 13 06:50:42.201755 systemd[1]: Reloading finished in 311 ms. Oct 13 06:50:42.215201 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 13 06:50:42.216838 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 06:50:42.235311 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Oct 13 06:50:42.237330 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 13 06:50:42.240994 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 13 06:50:42.242590 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 13 06:50:42.247451 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 06:50:42.249472 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 13 06:50:42.259660 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 06:50:42.259882 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 06:50:42.263348 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 06:50:42.265855 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 06:50:42.270536 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 06:50:42.271442 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 06:50:42.271893 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 06:50:42.271998 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 06:50:42.284046 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 06:50:42.285340 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 06:50:42.285577 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 06:50:42.285670 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 06:50:42.285773 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 06:50:42.294341 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 06:50:42.294601 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 06:50:42.311199 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 06:50:42.312351 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 06:50:42.312489 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Oct 13 06:50:42.312653 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 06:50:42.315041 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 06:50:42.315257 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 06:50:42.322693 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 06:50:42.323888 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 06:50:42.329188 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 06:50:42.329474 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 06:50:42.337677 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 06:50:42.337831 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 06:50:42.340008 systemd[1]: Finished ensure-sysext.service. Oct 13 06:50:42.344383 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 13 06:50:42.350677 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 13 06:50:42.351499 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 06:50:42.351767 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 06:50:42.403062 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 13 06:50:42.415651 systemd-udevd[1531]: Using default interface naming scheme 'v257'. Oct 13 06:50:42.423907 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 13 06:50:42.424961 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 13 06:50:42.455406 augenrules[1567]: No rules Oct 13 06:50:42.455099 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 06:50:42.455354 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 06:50:42.460390 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 13 06:50:42.460947 systemd[1]: Reached target time-set.target - System Time Set. Oct 13 06:50:42.485380 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 06:50:42.499402 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 13 06:50:42.618918 systemd-networkd[1575]: lo: Link UP Oct 13 06:50:42.619439 systemd-networkd[1575]: lo: Gained carrier Oct 13 06:50:42.624080 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 13 06:50:42.639217 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 06:50:42.640020 systemd[1]: Reached target network.target - Network. Oct 13 06:50:42.643540 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 13 06:50:42.646532 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 13 06:50:42.693031 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Oct 13 06:50:42.776262 kernel: mousedev: PS/2 mouse device common for all mice Oct 13 06:50:42.781888 systemd-networkd[1575]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 06:50:42.781898 systemd-networkd[1575]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 06:50:42.784499 systemd-networkd[1575]: eth0: Link UP Oct 13 06:50:42.784663 systemd-networkd[1575]: eth0: Gained carrier Oct 13 06:50:42.784682 systemd-networkd[1575]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 13 06:50:42.815343 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Oct 13 06:50:42.833167 kernel: ACPI: button: Power Button [PWRF] Oct 13 06:50:42.838432 systemd-networkd[1575]: eth0: DHCPv4 address 10.244.93.206/30, gateway 10.244.93.205 acquired from 10.244.93.205 Oct 13 06:50:42.841862 systemd-timesyncd[1547]: Network configuration changed, trying to establish connection. Oct 13 06:50:42.852588 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 13 06:50:42.856007 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 13 06:50:42.904181 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 13 06:50:42.905236 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 13 06:50:42.906162 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 13 06:50:42.933281 ldconfig[1529]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 13 06:50:42.938353 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 13 06:50:42.941837 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 13 06:50:42.960955 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 13 06:50:42.961578 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 06:50:42.962091 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 13 06:50:42.962577 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 13 06:50:42.963003 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 13 06:50:42.963982 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 13 06:50:42.964885 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 13 06:50:42.965497 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 13 06:50:42.966220 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 13 06:50:42.966259 systemd[1]: Reached target paths.target - Path Units. Oct 13 06:50:42.966796 systemd[1]: Reached target timers.target - Timer Units. Oct 13 06:50:42.968735 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 13 06:50:42.971729 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 13 06:50:42.975873 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
Oct 13 06:50:42.976718 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 13 06:50:42.977610 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 13 06:50:42.986742 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 13 06:50:42.987474 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 13 06:50:42.988856 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 13 06:50:42.990777 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 06:50:42.991204 systemd[1]: Reached target basic.target - Basic System. Oct 13 06:50:42.991606 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 13 06:50:42.991631 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 13 06:50:42.994312 systemd[1]: Starting containerd.service - containerd container runtime... Oct 13 06:50:42.996012 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 13 06:50:42.999356 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 13 06:50:43.003343 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 13 06:50:43.007343 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 13 06:50:43.015346 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 13 06:50:43.015826 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 13 06:50:43.019883 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 13 06:50:43.027959 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 13 06:50:43.032314 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 13 06:50:43.034559 jq[1630]: false Oct 13 06:50:43.036367 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 13 06:50:43.039008 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 13 06:50:43.044187 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Oct 13 06:50:43.048891 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 13 06:50:43.050214 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 13 06:50:43.050757 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 13 06:50:43.053576 systemd[1]: Starting update-engine.service - Update Engine... Oct 13 06:50:43.055904 google_oslogin_nss_cache[1632]: oslogin_cache_refresh[1632]: Refreshing passwd entry cache Oct 13 06:50:43.055564 oslogin_cache_refresh[1632]: Refreshing passwd entry cache Oct 13 06:50:43.061859 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 13 06:50:43.074299 oslogin_cache_refresh[1632]: Failure getting users, quitting Oct 13 06:50:43.075337 google_oslogin_nss_cache[1632]: oslogin_cache_refresh[1632]: Failure getting users, quitting Oct 13 06:50:43.075337 google_oslogin_nss_cache[1632]: oslogin_cache_refresh[1632]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Oct 13 06:50:43.075337 google_oslogin_nss_cache[1632]: oslogin_cache_refresh[1632]: Refreshing group entry cache Oct 13 06:50:43.069199 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 13 06:50:43.074322 oslogin_cache_refresh[1632]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 13 06:50:43.069958 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 13 06:50:43.075279 oslogin_cache_refresh[1632]: Refreshing group entry cache Oct 13 06:50:43.070165 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 13 06:50:43.098176 google_oslogin_nss_cache[1632]: oslogin_cache_refresh[1632]: Failure getting groups, quitting Oct 13 06:50:43.098176 google_oslogin_nss_cache[1632]: oslogin_cache_refresh[1632]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 13 06:50:43.094708 oslogin_cache_refresh[1632]: Failure getting groups, quitting Oct 13 06:50:43.094721 oslogin_cache_refresh[1632]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 13 06:50:43.100504 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 13 06:50:43.100905 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 13 06:50:43.102829 dbus-daemon[1628]: [system] SELinux support is enabled Oct 13 06:50:43.103009 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 13 06:50:43.104995 dbus-daemon[1628]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1575 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Oct 13 06:50:43.112299 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 13 06:50:43.113204 dbus-daemon[1628]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 13 06:50:43.112337 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 13 06:50:43.113868 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 13 06:50:43.113888 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 13 06:50:43.132126 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Oct 13 06:50:43.132933 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 13 06:50:43.142119 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 13 06:50:43.150074 jq[1640]: true Oct 13 06:50:43.154168 extend-filesystems[1631]: Found /dev/vda6 Oct 13 06:50:43.170703 extend-filesystems[1631]: Found /dev/vda9 Oct 13 06:50:43.173457 (ntainerd)[1656]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 13 06:50:43.196651 extend-filesystems[1631]: Checking size of /dev/vda9 Oct 13 06:50:43.200698 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Oct 13 06:50:43.203155 tar[1650]: linux-amd64/LICENSE Oct 13 06:50:43.203155 tar[1650]: linux-amd64/helm Oct 13 06:50:43.220762 systemd[1]: motdgen.service: Deactivated successfully. Oct 13 06:50:43.221014 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 13 06:50:43.227855 extend-filesystems[1631]: Resized partition /dev/vda9 Oct 13 06:50:43.238942 extend-filesystems[1680]: resize2fs 1.47.3 (8-Jul-2025) Oct 13 06:50:43.239962 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 13 06:50:43.241681 jq[1662]: true Oct 13 06:50:43.252160 update_engine[1639]: I20251013 06:50:43.250274 1639 main.cc:92] Flatcar Update Engine starting Oct 13 06:50:43.262119 systemd[1]: Started update-engine.service - Update Engine. Oct 13 06:50:43.270446 update_engine[1639]: I20251013 06:50:43.270179 1639 update_check_scheduler.cc:74] Next update check in 9m38s Oct 13 06:50:43.275884 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 13 06:50:43.294165 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 14138363 blocks Oct 13 06:50:43.466658 bash[1698]: Updated "/home/core/.ssh/authorized_keys" Oct 13 06:50:43.474288 kernel: EXT4-fs (vda9): resized filesystem to 14138363 Oct 13 06:50:43.484349 extend-filesystems[1680]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 13 06:50:43.484349 extend-filesystems[1680]: old_desc_blocks = 1, new_desc_blocks = 7 Oct 13 06:50:43.484349 extend-filesystems[1680]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long. Oct 13 06:50:43.489056 extend-filesystems[1631]: Resized filesystem in /dev/vda9 Oct 13 06:50:43.489271 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 13 06:50:43.496571 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 13 06:50:43.496876 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 13 06:50:43.504342 systemd[1]: Starting sshkeys.service... Oct 13 06:50:43.555810 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Oct 13 06:50:43.559991 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Oct 13 06:50:43.609648 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Oct 13 06:50:43.709928 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 06:50:43.855266 containerd[1656]: time="2025-10-13T06:50:43Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 13 06:50:43.858515 containerd[1656]: time="2025-10-13T06:50:43.858474374Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 13 06:50:43.883297 containerd[1656]: time="2025-10-13T06:50:43.883247105Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="17.149µs" Oct 13 06:50:43.883560 containerd[1656]: time="2025-10-13T06:50:43.883539224Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 13 06:50:43.883670 containerd[1656]: time="2025-10-13T06:50:43.883656406Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 13 06:50:43.883756 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Oct 13 06:50:43.886446 containerd[1656]: time="2025-10-13T06:50:43.886337936Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 13 06:50:43.887487 containerd[1656]: time="2025-10-13T06:50:43.887006034Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 13 06:50:43.887487 containerd[1656]: time="2025-10-13T06:50:43.887056108Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 06:50:43.887487 containerd[1656]: time="2025-10-13T06:50:43.887160375Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 06:50:43.887487 containerd[1656]: time="2025-10-13T06:50:43.887181739Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 06:50:43.887819 containerd[1656]: time="2025-10-13T06:50:43.887780744Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 06:50:43.888297 dbus-daemon[1628]: [system] Successfully activated service 'org.freedesktop.hostname1' Oct 13 06:50:43.888553 containerd[1656]: time="2025-10-13T06:50:43.888532483Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 06:50:43.889671 containerd[1656]: time="2025-10-13T06:50:43.889646665Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 06:50:43.890187 containerd[1656]: time="2025-10-13T06:50:43.889742176Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 13 06:50:43.890187 containerd[1656]: time="2025-10-13T06:50:43.889851263Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 13 06:50:43.891503 containerd[1656]: time="2025-10-13T06:50:43.890919135Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 06:50:43.891503 containerd[1656]: time="2025-10-13T06:50:43.890958281Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 06:50:43.891503 containerd[1656]: time="2025-10-13T06:50:43.890969375Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 13 06:50:43.891503 containerd[1656]: time="2025-10-13T06:50:43.891011697Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 13 06:50:43.891503 containerd[1656]: time="2025-10-13T06:50:43.891283300Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 13 06:50:43.891503 containerd[1656]: time="2025-10-13T06:50:43.891346263Z" level=info msg="metadata content store policy set" policy=shared Oct 13 06:50:43.892381 dbus-daemon[1628]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1658 
comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Oct 13 06:50:43.897298 containerd[1656]: time="2025-10-13T06:50:43.897058850Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 13 06:50:43.897298 containerd[1656]: time="2025-10-13T06:50:43.897118510Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 13 06:50:43.897298 containerd[1656]: time="2025-10-13T06:50:43.897132989Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 13 06:50:43.897298 containerd[1656]: time="2025-10-13T06:50:43.897161553Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 13 06:50:43.897298 containerd[1656]: time="2025-10-13T06:50:43.897175751Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 13 06:50:43.897298 containerd[1656]: time="2025-10-13T06:50:43.897198970Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 13 06:50:43.897298 containerd[1656]: time="2025-10-13T06:50:43.897218054Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 13 06:50:43.897298 containerd[1656]: time="2025-10-13T06:50:43.897236972Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 13 06:50:43.897298 containerd[1656]: time="2025-10-13T06:50:43.897251535Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 13 06:50:43.897298 containerd[1656]: time="2025-10-13T06:50:43.897262612Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 13 06:50:43.897298 containerd[1656]: time="2025-10-13T06:50:43.897273000Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 13 06:50:43.897763 containerd[1656]: time="2025-10-13T06:50:43.897599797Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 13 06:50:43.897763 containerd[1656]: time="2025-10-13T06:50:43.897730758Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 13 06:50:43.900325 containerd[1656]: time="2025-10-13T06:50:43.899331269Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 13 06:50:43.900325 containerd[1656]: time="2025-10-13T06:50:43.899373072Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 13 06:50:43.900325 containerd[1656]: time="2025-10-13T06:50:43.899392327Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 13 06:50:43.900325 containerd[1656]: time="2025-10-13T06:50:43.899404197Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 13 06:50:43.900325 containerd[1656]: time="2025-10-13T06:50:43.899414602Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 13 06:50:43.900325 containerd[1656]: time="2025-10-13T06:50:43.899425458Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 13 06:50:43.900325 containerd[1656]: time="2025-10-13T06:50:43.899437135Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 13 06:50:43.900325 containerd[1656]: time="2025-10-13T06:50:43.899449212Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 13 06:50:43.900325 containerd[1656]: time="2025-10-13T06:50:43.899469834Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 13 06:50:43.900325 containerd[1656]: time="2025-10-13T06:50:43.899483812Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 13 06:50:43.900325 containerd[1656]: time="2025-10-13T06:50:43.899606333Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 13 06:50:43.900325 containerd[1656]: time="2025-10-13T06:50:43.899632518Z" level=info msg="Start snapshots syncer" Oct 13 06:50:43.900325 containerd[1656]: time="2025-10-13T06:50:43.899663641Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 13 06:50:43.900681 containerd[1656]: time="2025-10-13T06:50:43.899961871Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 13 06:50:43.900681 containerd[1656]: time="2025-10-13T06:50:43.900020083Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 13 06:50:43.903334 locksmithd[1683]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 13 06:50:43.906005 systemd[1]: Starting polkit.service - Authorization Manager... 
Oct 13 06:50:43.909284 containerd[1656]: time="2025-10-13T06:50:43.906060368Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 13 06:50:43.909284 containerd[1656]: time="2025-10-13T06:50:43.906208646Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 13 06:50:43.909284 containerd[1656]: time="2025-10-13T06:50:43.906235839Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 13 06:50:43.909284 containerd[1656]: time="2025-10-13T06:50:43.906249422Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 13 06:50:43.909284 containerd[1656]: time="2025-10-13T06:50:43.906263748Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 13 06:50:43.909284 containerd[1656]: time="2025-10-13T06:50:43.906287316Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 13 06:50:43.909284 containerd[1656]: time="2025-10-13T06:50:43.906300448Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 13 06:50:43.909284 containerd[1656]: time="2025-10-13T06:50:43.906312516Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 13 06:50:43.909284 containerd[1656]: time="2025-10-13T06:50:43.906341695Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 13 06:50:43.909284 containerd[1656]: time="2025-10-13T06:50:43.906369522Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 13 06:50:43.909284 containerd[1656]: time="2025-10-13T06:50:43.906384094Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 13 06:50:43.909284 containerd[1656]: time="2025-10-13T06:50:43.906426907Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 06:50:43.909284 containerd[1656]: time="2025-10-13T06:50:43.906442910Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 06:50:43.909284 containerd[1656]: time="2025-10-13T06:50:43.906452538Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 06:50:43.909630 containerd[1656]: time="2025-10-13T06:50:43.906462345Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 06:50:43.909630 containerd[1656]: time="2025-10-13T06:50:43.906470613Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 13 06:50:43.909630 containerd[1656]: time="2025-10-13T06:50:43.906479989Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 13 06:50:43.909630 containerd[1656]: time="2025-10-13T06:50:43.906492271Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 13 06:50:43.909630 containerd[1656]: time="2025-10-13T06:50:43.906510152Z" level=info msg="runtime interface created" Oct 13 06:50:43.909630 containerd[1656]: time="2025-10-13T06:50:43.906517675Z" level=info 
msg="created NRI interface" Oct 13 06:50:43.909630 containerd[1656]: time="2025-10-13T06:50:43.906526557Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 13 06:50:43.909630 containerd[1656]: time="2025-10-13T06:50:43.906542236Z" level=info msg="Connect containerd service" Oct 13 06:50:43.909630 containerd[1656]: time="2025-10-13T06:50:43.906573271Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 13 06:50:43.910834 containerd[1656]: time="2025-10-13T06:50:43.910413030Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 06:50:43.916824 systemd-logind[1638]: Watching system buttons on /dev/input/event3 (Power Button) Oct 13 06:50:43.917111 systemd-logind[1638]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 13 06:50:43.917528 systemd-logind[1638]: New seat seat0. Oct 13 06:50:43.918564 systemd[1]: Started systemd-logind.service - User Login Management. Oct 13 06:50:43.974378 systemd-networkd[1575]: eth0: Gained IPv6LL Oct 13 06:50:43.975042 systemd-timesyncd[1547]: Network configuration changed, trying to establish connection. Oct 13 06:50:43.980198 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 13 06:50:43.982095 systemd[1]: Reached target network-online.target - Network is Online. Oct 13 06:50:43.987888 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 06:50:43.991296 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 13 06:50:44.013880 sshd_keygen[1644]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 13 06:50:44.120249 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 13 06:50:44.147254 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 13 06:50:44.151081 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 13 06:50:44.160537 systemd[1]: Started sshd@0-10.244.93.206:22-139.178.68.195:42854.service - OpenSSH per-connection server daemon (139.178.68.195:42854). Oct 13 06:50:44.167195 polkitd[1726]: Started polkitd version 126 Oct 13 06:50:44.178077 polkitd[1726]: Loading rules from directory /etc/polkit-1/rules.d Oct 13 06:50:44.179038 polkitd[1726]: Loading rules from directory /run/polkit-1/rules.d Oct 13 06:50:44.179102 polkitd[1726]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Oct 13 06:50:44.179456 polkitd[1726]: Loading rules from directory /usr/local/share/polkit-1/rules.d Oct 13 06:50:44.179485 polkitd[1726]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Oct 13 06:50:44.179526 polkitd[1726]: Loading rules from directory /usr/share/polkit-1/rules.d Oct 13 06:50:44.183475 polkitd[1726]: Finished loading, compiling and executing 2 rules Oct 13 06:50:44.185421 systemd[1]: Started polkit.service - Authorization Manager. Oct 13 06:50:44.187414 dbus-daemon[1628]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Oct 13 06:50:44.188227 polkitd[1726]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Oct 13 06:50:44.205090 systemd[1]: issuegen.service: Deactivated successfully. 
Oct 13 06:50:44.205406 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 13 06:50:44.242944 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 13 06:50:44.264975 systemd-hostnamed[1658]: Hostname set to (static) Oct 13 06:50:44.269167 containerd[1656]: time="2025-10-13T06:50:44.268097984Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 13 06:50:44.269167 containerd[1656]: time="2025-10-13T06:50:44.268201658Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 13 06:50:44.269167 containerd[1656]: time="2025-10-13T06:50:44.268245963Z" level=info msg="Start subscribing containerd event" Oct 13 06:50:44.269167 containerd[1656]: time="2025-10-13T06:50:44.268313027Z" level=info msg="Start recovering state" Oct 13 06:50:44.269167 containerd[1656]: time="2025-10-13T06:50:44.268511787Z" level=info msg="Start event monitor" Oct 13 06:50:44.269167 containerd[1656]: time="2025-10-13T06:50:44.268527365Z" level=info msg="Start cni network conf syncer for default" Oct 13 06:50:44.269167 containerd[1656]: time="2025-10-13T06:50:44.268537251Z" level=info msg="Start streaming server" Oct 13 06:50:44.269167 containerd[1656]: time="2025-10-13T06:50:44.268554770Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 13 06:50:44.269167 containerd[1656]: time="2025-10-13T06:50:44.268562985Z" level=info msg="runtime interface starting up..." Oct 13 06:50:44.269167 containerd[1656]: time="2025-10-13T06:50:44.268571837Z" level=info msg="starting plugins..." Oct 13 06:50:44.269167 containerd[1656]: time="2025-10-13T06:50:44.268599251Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 13 06:50:44.269167 containerd[1656]: time="2025-10-13T06:50:44.268738983Z" level=info msg="containerd successfully booted in 0.416601s" Oct 13 06:50:44.269306 systemd[1]: Started containerd.service - containerd container runtime. Oct 13 06:50:44.291661 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 13 06:50:44.299318 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 13 06:50:44.302872 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 13 06:50:44.303629 systemd[1]: Reached target getty.target - Login Prompts. Oct 13 06:50:44.454062 tar[1650]: linux-amd64/README.md Oct 13 06:50:44.470358 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 13 06:50:44.990612 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Oct 13 06:50:44.999373 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Oct 13 06:50:45.080813 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 06:50:45.094412 (kubelet)[1789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 06:50:45.186204 sshd[1757]: Accepted publickey for core from 139.178.68.195 port 42854 ssh2: RSA SHA256:fRul8eAVig/XyA1eZza2GOOLGaWCYkSVMLwjSf7WhTM Oct 13 06:50:45.191330 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 06:50:45.211177 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 13 06:50:45.215311 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 13 06:50:45.224018 systemd-logind[1638]: New session 1 of user core. Oct 13 06:50:45.246021 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Oct 13 06:50:45.252602 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 13 06:50:45.268453 (systemd)[1796]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 13 06:50:45.275407 systemd-logind[1638]: New session c1 of user core. Oct 13 06:50:45.411352 systemd[1796]: Queued start job for default target default.target. Oct 13 06:50:45.416444 systemd[1796]: Created slice app.slice - User Application Slice. Oct 13 06:50:45.416474 systemd[1796]: Reached target paths.target - Paths. Oct 13 06:50:45.416790 systemd[1796]: Reached target timers.target - Timers. Oct 13 06:50:45.419515 systemd[1796]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 13 06:50:45.436201 systemd[1796]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 13 06:50:45.436314 systemd[1796]: Reached target sockets.target - Sockets. Oct 13 06:50:45.436353 systemd[1796]: Reached target basic.target - Basic System. Oct 13 06:50:45.436389 systemd[1796]: Reached target default.target - Main User Target. Oct 13 06:50:45.436432 systemd[1796]: Startup finished in 147ms. Oct 13 06:50:45.436613 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 13 06:50:45.451463 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 13 06:50:45.485063 systemd-timesyncd[1547]: Network configuration changed, trying to establish connection. Oct 13 06:50:45.487824 systemd-networkd[1575]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:1773:24:19ff:fef4:5dce/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:1773:24:19ff:fef4:5dce/64 assigned by NDisc. Oct 13 06:50:45.487840 systemd-networkd[1575]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Oct 13 06:50:45.660976 kubelet[1789]: E1013 06:50:45.660804 1789 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 06:50:45.664572 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 06:50:45.664955 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 06:50:45.665921 systemd[1]: kubelet.service: Consumed 1.107s CPU time, 267.7M memory peak. Oct 13 06:50:46.109181 systemd[1]: Started sshd@1-10.244.93.206:22-139.178.68.195:42862.service - OpenSSH per-connection server daemon (139.178.68.195:42862). Oct 13 06:50:47.011307 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Oct 13 06:50:47.018197 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Oct 13 06:50:47.047740 sshd[1811]: Accepted publickey for core from 139.178.68.195 port 42862 ssh2: RSA SHA256:fRul8eAVig/XyA1eZza2GOOLGaWCYkSVMLwjSf7WhTM Oct 13 06:50:47.050640 sshd-session[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 06:50:47.061215 systemd-logind[1638]: New session 2 of user core. Oct 13 06:50:47.068381 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 13 06:50:47.304474 systemd-timesyncd[1547]: Network configuration changed, trying to establish connection. 
Oct 13 06:50:47.686378 sshd[1816]: Connection closed by 139.178.68.195 port 42862 Oct 13 06:50:47.689787 sshd-session[1811]: pam_unix(sshd:session): session closed for user core Oct 13 06:50:47.699278 systemd-logind[1638]: Session 2 logged out. Waiting for processes to exit. Oct 13 06:50:47.700271 systemd[1]: sshd@1-10.244.93.206:22-139.178.68.195:42862.service: Deactivated successfully. Oct 13 06:50:47.702826 systemd[1]: session-2.scope: Deactivated successfully. Oct 13 06:50:47.706496 systemd-logind[1638]: Removed session 2. Oct 13 06:50:47.850055 systemd[1]: Started sshd@2-10.244.93.206:22-139.178.68.195:43702.service - OpenSSH per-connection server daemon (139.178.68.195:43702). Oct 13 06:50:48.803057 sshd[1822]: Accepted publickey for core from 139.178.68.195 port 43702 ssh2: RSA SHA256:fRul8eAVig/XyA1eZza2GOOLGaWCYkSVMLwjSf7WhTM Oct 13 06:50:48.806107 sshd-session[1822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 06:50:48.815814 systemd-logind[1638]: New session 3 of user core. Oct 13 06:50:48.827461 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 13 06:50:49.435272 sshd[1825]: Connection closed by 139.178.68.195 port 43702 Oct 13 06:50:49.434807 sshd-session[1822]: pam_unix(sshd:session): session closed for user core Oct 13 06:50:49.444925 systemd[1]: sshd@2-10.244.93.206:22-139.178.68.195:43702.service: Deactivated successfully. Oct 13 06:50:49.448568 systemd[1]: session-3.scope: Deactivated successfully. Oct 13 06:50:49.450617 systemd-logind[1638]: Session 3 logged out. Waiting for processes to exit. Oct 13 06:50:49.452514 systemd-logind[1638]: Removed session 3. Oct 13 06:50:49.708013 login[1778]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 13 06:50:49.715595 systemd-logind[1638]: New session 4 of user core. Oct 13 06:50:49.715859 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 13 06:50:49.738256 login[1777]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 13 06:50:49.743588 systemd-logind[1638]: New session 5 of user core. Oct 13 06:50:49.749368 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 13 06:50:51.034882 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Oct 13 06:50:51.038200 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Oct 13 06:50:51.051349 coreos-metadata[1713]: Oct 13 06:50:51.051 WARN failed to locate config-drive, using the metadata service API instead Oct 13 06:50:51.053268 coreos-metadata[1627]: Oct 13 06:50:51.053 WARN failed to locate config-drive, using the metadata service API instead Oct 13 06:50:51.079026 coreos-metadata[1713]: Oct 13 06:50:51.078 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Oct 13 06:50:51.081052 coreos-metadata[1627]: Oct 13 06:50:51.080 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Oct 13 06:50:51.089401 coreos-metadata[1627]: Oct 13 06:50:51.089 INFO Fetch failed with 404: resource not found Oct 13 06:50:51.089401 coreos-metadata[1627]: Oct 13 06:50:51.089 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Oct 13 06:50:51.090321 coreos-metadata[1627]: Oct 13 06:50:51.090 INFO Fetch successful Oct 13 06:50:51.090432 coreos-metadata[1627]: Oct 13 06:50:51.090 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Oct 13 06:50:51.106427 coreos-metadata[1713]: Oct 13 06:50:51.106 INFO Fetch successful Oct 13 06:50:51.106819 coreos-metadata[1713]: Oct 13 06:50:51.106 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Oct 13 06:50:51.109087 coreos-metadata[1627]: Oct 13 06:50:51.109 INFO Fetch successful Oct 13 06:50:51.109087 coreos-metadata[1627]: Oct 13 06:50:51.109 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Oct 13 06:50:51.123179 coreos-metadata[1627]: Oct 13 06:50:51.123 INFO Fetch successful Oct 13 06:50:51.123179 coreos-metadata[1627]: Oct 13 06:50:51.123 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Oct 13 06:50:51.135808 coreos-metadata[1627]: Oct 13 06:50:51.135 INFO Fetch successful Oct 13 06:50:51.135808 coreos-metadata[1627]: Oct 13 06:50:51.135 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Oct 13 06:50:51.139059 coreos-metadata[1713]: Oct 13 06:50:51.138 INFO Fetch successful Oct 13 06:50:51.141622 unknown[1713]: wrote ssh authorized keys file for user: core Oct 13 06:50:51.150555 coreos-metadata[1627]: Oct 13 06:50:51.150 INFO Fetch successful Oct 13 06:50:51.168236 update-ssh-keys[1860]: Updated "/home/core/.ssh/authorized_keys" Oct 13 06:50:51.170732 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Oct 13 06:50:51.176199 systemd[1]: Finished sshkeys.service. Oct 13 06:50:51.186438 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 13 06:50:51.187003 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 13 06:50:51.187270 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 13 06:50:51.187440 systemd[1]: Startup finished in 2.813s (kernel) + 13.120s (initrd) + 11.147s (userspace) = 27.081s. Oct 13 06:50:55.916769 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 13 06:50:55.920926 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 06:50:56.122497 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 13 06:50:56.132945 (kubelet)[1877]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 06:50:56.189386 kubelet[1877]: E1013 06:50:56.189241 1877 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 06:50:56.195338 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 06:50:56.195706 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 06:50:56.196552 systemd[1]: kubelet.service: Consumed 226ms CPU time, 109.4M memory peak. Oct 13 06:50:59.604199 systemd[1]: Started sshd@3-10.244.93.206:22-139.178.68.195:51590.service - OpenSSH per-connection server daemon (139.178.68.195:51590). Oct 13 06:51:00.528296 sshd[1885]: Accepted publickey for core from 139.178.68.195 port 51590 ssh2: RSA SHA256:fRul8eAVig/XyA1eZza2GOOLGaWCYkSVMLwjSf7WhTM Oct 13 06:51:00.531450 sshd-session[1885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 06:51:00.543233 systemd-logind[1638]: New session 6 of user core. Oct 13 06:51:00.553323 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 13 06:51:01.161757 sshd[1888]: Connection closed by 139.178.68.195 port 51590 Oct 13 06:51:01.163236 sshd-session[1885]: pam_unix(sshd:session): session closed for user core Oct 13 06:51:01.173386 systemd-logind[1638]: Session 6 logged out. Waiting for processes to exit. Oct 13 06:51:01.175014 systemd[1]: sshd@3-10.244.93.206:22-139.178.68.195:51590.service: Deactivated successfully. Oct 13 06:51:01.177685 systemd[1]: session-6.scope: Deactivated successfully. Oct 13 06:51:01.179984 systemd-logind[1638]: Removed session 6. Oct 13 06:51:01.328476 systemd[1]: Started sshd@4-10.244.93.206:22-139.178.68.195:51594.service - OpenSSH per-connection server daemon (139.178.68.195:51594). Oct 13 06:51:02.264446 sshd[1894]: Accepted publickey for core from 139.178.68.195 port 51594 ssh2: RSA SHA256:fRul8eAVig/XyA1eZza2GOOLGaWCYkSVMLwjSf7WhTM Oct 13 06:51:02.266545 sshd-session[1894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 06:51:02.275220 systemd-logind[1638]: New session 7 of user core. Oct 13 06:51:02.277341 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 13 06:51:02.887190 sshd[1897]: Connection closed by 139.178.68.195 port 51594 Oct 13 06:51:02.886346 sshd-session[1894]: pam_unix(sshd:session): session closed for user core Oct 13 06:51:02.894965 systemd[1]: sshd@4-10.244.93.206:22-139.178.68.195:51594.service: Deactivated successfully. Oct 13 06:51:02.897943 systemd[1]: session-7.scope: Deactivated successfully. Oct 13 06:51:02.900046 systemd-logind[1638]: Session 7 logged out. Waiting for processes to exit. Oct 13 06:51:02.901571 systemd-logind[1638]: Removed session 7. Oct 13 06:51:03.046994 systemd[1]: Started sshd@5-10.244.93.206:22-139.178.68.195:51600.service - OpenSSH per-connection server daemon (139.178.68.195:51600). 
Oct 13 06:51:03.974747 sshd[1903]: Accepted publickey for core from 139.178.68.195 port 51600 ssh2: RSA SHA256:fRul8eAVig/XyA1eZza2GOOLGaWCYkSVMLwjSf7WhTM Oct 13 06:51:03.977875 sshd-session[1903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 06:51:03.991238 systemd-logind[1638]: New session 8 of user core. Oct 13 06:51:03.998475 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 13 06:51:04.611477 sshd[1906]: Connection closed by 139.178.68.195 port 51600 Oct 13 06:51:04.612842 sshd-session[1903]: pam_unix(sshd:session): session closed for user core Oct 13 06:51:04.622225 systemd[1]: sshd@5-10.244.93.206:22-139.178.68.195:51600.service: Deactivated successfully. Oct 13 06:51:04.626083 systemd[1]: session-8.scope: Deactivated successfully. Oct 13 06:51:04.627517 systemd-logind[1638]: Session 8 logged out. Waiting for processes to exit. Oct 13 06:51:04.628916 systemd-logind[1638]: Removed session 8. Oct 13 06:51:04.779318 systemd[1]: Started sshd@6-10.244.93.206:22-139.178.68.195:51610.service - OpenSSH per-connection server daemon (139.178.68.195:51610). Oct 13 06:51:05.707356 sshd[1912]: Accepted publickey for core from 139.178.68.195 port 51610 ssh2: RSA SHA256:fRul8eAVig/XyA1eZza2GOOLGaWCYkSVMLwjSf7WhTM Oct 13 06:51:05.710360 sshd-session[1912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 06:51:05.721585 systemd-logind[1638]: New session 9 of user core. Oct 13 06:51:05.738471 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 13 06:51:06.212497 sudo[1916]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 13 06:51:06.212775 sudo[1916]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 06:51:06.214055 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 13 06:51:06.218675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 06:51:06.233699 sudo[1916]: pam_unix(sudo:session): session closed for user root Oct 13 06:51:06.380172 sshd[1915]: Connection closed by 139.178.68.195 port 51610 Oct 13 06:51:06.381320 sshd-session[1912]: pam_unix(sshd:session): session closed for user core Oct 13 06:51:06.385738 systemd-logind[1638]: Session 9 logged out. Waiting for processes to exit. Oct 13 06:51:06.386232 systemd[1]: sshd@6-10.244.93.206:22-139.178.68.195:51610.service: Deactivated successfully. Oct 13 06:51:06.389400 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 06:51:06.389822 systemd[1]: session-9.scope: Deactivated successfully. Oct 13 06:51:06.392725 systemd-logind[1638]: Removed session 9. Oct 13 06:51:06.401405 (kubelet)[1926]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 06:51:06.442740 kubelet[1926]: E1013 06:51:06.442682 1926 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 06:51:06.445524 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 06:51:06.445826 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 06:51:06.446554 systemd[1]: kubelet.service: Consumed 188ms CPU time, 110.7M memory peak. 
Oct 13 06:51:06.543916 systemd[1]: Started sshd@7-10.244.93.206:22-139.178.68.195:51622.service - OpenSSH per-connection server daemon (139.178.68.195:51622). Oct 13 06:51:07.479172 sshd[1937]: Accepted publickey for core from 139.178.68.195 port 51622 ssh2: RSA SHA256:fRul8eAVig/XyA1eZza2GOOLGaWCYkSVMLwjSf7WhTM Oct 13 06:51:07.482080 sshd-session[1937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 06:51:07.494954 systemd-logind[1638]: New session 10 of user core. Oct 13 06:51:07.499546 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 13 06:51:07.976922 sudo[1942]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 13 06:51:07.978783 sudo[1942]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 06:51:07.987515 sudo[1942]: pam_unix(sudo:session): session closed for user root Oct 13 06:51:07.994678 sudo[1941]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 13 06:51:07.994948 sudo[1941]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 06:51:08.011425 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 06:51:08.054799 augenrules[1964]: No rules Oct 13 06:51:08.056785 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 06:51:08.057047 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 06:51:08.059612 sudo[1941]: pam_unix(sudo:session): session closed for user root Oct 13 06:51:08.210201 sshd[1940]: Connection closed by 139.178.68.195 port 51622 Oct 13 06:51:08.209193 sshd-session[1937]: pam_unix(sshd:session): session closed for user core Oct 13 06:51:08.218960 systemd-logind[1638]: Session 10 logged out. Waiting for processes to exit. Oct 13 06:51:08.219965 systemd[1]: sshd@7-10.244.93.206:22-139.178.68.195:51622.service: Deactivated successfully. Oct 13 06:51:08.222652 systemd[1]: session-10.scope: Deactivated successfully. Oct 13 06:51:08.226056 systemd-logind[1638]: Removed session 10. Oct 13 06:51:08.380686 systemd[1]: Started sshd@8-10.244.93.206:22-139.178.68.195:58226.service - OpenSSH per-connection server daemon (139.178.68.195:58226). Oct 13 06:51:09.323006 sshd[1973]: Accepted publickey for core from 139.178.68.195 port 58226 ssh2: RSA SHA256:fRul8eAVig/XyA1eZza2GOOLGaWCYkSVMLwjSf7WhTM Oct 13 06:51:09.326192 sshd-session[1973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 06:51:09.336976 systemd-logind[1638]: New session 11 of user core. Oct 13 06:51:09.346407 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 13 06:51:09.811980 sudo[1977]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 13 06:51:09.812366 sudo[1977]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 06:51:10.333371 systemd[1]: Starting docker.service - Docker Application Container Engine... 
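In the session above, sudo removes the shipped fragments from /etc/audit/rules.d/ and restarts audit-rules, after which augenrules reports "No rules". augenrules' job is essentially to merge the *.rules fragments in that directory into one ruleset, so with the directory emptied there is nothing left to load. A rough approximation of that merge step (for illustration only; the real augenrules is a shell script with more handling):

```python
#!/usr/bin/env python3
"""Approximate the merge step augenrules performs over /etc/audit/rules.d/*.rules.

Illustrative only; the real tool also handles ordering, -D/-e directives and
writes the merged result to /etc/audit/audit.rules.
"""
import glob

RULES_DIR = "/etc/audit/rules.d"

fragments = sorted(glob.glob(f"{RULES_DIR}/*.rules"))
merged: list[str] = []
for path in fragments:
    with open(path, encoding="utf-8", errors="replace") as f:
        merged.extend(line.rstrip("\n") for line in f if line.strip() and not line.lstrip().startswith("#"))

if not merged:
    print("No rules")  # same outcome augenrules logs above once rules.d is emptied
else:
    print(f"{len(merged)} rule lines from {len(fragments)} fragment files")
```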
Oct 13 06:51:10.351748 (dockerd)[1994]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 13 06:51:10.707027 dockerd[1994]: time="2025-10-13T06:51:10.706955152Z" level=info msg="Starting up" Oct 13 06:51:10.708495 dockerd[1994]: time="2025-10-13T06:51:10.708466054Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 13 06:51:10.738919 dockerd[1994]: time="2025-10-13T06:51:10.738725573Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 13 06:51:10.788390 dockerd[1994]: time="2025-10-13T06:51:10.787861006Z" level=info msg="Loading containers: start." Oct 13 06:51:10.811184 kernel: Initializing XFRM netlink socket Oct 13 06:51:11.077217 systemd-timesyncd[1547]: Network configuration changed, trying to establish connection. Oct 13 06:51:11.129185 systemd-networkd[1575]: docker0: Link UP Oct 13 06:51:11.131757 dockerd[1994]: time="2025-10-13T06:51:11.131710972Z" level=info msg="Loading containers: done." Oct 13 06:51:11.149492 dockerd[1994]: time="2025-10-13T06:51:11.149436621Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 13 06:51:11.149669 dockerd[1994]: time="2025-10-13T06:51:11.149540667Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 13 06:51:11.149669 dockerd[1994]: time="2025-10-13T06:51:11.149621791Z" level=info msg="Initializing buildkit" Oct 13 06:51:11.149881 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1564464446-merged.mount: Deactivated successfully. Oct 13 06:51:11.175332 dockerd[1994]: time="2025-10-13T06:51:11.175263623Z" level=info msg="Completed buildkit initialization" Oct 13 06:51:11.190178 dockerd[1994]: time="2025-10-13T06:51:11.189526383Z" level=info msg="Daemon has completed initialization" Oct 13 06:51:11.190178 dockerd[1994]: time="2025-10-13T06:51:11.189629274Z" level=info msg="API listen on /run/docker.sock" Oct 13 06:51:11.190889 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 13 06:51:11.414779 systemd-timesyncd[1547]: Contacted time server [2a03:b0c0:1:d0::1f9:f001]:123 (2.flatcar.pool.ntp.org). Oct 13 06:51:11.416113 systemd-timesyncd[1547]: Initial clock synchronization to Mon 2025-10-13 06:51:11.625483 UTC. Oct 13 06:51:12.341232 containerd[1656]: time="2025-10-13T06:51:12.340725887Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Oct 13 06:51:13.222402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3056007452.mount: Deactivated successfully. 
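Once dockerd logs "API listen on /run/docker.sock", the Engine API is reachable over that Unix socket. A small sketch that issues a plain HTTP GET /_ping over the socket to confirm the daemon is responding; it assumes the default socket path shown in the log and that the caller may read it:

```python
#!/usr/bin/env python3
"""Ping the Docker Engine API over the Unix socket advertised in the log above.

Sketch only: a raw HTTP/1.1 request, no Docker SDK required.
"""
import socket

SOCK = "/run/docker.sock"  # path taken from the "API listen on /run/docker.sock" entry

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.settimeout(5)
    s.connect(SOCK)
    s.sendall(b"GET /_ping HTTP/1.1\r\nHost: docker\r\nConnection: close\r\n\r\n")
    reply = b""
    while chunk := s.recv(4096):
        reply += chunk

status_line, _, _ = reply.partition(b"\r\n")
print(status_line.decode())  # expected: HTTP/1.1 200 OK (body "OK")
```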
Oct 13 06:51:14.779414 containerd[1656]: time="2025-10-13T06:51:14.778384747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:51:14.779414 containerd[1656]: time="2025-10-13T06:51:14.779116830Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114901" Oct 13 06:51:14.779414 containerd[1656]: time="2025-10-13T06:51:14.779352255Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:51:14.781748 containerd[1656]: time="2025-10-13T06:51:14.781717292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:51:14.783610 containerd[1656]: time="2025-10-13T06:51:14.783582318Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.442678105s" Oct 13 06:51:14.783739 containerd[1656]: time="2025-10-13T06:51:14.783726187Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Oct 13 06:51:14.788898 containerd[1656]: time="2025-10-13T06:51:14.788863362Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Oct 13 06:51:15.532755 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
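The pull records above carry enough data to estimate transfer rate: the kube-apiserver image reports "bytes read=30114901" and a pull time of "2.442678105s". A quick sketch that extracts those two figures from containerd entries phrased like the ones in this journal (the regexes are mine, tuned to exactly this wording):

```python
#!/usr/bin/env python3
"""Estimate pull throughput from the containerd entries quoted above.

The patterns match this journal's wording ("bytes read=<n>" and "in <seconds>s");
other containerd versions may phrase these fields differently.
"""
import re

LOG = '''
stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114901
Pulled image "registry.k8s.io/kube-apiserver:v1.33.5" ... in 2.442678105s
'''

bytes_read = int(re.search(r"bytes read=(\d+)", LOG).group(1))
seconds = float(re.search(r"in ([0-9.]+)s", LOG).group(1))
print(f"{bytes_read / seconds / 1e6:.1f} MB/s for {bytes_read} bytes in {seconds}s")
# With the figures above this works out to roughly 12 MB/s.
```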
Oct 13 06:51:16.592289 containerd[1656]: time="2025-10-13T06:51:16.592229533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:51:16.592942 containerd[1656]: time="2025-10-13T06:51:16.592912290Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020852" Oct 13 06:51:16.593837 containerd[1656]: time="2025-10-13T06:51:16.593807977Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:51:16.596452 containerd[1656]: time="2025-10-13T06:51:16.596424113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:51:16.597562 containerd[1656]: time="2025-10-13T06:51:16.597536186Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.808640469s" Oct 13 06:51:16.597677 containerd[1656]: time="2025-10-13T06:51:16.597662142Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Oct 13 06:51:16.598490 containerd[1656]: time="2025-10-13T06:51:16.598284746Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Oct 13 06:51:16.623282 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 13 06:51:16.629793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 06:51:16.840866 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 06:51:16.858494 (kubelet)[2282]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 06:51:16.927833 kubelet[2282]: E1013 06:51:16.927710 2282 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 06:51:16.931595 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 06:51:16.931792 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 06:51:16.932239 systemd[1]: kubelet.service: Consumed 228ms CPU time, 108.5M memory peak. 
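The failed kubelet starts are spaced roughly ten seconds apart (failure at 06:50:56, then restart counters 2, 3 and 4 at 06:51:06, 06:51:16 and 06:51:27), consistent with systemd rescheduling the unit on a fixed delay; the actual Restart=/RestartSec= settings are not visible in this log, so treat the cadence as observed rather than configured. A small sketch that measures the spacing from the timestamps quoted here:

```python
#!/usr/bin/env python3
"""Measure the spacing between the kubelet restart attempts recorded above.

The timestamps are the ones quoted from this journal; the parse format matches
the "Oct 13 06:51:06.214055" prefix used throughout.
"""
from datetime import datetime

restarts = [
    "Oct 13 06:50:56.195338",  # first failure (main process exited)
    "Oct 13 06:51:06.214055",  # restart counter is at 2
    "Oct 13 06:51:16.623282",  # restart counter is at 3
    "Oct 13 06:51:27.122885",  # restart counter is at 4
]

times = [datetime.strptime(t, "%b %d %H:%M:%S.%f") for t in restarts]
for earlier, later in zip(times, times[1:]):
    print(f"{(later - earlier).total_seconds():.1f}s between attempts")
```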
Oct 13 06:51:19.245061 containerd[1656]: time="2025-10-13T06:51:19.244981539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:51:19.246820 containerd[1656]: time="2025-10-13T06:51:19.246796740Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155576" Oct 13 06:51:19.247633 containerd[1656]: time="2025-10-13T06:51:19.247608285Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:51:19.250316 containerd[1656]: time="2025-10-13T06:51:19.250276235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:51:19.251810 containerd[1656]: time="2025-10-13T06:51:19.251723759Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 2.653412655s" Oct 13 06:51:19.251810 containerd[1656]: time="2025-10-13T06:51:19.251753669Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Oct 13 06:51:19.252569 containerd[1656]: time="2025-10-13T06:51:19.252541824Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Oct 13 06:51:20.844604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1250148331.mount: Deactivated successfully. 
Oct 13 06:51:21.370844 containerd[1656]: time="2025-10-13T06:51:21.370004095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:51:21.372555 containerd[1656]: time="2025-10-13T06:51:21.372116172Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929477" Oct 13 06:51:21.372939 containerd[1656]: time="2025-10-13T06:51:21.372867656Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:51:21.378884 containerd[1656]: time="2025-10-13T06:51:21.378818809Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.126236582s" Oct 13 06:51:21.378938 containerd[1656]: time="2025-10-13T06:51:21.378901206Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Oct 13 06:51:21.379568 containerd[1656]: time="2025-10-13T06:51:21.379011558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:51:21.380180 containerd[1656]: time="2025-10-13T06:51:21.379901511Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Oct 13 06:51:22.073663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount448886653.mount: Deactivated successfully. 
Oct 13 06:51:24.095375 containerd[1656]: time="2025-10-13T06:51:24.094336079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:51:24.095375 containerd[1656]: time="2025-10-13T06:51:24.095089612Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246" Oct 13 06:51:24.095375 containerd[1656]: time="2025-10-13T06:51:24.095325091Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:51:24.097666 containerd[1656]: time="2025-10-13T06:51:24.097641602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:51:24.098796 containerd[1656]: time="2025-10-13T06:51:24.098773325Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.718817043s" Oct 13 06:51:24.098897 containerd[1656]: time="2025-10-13T06:51:24.098883197Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Oct 13 06:51:24.100099 containerd[1656]: time="2025-10-13T06:51:24.100073863Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 13 06:51:25.197047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount209938552.mount: Deactivated successfully. 
Oct 13 06:51:25.201257 containerd[1656]: time="2025-10-13T06:51:25.200482555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 06:51:25.201257 containerd[1656]: time="2025-10-13T06:51:25.201039097Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Oct 13 06:51:25.201257 containerd[1656]: time="2025-10-13T06:51:25.201214104Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 06:51:25.202952 containerd[1656]: time="2025-10-13T06:51:25.202928528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 06:51:25.203650 containerd[1656]: time="2025-10-13T06:51:25.203623493Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.103519766s" Oct 13 06:51:25.203721 containerd[1656]: time="2025-10-13T06:51:25.203658220Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 13 06:51:25.204512 containerd[1656]: time="2025-10-13T06:51:25.204343678Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Oct 13 06:51:25.910479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1801275985.mount: Deactivated successfully. Oct 13 06:51:27.122885 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 13 06:51:27.127019 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 06:51:27.327010 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 06:51:27.339551 (kubelet)[2412]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 06:51:27.414406 kubelet[2412]: E1013 06:51:27.413996 2412 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 06:51:27.420647 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 06:51:27.421065 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 06:51:27.422175 systemd[1]: kubelet.service: Consumed 235ms CPU time, 108.3M memory peak. Oct 13 06:51:28.120011 update_engine[1639]: I20251013 06:51:28.118351 1639 update_attempter.cc:509] Updating boot flags... 
Oct 13 06:51:33.247856 containerd[1656]: time="2025-10-13T06:51:33.247777611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:51:33.248830 containerd[1656]: time="2025-10-13T06:51:33.248653760Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378441" Oct 13 06:51:33.250042 containerd[1656]: time="2025-10-13T06:51:33.249984289Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:51:33.252655 containerd[1656]: time="2025-10-13T06:51:33.252611303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:51:33.254132 containerd[1656]: time="2025-10-13T06:51:33.253739952Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 8.049366744s" Oct 13 06:51:33.254132 containerd[1656]: time="2025-10-13T06:51:33.253777942Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Oct 13 06:51:37.120096 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 06:51:37.120462 systemd[1]: kubelet.service: Consumed 235ms CPU time, 108.3M memory peak. Oct 13 06:51:37.124271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 06:51:37.164258 systemd[1]: Reload requested from client PID 2473 ('systemctl') (unit session-11.scope)... Oct 13 06:51:37.164291 systemd[1]: Reloading... Oct 13 06:51:37.299196 zram_generator::config[2517]: No configuration found. Oct 13 06:51:37.552775 systemd[1]: Reloading finished in 387 ms. Oct 13 06:51:37.623616 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 13 06:51:37.623716 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 13 06:51:37.624034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 06:51:37.624090 systemd[1]: kubelet.service: Consumed 132ms CPU time, 98.3M memory peak. Oct 13 06:51:37.625889 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 06:51:37.795936 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 06:51:37.814802 (kubelet)[2586]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 06:51:37.866447 kubelet[2586]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 06:51:37.866447 kubelet[2586]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 06:51:37.866447 kubelet[2586]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 06:51:37.868692 kubelet[2586]: I1013 06:51:37.868626 2586 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 06:51:38.427701 kubelet[2586]: I1013 06:51:38.427651 2586 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 13 06:51:38.428246 kubelet[2586]: I1013 06:51:38.428007 2586 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 06:51:38.429018 kubelet[2586]: I1013 06:51:38.428995 2586 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 06:51:38.466306 kubelet[2586]: I1013 06:51:38.466272 2586 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 06:51:38.470153 kubelet[2586]: E1013 06:51:38.468887 2586 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.244.93.206:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.93.206:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 13 06:51:38.487216 kubelet[2586]: I1013 06:51:38.487184 2586 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 06:51:38.496475 kubelet[2586]: I1013 06:51:38.496445 2586 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 13 06:51:38.499346 kubelet[2586]: I1013 06:51:38.499293 2586 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 06:51:38.502022 kubelet[2586]: I1013 06:51:38.499347 2586 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-ntuey.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 06:51:38.502261 kubelet[2586]: I1013 06:51:38.502037 2586 
topology_manager.go:138] "Creating topology manager with none policy" Oct 13 06:51:38.502261 kubelet[2586]: I1013 06:51:38.502054 2586 container_manager_linux.go:303] "Creating device plugin manager" Oct 13 06:51:38.502261 kubelet[2586]: I1013 06:51:38.502237 2586 state_mem.go:36] "Initialized new in-memory state store" Oct 13 06:51:38.505619 kubelet[2586]: I1013 06:51:38.505219 2586 kubelet.go:480] "Attempting to sync node with API server" Oct 13 06:51:38.505619 kubelet[2586]: I1013 06:51:38.505247 2586 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 06:51:38.505619 kubelet[2586]: I1013 06:51:38.505282 2586 kubelet.go:386] "Adding apiserver pod source" Oct 13 06:51:38.505619 kubelet[2586]: I1013 06:51:38.505301 2586 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 06:51:38.514440 kubelet[2586]: E1013 06:51:38.514408 2586 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.244.93.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-ntuey.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.93.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 06:51:38.514927 kubelet[2586]: I1013 06:51:38.514909 2586 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 06:51:38.517136 kubelet[2586]: I1013 06:51:38.517115 2586 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 06:51:38.517888 kubelet[2586]: W1013 06:51:38.517873 2586 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
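The "Creating Container Manager object based on Node Config" entry just above embeds the whole nodeConfig structure on one line, which makes settings such as the eviction thresholds hard to read. The payload after `nodeConfig=` is plain JSON, so it can be re-indented; the sketch below inlines only a fragment of the logged structure to stay short, but the full value parses the same way:

```python
#!/usr/bin/env python3
"""Re-indent the nodeConfig payload from the container_manager_linux.go entry above.

Only a fragment of the logged structure is inlined here for brevity.
"""
import json

LINE = ('container_manager_linux.go:272] "Creating Container Manager object based on Node Config" '
        'nodeConfig={"NodeName":"srv-ntuey.gb1.brightbox.com","CgroupsPerQOS":true,'
        '"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet",'
        '"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan",'
        '"Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],'
        '"PodPidsLimit":-1,"CgroupVersion":2}')

_, _, payload = LINE.partition("nodeConfig=")
print(json.dumps(json.loads(payload), indent=2))
```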
Oct 13 06:51:38.526561 kubelet[2586]: E1013 06:51:38.526501 2586 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.244.93.206:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.93.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 13 06:51:38.533176 kubelet[2586]: I1013 06:51:38.533089 2586 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 13 06:51:38.533708 kubelet[2586]: I1013 06:51:38.533490 2586 server.go:1289] "Started kubelet" Oct 13 06:51:38.537561 kubelet[2586]: I1013 06:51:38.536961 2586 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 06:51:38.542131 kubelet[2586]: I1013 06:51:38.542087 2586 server.go:317] "Adding debug handlers to kubelet server" Oct 13 06:51:38.542736 kubelet[2586]: I1013 06:51:38.542611 2586 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 06:51:38.543374 kubelet[2586]: I1013 06:51:38.543357 2586 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 06:51:38.545817 kubelet[2586]: E1013 06:51:38.543562 2586 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.93.206:6443/api/v1/namespaces/default/events\": dial tcp 10.244.93.206:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-ntuey.gb1.brightbox.com.186dfa5b189a141e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-ntuey.gb1.brightbox.com,UID:srv-ntuey.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-ntuey.gb1.brightbox.com,},FirstTimestamp:2025-10-13 06:51:38.533450782 +0000 UTC m=+0.713791727,LastTimestamp:2025-10-13 06:51:38.533450782 +0000 UTC m=+0.713791727,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-ntuey.gb1.brightbox.com,}" Oct 13 06:51:38.549666 kubelet[2586]: I1013 06:51:38.549652 2586 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 06:51:38.550932 kubelet[2586]: I1013 06:51:38.550900 2586 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 06:51:38.559210 kubelet[2586]: E1013 06:51:38.559175 2586 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-ntuey.gb1.brightbox.com\" not found" Oct 13 06:51:38.559287 kubelet[2586]: I1013 06:51:38.559228 2586 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 13 06:51:38.559490 kubelet[2586]: I1013 06:51:38.559473 2586 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 13 06:51:38.559561 kubelet[2586]: I1013 06:51:38.559549 2586 reconciler.go:26] "Reconciler: start to sync state" Oct 13 06:51:38.560159 kubelet[2586]: E1013 06:51:38.560026 2586 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.244.93.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.93.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 13 06:51:38.563832 kubelet[2586]: 
E1013 06:51:38.563778 2586 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.93.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-ntuey.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.93.206:6443: connect: connection refused" interval="200ms" Oct 13 06:51:38.564483 kubelet[2586]: I1013 06:51:38.564353 2586 factory.go:223] Registration of the systemd container factory successfully Oct 13 06:51:38.564483 kubelet[2586]: I1013 06:51:38.564468 2586 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 06:51:38.566496 kubelet[2586]: E1013 06:51:38.566471 2586 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 06:51:38.566787 kubelet[2586]: I1013 06:51:38.566766 2586 factory.go:223] Registration of the containerd container factory successfully Oct 13 06:51:38.578003 kubelet[2586]: I1013 06:51:38.577863 2586 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 13 06:51:38.579130 kubelet[2586]: I1013 06:51:38.579114 2586 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 13 06:51:38.579250 kubelet[2586]: I1013 06:51:38.579242 2586 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 13 06:51:38.579320 kubelet[2586]: I1013 06:51:38.579312 2586 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 13 06:51:38.579369 kubelet[2586]: I1013 06:51:38.579363 2586 kubelet.go:2436] "Starting kubelet main sync loop" Oct 13 06:51:38.579481 kubelet[2586]: E1013 06:51:38.579462 2586 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 06:51:38.590481 kubelet[2586]: E1013 06:51:38.590458 2586 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.244.93.206:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.93.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 06:51:38.597889 kubelet[2586]: I1013 06:51:38.597865 2586 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 06:51:38.597889 kubelet[2586]: I1013 06:51:38.597884 2586 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 06:51:38.597993 kubelet[2586]: I1013 06:51:38.597903 2586 state_mem.go:36] "Initialized new in-memory state store" Oct 13 06:51:38.599506 kubelet[2586]: I1013 06:51:38.599482 2586 policy_none.go:49] "None policy: Start" Oct 13 06:51:38.599562 kubelet[2586]: I1013 06:51:38.599510 2586 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 13 06:51:38.599562 kubelet[2586]: I1013 06:51:38.599529 2586 state_mem.go:35] "Initializing new in-memory state store" Oct 13 06:51:38.606694 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 13 06:51:38.620552 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 13 06:51:38.639700 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
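Nearly every error in the kubelet startup above reduces to "dial tcp 10.244.93.206:6443: connect: connection refused": the kubelet comes up before its own static kube-apiserver pod is serving, so every list/watch, lease update and event post is refused until that pod starts. A tiny probe that distinguishes "refused" (nothing listening yet) from a timeout (network problem), using the address from these entries:

```python
#!/usr/bin/env python3
"""Probe the API server endpoint the kubelet keeps failing to reach above.

"connection refused" means the host is reachable but nothing is listening yet,
which is exactly the bootstrap window this journal shows.
"""
import socket

HOST, PORT = "10.244.93.206", 6443  # taken from the dial errors in the log

try:
    with socket.create_connection((HOST, PORT), timeout=3):
        print(f"{HOST}:{PORT} is accepting connections (apiserver is up)")
except ConnectionRefusedError:
    print(f"{HOST}:{PORT} refused: host reachable, apiserver not listening yet")
except socket.timeout:
    print(f"{HOST}:{PORT} timed out: more likely a network or firewall problem")
```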
Oct 13 06:51:38.642517 kubelet[2586]: E1013 06:51:38.642359 2586 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 06:51:38.642846 kubelet[2586]: I1013 06:51:38.642779 2586 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 06:51:38.642904 kubelet[2586]: I1013 06:51:38.642818 2586 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 06:51:38.643511 kubelet[2586]: I1013 06:51:38.643481 2586 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 06:51:38.645575 kubelet[2586]: E1013 06:51:38.645550 2586 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 13 06:51:38.645742 kubelet[2586]: E1013 06:51:38.645672 2586 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-ntuey.gb1.brightbox.com\" not found" Oct 13 06:51:38.704115 systemd[1]: Created slice kubepods-burstable-pod81a2bdd7e2846aa7030c307fe31ee9f9.slice - libcontainer container kubepods-burstable-pod81a2bdd7e2846aa7030c307fe31ee9f9.slice. Oct 13 06:51:38.715872 kubelet[2586]: E1013 06:51:38.715799 2586 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-ntuey.gb1.brightbox.com\" not found" node="srv-ntuey.gb1.brightbox.com" Oct 13 06:51:38.719596 systemd[1]: Created slice kubepods-burstable-podcf1ca1f8b80191211dbfa565027d0087.slice - libcontainer container kubepods-burstable-podcf1ca1f8b80191211dbfa565027d0087.slice. Oct 13 06:51:38.724177 kubelet[2586]: E1013 06:51:38.723927 2586 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-ntuey.gb1.brightbox.com\" not found" node="srv-ntuey.gb1.brightbox.com" Oct 13 06:51:38.724964 systemd[1]: Created slice kubepods-burstable-pod22a536a2a6bd58ba407f82ae9abee176.slice - libcontainer container kubepods-burstable-pod22a536a2a6bd58ba407f82ae9abee176.slice. 
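The "Failed to ensure lease exists, will retry" entries show the retry interval doubling while the apiserver is still refusing connections: 200ms above, then 400ms and 800ms in the entries that follow. A minimal sketch of that doubling pattern; the starting value and the doubling are taken from this log, while the upper cap in the code is an assumption of mine and is not visible here:

```python
#!/usr/bin/env python3
"""Reproduce the doubling retry intervals seen in the lease-controller entries.

200ms, 400ms, 800ms match the journal; the 7s cap below is an assumption for
illustration and is not taken from this log.
"""

def backoff(start_ms: int = 200, factor: float = 2.0, cap_ms: int = 7000):
    delay = start_ms
    while True:
        yield min(delay, cap_ms)
        delay = int(delay * factor)

gen = backoff()
print([next(gen) for _ in range(5)])  # [200, 400, 800, 1600, 3200]
```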
Oct 13 06:51:38.729552 kubelet[2586]: E1013 06:51:38.729484 2586 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-ntuey.gb1.brightbox.com\" not found" node="srv-ntuey.gb1.brightbox.com" Oct 13 06:51:38.745004 kubelet[2586]: I1013 06:51:38.744971 2586 kubelet_node_status.go:75] "Attempting to register node" node="srv-ntuey.gb1.brightbox.com" Oct 13 06:51:38.745369 kubelet[2586]: E1013 06:51:38.745339 2586 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.93.206:6443/api/v1/nodes\": dial tcp 10.244.93.206:6443: connect: connection refused" node="srv-ntuey.gb1.brightbox.com" Oct 13 06:51:38.760372 kubelet[2586]: I1013 06:51:38.760310 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/81a2bdd7e2846aa7030c307fe31ee9f9-ca-certs\") pod \"kube-apiserver-srv-ntuey.gb1.brightbox.com\" (UID: \"81a2bdd7e2846aa7030c307fe31ee9f9\") " pod="kube-system/kube-apiserver-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:38.760722 kubelet[2586]: I1013 06:51:38.760618 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/81a2bdd7e2846aa7030c307fe31ee9f9-usr-share-ca-certificates\") pod \"kube-apiserver-srv-ntuey.gb1.brightbox.com\" (UID: \"81a2bdd7e2846aa7030c307fe31ee9f9\") " pod="kube-system/kube-apiserver-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:38.760931 kubelet[2586]: I1013 06:51:38.760704 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf1ca1f8b80191211dbfa565027d0087-ca-certs\") pod \"kube-controller-manager-srv-ntuey.gb1.brightbox.com\" (UID: \"cf1ca1f8b80191211dbfa565027d0087\") " pod="kube-system/kube-controller-manager-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:38.761121 kubelet[2586]: I1013 06:51:38.761055 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cf1ca1f8b80191211dbfa565027d0087-flexvolume-dir\") pod \"kube-controller-manager-srv-ntuey.gb1.brightbox.com\" (UID: \"cf1ca1f8b80191211dbfa565027d0087\") " pod="kube-system/kube-controller-manager-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:38.761438 kubelet[2586]: I1013 06:51:38.761289 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cf1ca1f8b80191211dbfa565027d0087-kubeconfig\") pod \"kube-controller-manager-srv-ntuey.gb1.brightbox.com\" (UID: \"cf1ca1f8b80191211dbfa565027d0087\") " pod="kube-system/kube-controller-manager-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:38.761438 kubelet[2586]: I1013 06:51:38.761371 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf1ca1f8b80191211dbfa565027d0087-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-ntuey.gb1.brightbox.com\" (UID: \"cf1ca1f8b80191211dbfa565027d0087\") " pod="kube-system/kube-controller-manager-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:38.761698 kubelet[2586]: I1013 06:51:38.761413 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/22a536a2a6bd58ba407f82ae9abee176-kubeconfig\") pod \"kube-scheduler-srv-ntuey.gb1.brightbox.com\" (UID: \"22a536a2a6bd58ba407f82ae9abee176\") " pod="kube-system/kube-scheduler-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:38.761875 kubelet[2586]: I1013 06:51:38.761674 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/81a2bdd7e2846aa7030c307fe31ee9f9-k8s-certs\") pod \"kube-apiserver-srv-ntuey.gb1.brightbox.com\" (UID: \"81a2bdd7e2846aa7030c307fe31ee9f9\") " pod="kube-system/kube-apiserver-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:38.762017 kubelet[2586]: I1013 06:51:38.761852 2586 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf1ca1f8b80191211dbfa565027d0087-k8s-certs\") pod \"kube-controller-manager-srv-ntuey.gb1.brightbox.com\" (UID: \"cf1ca1f8b80191211dbfa565027d0087\") " pod="kube-system/kube-controller-manager-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:38.764633 kubelet[2586]: E1013 06:51:38.764567 2586 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.93.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-ntuey.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.93.206:6443: connect: connection refused" interval="400ms" Oct 13 06:51:38.950430 kubelet[2586]: I1013 06:51:38.950371 2586 kubelet_node_status.go:75] "Attempting to register node" node="srv-ntuey.gb1.brightbox.com" Oct 13 06:51:38.951587 kubelet[2586]: E1013 06:51:38.951030 2586 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.93.206:6443/api/v1/nodes\": dial tcp 10.244.93.206:6443: connect: connection refused" node="srv-ntuey.gb1.brightbox.com" Oct 13 06:51:39.018225 containerd[1656]: time="2025-10-13T06:51:39.017905514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-ntuey.gb1.brightbox.com,Uid:81a2bdd7e2846aa7030c307fe31ee9f9,Namespace:kube-system,Attempt:0,}" Oct 13 06:51:39.039678 containerd[1656]: time="2025-10-13T06:51:39.039521045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-ntuey.gb1.brightbox.com,Uid:cf1ca1f8b80191211dbfa565027d0087,Namespace:kube-system,Attempt:0,}" Oct 13 06:51:39.043687 containerd[1656]: time="2025-10-13T06:51:39.043487223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-ntuey.gb1.brightbox.com,Uid:22a536a2a6bd58ba407f82ae9abee176,Namespace:kube-system,Attempt:0,}" Oct 13 06:51:39.151530 containerd[1656]: time="2025-10-13T06:51:39.151394057Z" level=info msg="connecting to shim 788224bfd71ef2b739e0104ff7edc0b0d86af0ed228886ca7b324e97f3a1f573" address="unix:///run/containerd/s/64f8f5f3ddd42ed5b39628ac6663f5262dbce4496380d636340f4aab38af96eb" namespace=k8s.io protocol=ttrpc version=3 Oct 13 06:51:39.152513 containerd[1656]: time="2025-10-13T06:51:39.152471329Z" level=info msg="connecting to shim 488108e05a36756a52370cbcf04e51b983d5a5d2b405ee99fb5bc9d7d7460358" address="unix:///run/containerd/s/f7fccb4a65738f2228fbd5a6521d7cdc7aa64d197178197c3cd2fdd87dc940cf" namespace=k8s.io protocol=ttrpc version=3 Oct 13 06:51:39.166031 kubelet[2586]: E1013 06:51:39.165968 2586 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.93.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-ntuey.gb1.brightbox.com?timeout=10s\": 
dial tcp 10.244.93.206:6443: connect: connection refused" interval="800ms" Oct 13 06:51:39.169404 containerd[1656]: time="2025-10-13T06:51:39.169356766Z" level=info msg="connecting to shim 9e5e881f2b7c891b702c430be9755f3c44d85508c3945700ac43415bf427b284" address="unix:///run/containerd/s/a0468ac48dee63ac88551d264f6689e552f60406334e0f6b373df587a87cc45c" namespace=k8s.io protocol=ttrpc version=3 Oct 13 06:51:39.271292 systemd[1]: Started cri-containerd-488108e05a36756a52370cbcf04e51b983d5a5d2b405ee99fb5bc9d7d7460358.scope - libcontainer container 488108e05a36756a52370cbcf04e51b983d5a5d2b405ee99fb5bc9d7d7460358. Oct 13 06:51:39.281418 systemd[1]: Started cri-containerd-788224bfd71ef2b739e0104ff7edc0b0d86af0ed228886ca7b324e97f3a1f573.scope - libcontainer container 788224bfd71ef2b739e0104ff7edc0b0d86af0ed228886ca7b324e97f3a1f573. Oct 13 06:51:39.283029 systemd[1]: Started cri-containerd-9e5e881f2b7c891b702c430be9755f3c44d85508c3945700ac43415bf427b284.scope - libcontainer container 9e5e881f2b7c891b702c430be9755f3c44d85508c3945700ac43415bf427b284. Oct 13 06:51:39.348038 kubelet[2586]: E1013 06:51:39.347777 2586 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.244.93.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-ntuey.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.93.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 06:51:39.359341 kubelet[2586]: I1013 06:51:39.358544 2586 kubelet_node_status.go:75] "Attempting to register node" node="srv-ntuey.gb1.brightbox.com" Oct 13 06:51:39.359341 kubelet[2586]: E1013 06:51:39.358854 2586 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.93.206:6443/api/v1/nodes\": dial tcp 10.244.93.206:6443: connect: connection refused" node="srv-ntuey.gb1.brightbox.com" Oct 13 06:51:39.390096 containerd[1656]: time="2025-10-13T06:51:39.390053693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-ntuey.gb1.brightbox.com,Uid:81a2bdd7e2846aa7030c307fe31ee9f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"788224bfd71ef2b739e0104ff7edc0b0d86af0ed228886ca7b324e97f3a1f573\"" Oct 13 06:51:39.391677 containerd[1656]: time="2025-10-13T06:51:39.391479922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-ntuey.gb1.brightbox.com,Uid:cf1ca1f8b80191211dbfa565027d0087,Namespace:kube-system,Attempt:0,} returns sandbox id \"488108e05a36756a52370cbcf04e51b983d5a5d2b405ee99fb5bc9d7d7460358\"" Oct 13 06:51:39.398385 containerd[1656]: time="2025-10-13T06:51:39.398331609Z" level=info msg="CreateContainer within sandbox \"488108e05a36756a52370cbcf04e51b983d5a5d2b405ee99fb5bc9d7d7460358\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 13 06:51:39.398645 containerd[1656]: time="2025-10-13T06:51:39.398217199Z" level=info msg="CreateContainer within sandbox \"788224bfd71ef2b739e0104ff7edc0b0d86af0ed228886ca7b324e97f3a1f573\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 13 06:51:39.406174 containerd[1656]: time="2025-10-13T06:51:39.406133876Z" level=info msg="Container 95fcde197e05b99874758b022dd0ce7dfdaa806ea913816b633b8da748ad4226: CDI devices from CRI Config.CDIDevices: []" Oct 13 06:51:39.428254 containerd[1656]: time="2025-10-13T06:51:39.428210846Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-srv-ntuey.gb1.brightbox.com,Uid:22a536a2a6bd58ba407f82ae9abee176,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e5e881f2b7c891b702c430be9755f3c44d85508c3945700ac43415bf427b284\"" Oct 13 06:51:39.428924 containerd[1656]: time="2025-10-13T06:51:39.428888304Z" level=info msg="Container 4d4be269548408fbf42116c0ab2e7e99a8f0c9fe205a25e1912048a03edd483b: CDI devices from CRI Config.CDIDevices: []" Oct 13 06:51:39.432865 containerd[1656]: time="2025-10-13T06:51:39.432773640Z" level=info msg="CreateContainer within sandbox \"788224bfd71ef2b739e0104ff7edc0b0d86af0ed228886ca7b324e97f3a1f573\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"95fcde197e05b99874758b022dd0ce7dfdaa806ea913816b633b8da748ad4226\"" Oct 13 06:51:39.433606 containerd[1656]: time="2025-10-13T06:51:39.433573586Z" level=info msg="CreateContainer within sandbox \"9e5e881f2b7c891b702c430be9755f3c44d85508c3945700ac43415bf427b284\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 13 06:51:39.434210 containerd[1656]: time="2025-10-13T06:51:39.434189734Z" level=info msg="StartContainer for \"95fcde197e05b99874758b022dd0ce7dfdaa806ea913816b633b8da748ad4226\"" Oct 13 06:51:39.440436 containerd[1656]: time="2025-10-13T06:51:39.440376238Z" level=info msg="connecting to shim 95fcde197e05b99874758b022dd0ce7dfdaa806ea913816b633b8da748ad4226" address="unix:///run/containerd/s/64f8f5f3ddd42ed5b39628ac6663f5262dbce4496380d636340f4aab38af96eb" protocol=ttrpc version=3 Oct 13 06:51:39.443184 containerd[1656]: time="2025-10-13T06:51:39.443127368Z" level=info msg="CreateContainer within sandbox \"488108e05a36756a52370cbcf04e51b983d5a5d2b405ee99fb5bc9d7d7460358\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4d4be269548408fbf42116c0ab2e7e99a8f0c9fe205a25e1912048a03edd483b\"" Oct 13 06:51:39.444137 containerd[1656]: time="2025-10-13T06:51:39.443511350Z" level=info msg="Container 2c5d2533f3aa74873ee56ce193b1f1560c8002ae8917c265400e7aff64bc3817: CDI devices from CRI Config.CDIDevices: []" Oct 13 06:51:39.444345 containerd[1656]: time="2025-10-13T06:51:39.444329716Z" level=info msg="StartContainer for \"4d4be269548408fbf42116c0ab2e7e99a8f0c9fe205a25e1912048a03edd483b\"" Oct 13 06:51:39.445765 containerd[1656]: time="2025-10-13T06:51:39.445744607Z" level=info msg="connecting to shim 4d4be269548408fbf42116c0ab2e7e99a8f0c9fe205a25e1912048a03edd483b" address="unix:///run/containerd/s/f7fccb4a65738f2228fbd5a6521d7cdc7aa64d197178197c3cd2fdd87dc940cf" protocol=ttrpc version=3 Oct 13 06:51:39.450609 containerd[1656]: time="2025-10-13T06:51:39.450573006Z" level=info msg="CreateContainer within sandbox \"9e5e881f2b7c891b702c430be9755f3c44d85508c3945700ac43415bf427b284\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2c5d2533f3aa74873ee56ce193b1f1560c8002ae8917c265400e7aff64bc3817\"" Oct 13 06:51:39.451326 containerd[1656]: time="2025-10-13T06:51:39.451307000Z" level=info msg="StartContainer for \"2c5d2533f3aa74873ee56ce193b1f1560c8002ae8917c265400e7aff64bc3817\"" Oct 13 06:51:39.453512 containerd[1656]: time="2025-10-13T06:51:39.453490524Z" level=info msg="connecting to shim 2c5d2533f3aa74873ee56ce193b1f1560c8002ae8917c265400e7aff64bc3817" address="unix:///run/containerd/s/a0468ac48dee63ac88551d264f6689e552f60406334e0f6b373df587a87cc45c" protocol=ttrpc version=3 Oct 13 06:51:39.464460 systemd[1]: Started cri-containerd-95fcde197e05b99874758b022dd0ce7dfdaa806ea913816b633b8da748ad4226.scope - 
libcontainer container 95fcde197e05b99874758b022dd0ce7dfdaa806ea913816b633b8da748ad4226. Oct 13 06:51:39.489340 systemd[1]: Started cri-containerd-4d4be269548408fbf42116c0ab2e7e99a8f0c9fe205a25e1912048a03edd483b.scope - libcontainer container 4d4be269548408fbf42116c0ab2e7e99a8f0c9fe205a25e1912048a03edd483b. Oct 13 06:51:39.493446 systemd[1]: Started cri-containerd-2c5d2533f3aa74873ee56ce193b1f1560c8002ae8917c265400e7aff64bc3817.scope - libcontainer container 2c5d2533f3aa74873ee56ce193b1f1560c8002ae8917c265400e7aff64bc3817. Oct 13 06:51:39.508861 kubelet[2586]: E1013 06:51:39.508826 2586 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.244.93.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.93.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 13 06:51:39.564942 containerd[1656]: time="2025-10-13T06:51:39.563921289Z" level=info msg="StartContainer for \"95fcde197e05b99874758b022dd0ce7dfdaa806ea913816b633b8da748ad4226\" returns successfully" Oct 13 06:51:39.573826 containerd[1656]: time="2025-10-13T06:51:39.573749100Z" level=info msg="StartContainer for \"4d4be269548408fbf42116c0ab2e7e99a8f0c9fe205a25e1912048a03edd483b\" returns successfully" Oct 13 06:51:39.613919 kubelet[2586]: E1013 06:51:39.613674 2586 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-ntuey.gb1.brightbox.com\" not found" node="srv-ntuey.gb1.brightbox.com" Oct 13 06:51:39.616960 kubelet[2586]: E1013 06:51:39.616649 2586 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-ntuey.gb1.brightbox.com\" not found" node="srv-ntuey.gb1.brightbox.com" Oct 13 06:51:39.624074 containerd[1656]: time="2025-10-13T06:51:39.624001463Z" level=info msg="StartContainer for \"2c5d2533f3aa74873ee56ce193b1f1560c8002ae8917c265400e7aff64bc3817\" returns successfully" Oct 13 06:51:39.830294 kubelet[2586]: E1013 06:51:39.830016 2586 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.244.93.206:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.93.206:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 13 06:51:40.163236 kubelet[2586]: I1013 06:51:40.161421 2586 kubelet_node_status.go:75] "Attempting to register node" node="srv-ntuey.gb1.brightbox.com" Oct 13 06:51:40.626917 kubelet[2586]: E1013 06:51:40.626834 2586 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-ntuey.gb1.brightbox.com\" not found" node="srv-ntuey.gb1.brightbox.com" Oct 13 06:51:40.627573 kubelet[2586]: E1013 06:51:40.627321 2586 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-ntuey.gb1.brightbox.com\" not found" node="srv-ntuey.gb1.brightbox.com" Oct 13 06:51:41.626785 kubelet[2586]: E1013 06:51:41.626525 2586 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-ntuey.gb1.brightbox.com\" not found" node="srv-ntuey.gb1.brightbox.com" Oct 13 06:51:41.628637 kubelet[2586]: E1013 06:51:41.628410 2586 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"srv-ntuey.gb1.brightbox.com\" not found" node="srv-ntuey.gb1.brightbox.com" Oct 13 06:51:42.318582 kubelet[2586]: E1013 06:51:42.318523 2586 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-ntuey.gb1.brightbox.com\" not found" node="srv-ntuey.gb1.brightbox.com" Oct 13 06:51:42.481185 kubelet[2586]: I1013 06:51:42.479658 2586 kubelet_node_status.go:78] "Successfully registered node" node="srv-ntuey.gb1.brightbox.com" Oct 13 06:51:42.481185 kubelet[2586]: E1013 06:51:42.479745 2586 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-ntuey.gb1.brightbox.com\": node \"srv-ntuey.gb1.brightbox.com\" not found" Oct 13 06:51:42.527728 kubelet[2586]: I1013 06:51:42.527627 2586 apiserver.go:52] "Watching apiserver" Oct 13 06:51:42.560109 kubelet[2586]: I1013 06:51:42.560028 2586 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 13 06:51:42.560378 kubelet[2586]: I1013 06:51:42.560268 2586 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:42.571418 kubelet[2586]: E1013 06:51:42.571229 2586 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-ntuey.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:42.571418 kubelet[2586]: I1013 06:51:42.571309 2586 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:42.575171 kubelet[2586]: E1013 06:51:42.575087 2586 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-ntuey.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:42.575305 kubelet[2586]: I1013 06:51:42.575209 2586 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:42.580832 kubelet[2586]: E1013 06:51:42.580755 2586 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-ntuey.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:42.627998 kubelet[2586]: I1013 06:51:42.627511 2586 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:42.632350 kubelet[2586]: E1013 06:51:42.631927 2586 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-ntuey.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:44.226140 kubelet[2586]: I1013 06:51:44.225949 2586 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:44.232579 kubelet[2586]: I1013 06:51:44.232265 2586 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 13 06:51:44.315024 systemd[1]: Reload requested from client PID 2867 ('systemctl') (unit session-11.scope)... Oct 13 06:51:44.315458 systemd[1]: Reloading... 
Oct 13 06:51:44.469970 zram_generator::config[2909]: No configuration found. Oct 13 06:51:44.823727 systemd[1]: Reloading finished in 507 ms. Oct 13 06:51:44.868126 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 06:51:44.883357 systemd[1]: kubelet.service: Deactivated successfully. Oct 13 06:51:44.884041 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 06:51:44.884310 systemd[1]: kubelet.service: Consumed 1.258s CPU time, 126.9M memory peak. Oct 13 06:51:44.889287 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 06:51:45.117204 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 06:51:45.132630 (kubelet)[2977]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 06:51:45.215996 kubelet[2977]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 06:51:45.216520 kubelet[2977]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 06:51:45.216574 kubelet[2977]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 06:51:45.216712 kubelet[2977]: I1013 06:51:45.216674 2977 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 06:51:45.227967 kubelet[2977]: I1013 06:51:45.227847 2977 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 13 06:51:45.227967 kubelet[2977]: I1013 06:51:45.227918 2977 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 06:51:45.228428 kubelet[2977]: I1013 06:51:45.228403 2977 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 06:51:45.230348 kubelet[2977]: I1013 06:51:45.230204 2977 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 13 06:51:45.257848 kubelet[2977]: I1013 06:51:45.257376 2977 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 06:51:45.274610 kubelet[2977]: I1013 06:51:45.274582 2977 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 06:51:45.281487 kubelet[2977]: I1013 06:51:45.281458 2977 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 13 06:51:45.283731 kubelet[2977]: I1013 06:51:45.283697 2977 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 06:51:45.284034 kubelet[2977]: I1013 06:51:45.283831 2977 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-ntuey.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 06:51:45.285805 kubelet[2977]: I1013 06:51:45.285673 2977 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 06:51:45.285805 kubelet[2977]: I1013 06:51:45.285704 2977 container_manager_linux.go:303] "Creating device plugin manager" Oct 13 06:51:45.285805 kubelet[2977]: I1013 06:51:45.285766 2977 state_mem.go:36] "Initialized new in-memory state store" Oct 13 06:51:45.286119 kubelet[2977]: I1013 06:51:45.286105 2977 kubelet.go:480] "Attempting to sync node with API server" Oct 13 06:51:45.286271 kubelet[2977]: I1013 06:51:45.286260 2977 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 06:51:45.288603 kubelet[2977]: I1013 06:51:45.288586 2977 kubelet.go:386] "Adding apiserver pod source" Oct 13 06:51:45.288773 kubelet[2977]: I1013 06:51:45.288700 2977 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 06:51:45.296461 kubelet[2977]: I1013 06:51:45.296429 2977 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 06:51:45.302800 kubelet[2977]: I1013 06:51:45.302764 2977 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 06:51:45.326123 kubelet[2977]: I1013 06:51:45.326090 2977 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 13 06:51:45.328295 kubelet[2977]: I1013 06:51:45.326176 2977 server.go:1289] "Started kubelet" Oct 13 06:51:45.328683 kubelet[2977]: I1013 06:51:45.328632 2977 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 
13 06:51:45.330328 kubelet[2977]: I1013 06:51:45.329643 2977 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 06:51:45.331408 kubelet[2977]: I1013 06:51:45.330774 2977 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 06:51:45.335412 kubelet[2977]: I1013 06:51:45.335391 2977 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 06:51:45.339899 kubelet[2977]: I1013 06:51:45.338770 2977 server.go:317] "Adding debug handlers to kubelet server" Oct 13 06:51:45.356514 kubelet[2977]: I1013 06:51:45.356436 2977 factory.go:223] Registration of the systemd container factory successfully Oct 13 06:51:45.360807 kubelet[2977]: I1013 06:51:45.360035 2977 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 06:51:45.361965 kubelet[2977]: I1013 06:51:45.361925 2977 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 06:51:45.370372 kubelet[2977]: I1013 06:51:45.368342 2977 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 13 06:51:45.370372 kubelet[2977]: I1013 06:51:45.368517 2977 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 13 06:51:45.370372 kubelet[2977]: I1013 06:51:45.368732 2977 reconciler.go:26] "Reconciler: start to sync state" Oct 13 06:51:45.373177 kubelet[2977]: E1013 06:51:45.371712 2977 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 06:51:45.375107 kubelet[2977]: I1013 06:51:45.375089 2977 factory.go:223] Registration of the containerd container factory successfully Oct 13 06:51:45.418286 kubelet[2977]: I1013 06:51:45.418248 2977 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 13 06:51:45.442804 kubelet[2977]: I1013 06:51:45.442743 2977 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 13 06:51:45.443329 kubelet[2977]: I1013 06:51:45.442888 2977 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 13 06:51:45.443329 kubelet[2977]: I1013 06:51:45.442915 2977 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 13 06:51:45.443329 kubelet[2977]: I1013 06:51:45.442922 2977 kubelet.go:2436] "Starting kubelet main sync loop" Oct 13 06:51:45.444238 kubelet[2977]: E1013 06:51:45.444023 2977 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 06:51:45.499998 kubelet[2977]: I1013 06:51:45.498494 2977 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 06:51:45.499998 kubelet[2977]: I1013 06:51:45.498529 2977 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 06:51:45.499998 kubelet[2977]: I1013 06:51:45.498550 2977 state_mem.go:36] "Initialized new in-memory state store" Oct 13 06:51:45.499998 kubelet[2977]: I1013 06:51:45.498765 2977 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 13 06:51:45.499998 kubelet[2977]: I1013 06:51:45.498778 2977 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 13 06:51:45.499998 kubelet[2977]: I1013 06:51:45.498798 2977 policy_none.go:49] "None policy: Start" Oct 13 06:51:45.499998 kubelet[2977]: I1013 06:51:45.498809 2977 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 13 06:51:45.499998 kubelet[2977]: I1013 06:51:45.498821 2977 state_mem.go:35] "Initializing new in-memory state store" Oct 13 06:51:45.499998 kubelet[2977]: I1013 06:51:45.498953 2977 state_mem.go:75] "Updated machine memory state" Oct 13 06:51:45.525666 kubelet[2977]: E1013 06:51:45.525523 2977 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 06:51:45.528165 kubelet[2977]: I1013 06:51:45.527353 2977 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 06:51:45.528165 kubelet[2977]: I1013 06:51:45.527369 2977 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 06:51:45.530752 kubelet[2977]: I1013 06:51:45.530671 2977 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 06:51:45.535172 kubelet[2977]: E1013 06:51:45.533993 2977 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 13 06:51:45.549416 kubelet[2977]: I1013 06:51:45.549362 2977 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:45.550645 kubelet[2977]: I1013 06:51:45.550280 2977 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:45.552431 kubelet[2977]: I1013 06:51:45.550349 2977 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:45.572348 kubelet[2977]: I1013 06:51:45.572298 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf1ca1f8b80191211dbfa565027d0087-ca-certs\") pod \"kube-controller-manager-srv-ntuey.gb1.brightbox.com\" (UID: \"cf1ca1f8b80191211dbfa565027d0087\") " pod="kube-system/kube-controller-manager-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:45.572693 kubelet[2977]: I1013 06:51:45.572470 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf1ca1f8b80191211dbfa565027d0087-k8s-certs\") pod \"kube-controller-manager-srv-ntuey.gb1.brightbox.com\" (UID: \"cf1ca1f8b80191211dbfa565027d0087\") " pod="kube-system/kube-controller-manager-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:45.572693 kubelet[2977]: I1013 06:51:45.572625 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf1ca1f8b80191211dbfa565027d0087-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-ntuey.gb1.brightbox.com\" (UID: \"cf1ca1f8b80191211dbfa565027d0087\") " pod="kube-system/kube-controller-manager-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:45.572877 kubelet[2977]: I1013 06:51:45.572780 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/22a536a2a6bd58ba407f82ae9abee176-kubeconfig\") pod \"kube-scheduler-srv-ntuey.gb1.brightbox.com\" (UID: \"22a536a2a6bd58ba407f82ae9abee176\") " pod="kube-system/kube-scheduler-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:45.573565 kubelet[2977]: I1013 06:51:45.572914 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cf1ca1f8b80191211dbfa565027d0087-flexvolume-dir\") pod \"kube-controller-manager-srv-ntuey.gb1.brightbox.com\" (UID: \"cf1ca1f8b80191211dbfa565027d0087\") " pod="kube-system/kube-controller-manager-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:45.573565 kubelet[2977]: I1013 06:51:45.572933 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cf1ca1f8b80191211dbfa565027d0087-kubeconfig\") pod \"kube-controller-manager-srv-ntuey.gb1.brightbox.com\" (UID: \"cf1ca1f8b80191211dbfa565027d0087\") " pod="kube-system/kube-controller-manager-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:45.573565 kubelet[2977]: I1013 06:51:45.572951 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/81a2bdd7e2846aa7030c307fe31ee9f9-ca-certs\") pod \"kube-apiserver-srv-ntuey.gb1.brightbox.com\" (UID: 
\"81a2bdd7e2846aa7030c307fe31ee9f9\") " pod="kube-system/kube-apiserver-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:45.573565 kubelet[2977]: I1013 06:51:45.573266 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/81a2bdd7e2846aa7030c307fe31ee9f9-k8s-certs\") pod \"kube-apiserver-srv-ntuey.gb1.brightbox.com\" (UID: \"81a2bdd7e2846aa7030c307fe31ee9f9\") " pod="kube-system/kube-apiserver-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:45.573565 kubelet[2977]: I1013 06:51:45.573297 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/81a2bdd7e2846aa7030c307fe31ee9f9-usr-share-ca-certificates\") pod \"kube-apiserver-srv-ntuey.gb1.brightbox.com\" (UID: \"81a2bdd7e2846aa7030c307fe31ee9f9\") " pod="kube-system/kube-apiserver-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:45.586503 kubelet[2977]: I1013 06:51:45.586442 2977 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 13 06:51:45.586813 kubelet[2977]: I1013 06:51:45.586794 2977 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 13 06:51:45.590871 kubelet[2977]: I1013 06:51:45.590747 2977 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 13 06:51:45.591150 kubelet[2977]: E1013 06:51:45.591076 2977 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-ntuey.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:45.654802 kubelet[2977]: I1013 06:51:45.654508 2977 kubelet_node_status.go:75] "Attempting to register node" node="srv-ntuey.gb1.brightbox.com" Oct 13 06:51:45.666579 kubelet[2977]: I1013 06:51:45.666028 2977 kubelet_node_status.go:124] "Node was previously registered" node="srv-ntuey.gb1.brightbox.com" Oct 13 06:51:45.666579 kubelet[2977]: I1013 06:51:45.666112 2977 kubelet_node_status.go:78] "Successfully registered node" node="srv-ntuey.gb1.brightbox.com" Oct 13 06:51:46.299496 kubelet[2977]: I1013 06:51:46.299449 2977 apiserver.go:52] "Watching apiserver" Oct 13 06:51:46.369347 kubelet[2977]: I1013 06:51:46.369299 2977 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 13 06:51:46.436389 kubelet[2977]: I1013 06:51:46.436218 2977 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-ntuey.gb1.brightbox.com" podStartSLOduration=1.436189758 podStartE2EDuration="1.436189758s" podCreationTimestamp="2025-10-13 06:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:51:46.435029759 +0000 UTC m=+1.285411319" watchObservedRunningTime="2025-10-13 06:51:46.436189758 +0000 UTC m=+1.286571310" Oct 13 06:51:46.445882 kubelet[2977]: I1013 06:51:46.445824 2977 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-ntuey.gb1.brightbox.com" podStartSLOduration=1.445802748 podStartE2EDuration="1.445802748s" podCreationTimestamp="2025-10-13 
06:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:51:46.443646348 +0000 UTC m=+1.294027908" watchObservedRunningTime="2025-10-13 06:51:46.445802748 +0000 UTC m=+1.296184287" Oct 13 06:51:46.455586 kubelet[2977]: I1013 06:51:46.455399 2977 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-ntuey.gb1.brightbox.com" podStartSLOduration=2.455381051 podStartE2EDuration="2.455381051s" podCreationTimestamp="2025-10-13 06:51:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:51:46.4539787 +0000 UTC m=+1.304360304" watchObservedRunningTime="2025-10-13 06:51:46.455381051 +0000 UTC m=+1.305762587" Oct 13 06:51:46.488095 kubelet[2977]: I1013 06:51:46.488037 2977 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:46.488991 kubelet[2977]: I1013 06:51:46.488948 2977 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:46.500741 kubelet[2977]: I1013 06:51:46.500662 2977 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 13 06:51:46.500741 kubelet[2977]: E1013 06:51:46.500727 2977 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-ntuey.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:46.501446 kubelet[2977]: I1013 06:51:46.501422 2977 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 13 06:51:46.501556 kubelet[2977]: E1013 06:51:46.501477 2977 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-ntuey.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-ntuey.gb1.brightbox.com" Oct 13 06:51:49.275830 kubelet[2977]: I1013 06:51:49.275781 2977 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 13 06:51:49.276912 containerd[1656]: time="2025-10-13T06:51:49.276832510Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 13 06:51:49.277435 kubelet[2977]: I1013 06:51:49.277408 2977 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 13 06:51:50.298008 systemd[1]: Created slice kubepods-besteffort-podfc4e9517_529c_4187_ad1c_33a223d92dd6.slice - libcontainer container kubepods-besteffort-podfc4e9517_529c_4187_ad1c_33a223d92dd6.slice. 
Oct 13 06:51:50.310412 kubelet[2977]: I1013 06:51:50.310361 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fc4e9517-529c-4187-ad1c-33a223d92dd6-kube-proxy\") pod \"kube-proxy-cn4l7\" (UID: \"fc4e9517-529c-4187-ad1c-33a223d92dd6\") " pod="kube-system/kube-proxy-cn4l7" Oct 13 06:51:50.310412 kubelet[2977]: I1013 06:51:50.310411 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc4e9517-529c-4187-ad1c-33a223d92dd6-xtables-lock\") pod \"kube-proxy-cn4l7\" (UID: \"fc4e9517-529c-4187-ad1c-33a223d92dd6\") " pod="kube-system/kube-proxy-cn4l7" Oct 13 06:51:50.310930 kubelet[2977]: I1013 06:51:50.310436 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc4e9517-529c-4187-ad1c-33a223d92dd6-lib-modules\") pod \"kube-proxy-cn4l7\" (UID: \"fc4e9517-529c-4187-ad1c-33a223d92dd6\") " pod="kube-system/kube-proxy-cn4l7" Oct 13 06:51:50.310930 kubelet[2977]: I1013 06:51:50.310458 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4gs7\" (UniqueName: \"kubernetes.io/projected/fc4e9517-529c-4187-ad1c-33a223d92dd6-kube-api-access-s4gs7\") pod \"kube-proxy-cn4l7\" (UID: \"fc4e9517-529c-4187-ad1c-33a223d92dd6\") " pod="kube-system/kube-proxy-cn4l7" Oct 13 06:51:50.503858 systemd[1]: Created slice kubepods-besteffort-pod181fb754_2f87_4349_8d5b_a1337dc63218.slice - libcontainer container kubepods-besteffort-pod181fb754_2f87_4349_8d5b_a1337dc63218.slice. Oct 13 06:51:50.512717 kubelet[2977]: I1013 06:51:50.512060 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/181fb754-2f87-4349-8d5b-a1337dc63218-var-lib-calico\") pod \"tigera-operator-755d956888-bsd97\" (UID: \"181fb754-2f87-4349-8d5b-a1337dc63218\") " pod="tigera-operator/tigera-operator-755d956888-bsd97" Oct 13 06:51:50.512717 kubelet[2977]: I1013 06:51:50.512216 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhxdc\" (UniqueName: \"kubernetes.io/projected/181fb754-2f87-4349-8d5b-a1337dc63218-kube-api-access-xhxdc\") pod \"tigera-operator-755d956888-bsd97\" (UID: \"181fb754-2f87-4349-8d5b-a1337dc63218\") " pod="tigera-operator/tigera-operator-755d956888-bsd97" Oct 13 06:51:50.609186 containerd[1656]: time="2025-10-13T06:51:50.607987164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cn4l7,Uid:fc4e9517-529c-4187-ad1c-33a223d92dd6,Namespace:kube-system,Attempt:0,}" Oct 13 06:51:50.645382 containerd[1656]: time="2025-10-13T06:51:50.645078262Z" level=info msg="connecting to shim 682ec3d6575a16c4764b1a1892bd05b5c24345045d02f0f88ba2205ac2345dd1" address="unix:///run/containerd/s/c222a55bd87f49e342e0ee80b9f42c8118a07c83e3ab2fb45797e3d8d6dd9369" namespace=k8s.io protocol=ttrpc version=3 Oct 13 06:51:50.679382 systemd[1]: Started cri-containerd-682ec3d6575a16c4764b1a1892bd05b5c24345045d02f0f88ba2205ac2345dd1.scope - libcontainer container 682ec3d6575a16c4764b1a1892bd05b5c24345045d02f0f88ba2205ac2345dd1. 
Oct 13 06:51:50.719887 containerd[1656]: time="2025-10-13T06:51:50.719815353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cn4l7,Uid:fc4e9517-529c-4187-ad1c-33a223d92dd6,Namespace:kube-system,Attempt:0,} returns sandbox id \"682ec3d6575a16c4764b1a1892bd05b5c24345045d02f0f88ba2205ac2345dd1\"" Oct 13 06:51:50.726954 containerd[1656]: time="2025-10-13T06:51:50.726697189Z" level=info msg="CreateContainer within sandbox \"682ec3d6575a16c4764b1a1892bd05b5c24345045d02f0f88ba2205ac2345dd1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 13 06:51:50.739199 containerd[1656]: time="2025-10-13T06:51:50.738083471Z" level=info msg="Container be26609ec9151f19c9b33bbea8b96100daba1881e7e93e9acb764eca15578aa6: CDI devices from CRI Config.CDIDevices: []" Oct 13 06:51:50.753296 containerd[1656]: time="2025-10-13T06:51:50.753216594Z" level=info msg="CreateContainer within sandbox \"682ec3d6575a16c4764b1a1892bd05b5c24345045d02f0f88ba2205ac2345dd1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"be26609ec9151f19c9b33bbea8b96100daba1881e7e93e9acb764eca15578aa6\"" Oct 13 06:51:50.755639 containerd[1656]: time="2025-10-13T06:51:50.755550793Z" level=info msg="StartContainer for \"be26609ec9151f19c9b33bbea8b96100daba1881e7e93e9acb764eca15578aa6\"" Oct 13 06:51:50.760449 containerd[1656]: time="2025-10-13T06:51:50.760378581Z" level=info msg="connecting to shim be26609ec9151f19c9b33bbea8b96100daba1881e7e93e9acb764eca15578aa6" address="unix:///run/containerd/s/c222a55bd87f49e342e0ee80b9f42c8118a07c83e3ab2fb45797e3d8d6dd9369" protocol=ttrpc version=3 Oct 13 06:51:50.796541 systemd[1]: Started cri-containerd-be26609ec9151f19c9b33bbea8b96100daba1881e7e93e9acb764eca15578aa6.scope - libcontainer container be26609ec9151f19c9b33bbea8b96100daba1881e7e93e9acb764eca15578aa6. Oct 13 06:51:50.810190 containerd[1656]: time="2025-10-13T06:51:50.810000401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-bsd97,Uid:181fb754-2f87-4349-8d5b-a1337dc63218,Namespace:tigera-operator,Attempt:0,}" Oct 13 06:51:50.824730 containerd[1656]: time="2025-10-13T06:51:50.824683861Z" level=info msg="connecting to shim ec7bcf1a4a0f31583625de1199b40f63d47ed2d74a86c63b1acbe47db8ed1931" address="unix:///run/containerd/s/44e48973a333145963992f9bcb387a9df127509c4197c1864dbaf67a8393240d" namespace=k8s.io protocol=ttrpc version=3 Oct 13 06:51:50.862483 systemd[1]: Started cri-containerd-ec7bcf1a4a0f31583625de1199b40f63d47ed2d74a86c63b1acbe47db8ed1931.scope - libcontainer container ec7bcf1a4a0f31583625de1199b40f63d47ed2d74a86c63b1acbe47db8ed1931. 
Oct 13 06:51:50.870602 containerd[1656]: time="2025-10-13T06:51:50.870501536Z" level=info msg="StartContainer for \"be26609ec9151f19c9b33bbea8b96100daba1881e7e93e9acb764eca15578aa6\" returns successfully" Oct 13 06:51:50.952799 containerd[1656]: time="2025-10-13T06:51:50.952747157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-bsd97,Uid:181fb754-2f87-4349-8d5b-a1337dc63218,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ec7bcf1a4a0f31583625de1199b40f63d47ed2d74a86c63b1acbe47db8ed1931\"" Oct 13 06:51:50.955293 containerd[1656]: time="2025-10-13T06:51:50.955015801Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Oct 13 06:51:52.520022 kubelet[2977]: I1013 06:51:52.519427 2977 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cn4l7" podStartSLOduration=2.519404801 podStartE2EDuration="2.519404801s" podCreationTimestamp="2025-10-13 06:51:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:51:51.526609399 +0000 UTC m=+6.376991044" watchObservedRunningTime="2025-10-13 06:51:52.519404801 +0000 UTC m=+7.369786360" Oct 13 06:51:52.967909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2747590770.mount: Deactivated successfully. Oct 13 06:51:53.866043 containerd[1656]: time="2025-10-13T06:51:53.865504948Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:51:53.867416 containerd[1656]: time="2025-10-13T06:51:53.867362511Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Oct 13 06:51:53.867661 containerd[1656]: time="2025-10-13T06:51:53.867544438Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:51:53.872281 containerd[1656]: time="2025-10-13T06:51:53.872225358Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:51:53.872842 containerd[1656]: time="2025-10-13T06:51:53.872816780Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.917744247s" Oct 13 06:51:53.872952 containerd[1656]: time="2025-10-13T06:51:53.872935568Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Oct 13 06:51:53.877591 containerd[1656]: time="2025-10-13T06:51:53.877553318Z" level=info msg="CreateContainer within sandbox \"ec7bcf1a4a0f31583625de1199b40f63d47ed2d74a86c63b1acbe47db8ed1931\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 13 06:51:53.891351 containerd[1656]: time="2025-10-13T06:51:53.889625656Z" level=info msg="Container 56f879959a40c448df2207a4b8a989ad1dfb747497f70f48b894f1136329f3cb: CDI devices from CRI Config.CDIDevices: []" Oct 13 06:51:53.892825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3381787996.mount: 
Deactivated successfully. Oct 13 06:51:53.896777 containerd[1656]: time="2025-10-13T06:51:53.896694143Z" level=info msg="CreateContainer within sandbox \"ec7bcf1a4a0f31583625de1199b40f63d47ed2d74a86c63b1acbe47db8ed1931\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"56f879959a40c448df2207a4b8a989ad1dfb747497f70f48b894f1136329f3cb\"" Oct 13 06:51:53.897481 containerd[1656]: time="2025-10-13T06:51:53.897459235Z" level=info msg="StartContainer for \"56f879959a40c448df2207a4b8a989ad1dfb747497f70f48b894f1136329f3cb\"" Oct 13 06:51:53.898688 containerd[1656]: time="2025-10-13T06:51:53.898525514Z" level=info msg="connecting to shim 56f879959a40c448df2207a4b8a989ad1dfb747497f70f48b894f1136329f3cb" address="unix:///run/containerd/s/44e48973a333145963992f9bcb387a9df127509c4197c1864dbaf67a8393240d" protocol=ttrpc version=3 Oct 13 06:51:53.929356 systemd[1]: Started cri-containerd-56f879959a40c448df2207a4b8a989ad1dfb747497f70f48b894f1136329f3cb.scope - libcontainer container 56f879959a40c448df2207a4b8a989ad1dfb747497f70f48b894f1136329f3cb. Oct 13 06:51:53.965864 containerd[1656]: time="2025-10-13T06:51:53.965828296Z" level=info msg="StartContainer for \"56f879959a40c448df2207a4b8a989ad1dfb747497f70f48b894f1136329f3cb\" returns successfully" Oct 13 06:51:54.538449 kubelet[2977]: I1013 06:51:54.538340 2977 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-bsd97" podStartSLOduration=1.618814407 podStartE2EDuration="4.538292303s" podCreationTimestamp="2025-10-13 06:51:50 +0000 UTC" firstStartedPulling="2025-10-13 06:51:50.954564194 +0000 UTC m=+5.804945761" lastFinishedPulling="2025-10-13 06:51:53.874042111 +0000 UTC m=+8.724423657" observedRunningTime="2025-10-13 06:51:54.536951969 +0000 UTC m=+9.387333588" watchObservedRunningTime="2025-10-13 06:51:54.538292303 +0000 UTC m=+9.388673956" Oct 13 06:52:00.989069 sudo[1977]: pam_unix(sudo:session): session closed for user root Oct 13 06:52:01.137173 sshd[1976]: Connection closed by 139.178.68.195 port 58226 Oct 13 06:52:01.136790 sshd-session[1973]: pam_unix(sshd:session): session closed for user core Oct 13 06:52:01.145255 systemd[1]: sshd@8-10.244.93.206:22-139.178.68.195:58226.service: Deactivated successfully. Oct 13 06:52:01.148979 systemd[1]: session-11.scope: Deactivated successfully. Oct 13 06:52:01.151519 systemd[1]: session-11.scope: Consumed 6.179s CPU time, 154.3M memory peak. Oct 13 06:52:01.157038 systemd-logind[1638]: Session 11 logged out. Waiting for processes to exit. Oct 13 06:52:01.157964 systemd-logind[1638]: Removed session 11. Oct 13 06:52:05.164521 systemd[1]: Created slice kubepods-besteffort-podbf82b694_49ca_4751_aa1d_998d3d0c59ae.slice - libcontainer container kubepods-besteffort-podbf82b694_49ca_4751_aa1d_998d3d0c59ae.slice. 
Oct 13 06:52:05.222504 kubelet[2977]: I1013 06:52:05.222456 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bf82b694-49ca-4751-aa1d-998d3d0c59ae-typha-certs\") pod \"calico-typha-6b98fb549d-k2vmp\" (UID: \"bf82b694-49ca-4751-aa1d-998d3d0c59ae\") " pod="calico-system/calico-typha-6b98fb549d-k2vmp" Oct 13 06:52:05.223496 kubelet[2977]: I1013 06:52:05.223393 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-725c9\" (UniqueName: \"kubernetes.io/projected/bf82b694-49ca-4751-aa1d-998d3d0c59ae-kube-api-access-725c9\") pod \"calico-typha-6b98fb549d-k2vmp\" (UID: \"bf82b694-49ca-4751-aa1d-998d3d0c59ae\") " pod="calico-system/calico-typha-6b98fb549d-k2vmp" Oct 13 06:52:05.223496 kubelet[2977]: I1013 06:52:05.223454 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf82b694-49ca-4751-aa1d-998d3d0c59ae-tigera-ca-bundle\") pod \"calico-typha-6b98fb549d-k2vmp\" (UID: \"bf82b694-49ca-4751-aa1d-998d3d0c59ae\") " pod="calico-system/calico-typha-6b98fb549d-k2vmp" Oct 13 06:52:05.461586 systemd[1]: Created slice kubepods-besteffort-pod1f5cda82_a1e5_4275_b60d_fdfce19b6938.slice - libcontainer container kubepods-besteffort-pod1f5cda82_a1e5_4275_b60d_fdfce19b6938.slice. Oct 13 06:52:05.468273 containerd[1656]: time="2025-10-13T06:52:05.468188970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b98fb549d-k2vmp,Uid:bf82b694-49ca-4751-aa1d-998d3d0c59ae,Namespace:calico-system,Attempt:0,}" Oct 13 06:52:05.510784 containerd[1656]: time="2025-10-13T06:52:05.510678770Z" level=info msg="connecting to shim d2a418d19c5b37dc037a2f49172f59f161bf990c0d9bf876662ebffa8ba984fa" address="unix:///run/containerd/s/21854af640ae5d10d03ff8496452b03a21938c3aeab0bf1dd22b1ef34645130c" namespace=k8s.io protocol=ttrpc version=3 Oct 13 06:52:05.529055 kubelet[2977]: I1013 06:52:05.528584 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1f5cda82-a1e5-4275-b60d-fdfce19b6938-node-certs\") pod \"calico-node-jcbjn\" (UID: \"1f5cda82-a1e5-4275-b60d-fdfce19b6938\") " pod="calico-system/calico-node-jcbjn" Oct 13 06:52:05.529055 kubelet[2977]: I1013 06:52:05.528631 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlxb2\" (UniqueName: \"kubernetes.io/projected/1f5cda82-a1e5-4275-b60d-fdfce19b6938-kube-api-access-nlxb2\") pod \"calico-node-jcbjn\" (UID: \"1f5cda82-a1e5-4275-b60d-fdfce19b6938\") " pod="calico-system/calico-node-jcbjn" Oct 13 06:52:05.529055 kubelet[2977]: I1013 06:52:05.528656 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1f5cda82-a1e5-4275-b60d-fdfce19b6938-cni-log-dir\") pod \"calico-node-jcbjn\" (UID: \"1f5cda82-a1e5-4275-b60d-fdfce19b6938\") " pod="calico-system/calico-node-jcbjn" Oct 13 06:52:05.529055 kubelet[2977]: I1013 06:52:05.528681 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1f5cda82-a1e5-4275-b60d-fdfce19b6938-cni-net-dir\") pod \"calico-node-jcbjn\" (UID: \"1f5cda82-a1e5-4275-b60d-fdfce19b6938\") " 
pod="calico-system/calico-node-jcbjn" Oct 13 06:52:05.529055 kubelet[2977]: I1013 06:52:05.528699 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f5cda82-a1e5-4275-b60d-fdfce19b6938-lib-modules\") pod \"calico-node-jcbjn\" (UID: \"1f5cda82-a1e5-4275-b60d-fdfce19b6938\") " pod="calico-system/calico-node-jcbjn" Oct 13 06:52:05.529869 kubelet[2977]: I1013 06:52:05.528718 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f5cda82-a1e5-4275-b60d-fdfce19b6938-tigera-ca-bundle\") pod \"calico-node-jcbjn\" (UID: \"1f5cda82-a1e5-4275-b60d-fdfce19b6938\") " pod="calico-system/calico-node-jcbjn" Oct 13 06:52:05.529869 kubelet[2977]: I1013 06:52:05.528736 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1f5cda82-a1e5-4275-b60d-fdfce19b6938-cni-bin-dir\") pod \"calico-node-jcbjn\" (UID: \"1f5cda82-a1e5-4275-b60d-fdfce19b6938\") " pod="calico-system/calico-node-jcbjn" Oct 13 06:52:05.529869 kubelet[2977]: I1013 06:52:05.528753 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f5cda82-a1e5-4275-b60d-fdfce19b6938-xtables-lock\") pod \"calico-node-jcbjn\" (UID: \"1f5cda82-a1e5-4275-b60d-fdfce19b6938\") " pod="calico-system/calico-node-jcbjn" Oct 13 06:52:05.529869 kubelet[2977]: I1013 06:52:05.528773 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1f5cda82-a1e5-4275-b60d-fdfce19b6938-flexvol-driver-host\") pod \"calico-node-jcbjn\" (UID: \"1f5cda82-a1e5-4275-b60d-fdfce19b6938\") " pod="calico-system/calico-node-jcbjn" Oct 13 06:52:05.529869 kubelet[2977]: I1013 06:52:05.528800 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1f5cda82-a1e5-4275-b60d-fdfce19b6938-var-run-calico\") pod \"calico-node-jcbjn\" (UID: \"1f5cda82-a1e5-4275-b60d-fdfce19b6938\") " pod="calico-system/calico-node-jcbjn" Oct 13 06:52:05.530018 kubelet[2977]: I1013 06:52:05.528821 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1f5cda82-a1e5-4275-b60d-fdfce19b6938-policysync\") pod \"calico-node-jcbjn\" (UID: \"1f5cda82-a1e5-4275-b60d-fdfce19b6938\") " pod="calico-system/calico-node-jcbjn" Oct 13 06:52:05.530018 kubelet[2977]: I1013 06:52:05.528844 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1f5cda82-a1e5-4275-b60d-fdfce19b6938-var-lib-calico\") pod \"calico-node-jcbjn\" (UID: \"1f5cda82-a1e5-4275-b60d-fdfce19b6938\") " pod="calico-system/calico-node-jcbjn" Oct 13 06:52:05.575353 systemd[1]: Started cri-containerd-d2a418d19c5b37dc037a2f49172f59f161bf990c0d9bf876662ebffa8ba984fa.scope - libcontainer container d2a418d19c5b37dc037a2f49172f59f161bf990c0d9bf876662ebffa8ba984fa. 
Oct 13 06:52:05.635123 kubelet[2977]: E1013 06:52:05.635032 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:05.635123 kubelet[2977]: W1013 06:52:05.635068 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:05.635678 kubelet[2977]: E1013 06:52:05.635485 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:05.649641 kubelet[2977]: E1013 06:52:05.649607 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:05.650286 kubelet[2977]: W1013 06:52:05.649952 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:05.650286 kubelet[2977]: E1013 06:52:05.649989 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:05.656244 kubelet[2977]: E1013 06:52:05.656211 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:05.656244 kubelet[2977]: W1013 06:52:05.656240 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:05.656391 kubelet[2977]: E1013 06:52:05.656265 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 06:52:05.710523 containerd[1656]: time="2025-10-13T06:52:05.710397773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b98fb549d-k2vmp,Uid:bf82b694-49ca-4751-aa1d-998d3d0c59ae,Namespace:calico-system,Attempt:0,} returns sandbox id \"d2a418d19c5b37dc037a2f49172f59f161bf990c0d9bf876662ebffa8ba984fa\"" Oct 13 06:52:05.715367 containerd[1656]: time="2025-10-13T06:52:05.715093053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Oct 13 06:52:05.736305 kubelet[2977]: E1013 06:52:05.736258 2977 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vjdvq" podUID="917edfcb-100c-4720-b0de-c02da5d69423" Oct 13 06:52:05.765601 containerd[1656]: time="2025-10-13T06:52:05.765158640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jcbjn,Uid:1f5cda82-a1e5-4275-b60d-fdfce19b6938,Namespace:calico-system,Attempt:0,}" Oct 13 06:52:05.791899 containerd[1656]: time="2025-10-13T06:52:05.791748477Z" level=info msg="connecting to shim e71d1b5a323911a306b7332eba51951154f17ec2a29067e888a5c9060851668f" address="unix:///run/containerd/s/d5d0b97b7e221bf77ac6e46a9e51eb03e7d94c8b4f008ec8fe3d098f7481769b" namespace=k8s.io protocol=ttrpc version=3 Oct 13 06:52:05.799136 kubelet[2977]: E1013 06:52:05.798968 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:05.799136 kubelet[2977]: W1013 06:52:05.799008 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:05.799136 kubelet[2977]: E1013 06:52:05.799031 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:05.800246 kubelet[2977]: E1013 06:52:05.800201 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:05.800246 kubelet[2977]: W1013 06:52:05.800216 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:05.800657 kubelet[2977]: E1013 06:52:05.800467 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:05.801415 kubelet[2977]: E1013 06:52:05.801270 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:05.801415 kubelet[2977]: W1013 06:52:05.801334 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:05.801415 kubelet[2977]: E1013 06:52:05.801350 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 06:52:05.810152 kubelet[2977]: E1013 06:52:05.810035 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:05.810152 kubelet[2977]: W1013 06:52:05.810051 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:05.810152 kubelet[2977]: E1013 06:52:05.810066 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:05.810684 kubelet[2977]: E1013 06:52:05.810578 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:05.810684 kubelet[2977]: W1013 06:52:05.810591 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:05.810684 kubelet[2977]: E1013 06:52:05.810606 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:05.811111 kubelet[2977]: E1013 06:52:05.811097 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:05.811235 kubelet[2977]: W1013 06:52:05.811180 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:05.811235 kubelet[2977]: E1013 06:52:05.811195 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:05.813356 kubelet[2977]: E1013 06:52:05.812947 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:05.813356 kubelet[2977]: W1013 06:52:05.812962 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:05.813356 kubelet[2977]: E1013 06:52:05.812975 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:05.813356 kubelet[2977]: E1013 06:52:05.813175 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:05.813356 kubelet[2977]: W1013 06:52:05.813183 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:05.813356 kubelet[2977]: E1013 06:52:05.813203 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 06:52:05.813741 kubelet[2977]: E1013 06:52:05.813641 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:05.813741 kubelet[2977]: W1013 06:52:05.813653 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:05.813741 kubelet[2977]: E1013 06:52:05.813664 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:05.814448 kubelet[2977]: E1013 06:52:05.814210 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:05.814448 kubelet[2977]: W1013 06:52:05.814223 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:05.814448 kubelet[2977]: E1013 06:52:05.814234 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:05.814448 kubelet[2977]: E1013 06:52:05.814388 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:05.814448 kubelet[2977]: W1013 06:52:05.814394 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:05.814448 kubelet[2977]: E1013 06:52:05.814402 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:05.814936 kubelet[2977]: E1013 06:52:05.814822 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:05.814936 kubelet[2977]: W1013 06:52:05.814863 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:05.814936 kubelet[2977]: E1013 06:52:05.814876 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:05.815589 kubelet[2977]: E1013 06:52:05.815292 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:05.815589 kubelet[2977]: W1013 06:52:05.815303 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:05.815589 kubelet[2977]: E1013 06:52:05.815314 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 06:52:05.815862 kubelet[2977]: E1013 06:52:05.815803 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:05.815862 kubelet[2977]: W1013 06:52:05.815815 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:05.815862 kubelet[2977]: E1013 06:52:05.815826 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:05.816371 kubelet[2977]: E1013 06:52:05.816219 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:05.816371 kubelet[2977]: W1013 06:52:05.816247 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:05.816371 kubelet[2977]: E1013 06:52:05.816258 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:05.816841 kubelet[2977]: E1013 06:52:05.816768 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:05.816841 kubelet[2977]: W1013 06:52:05.816780 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:05.816841 kubelet[2977]: E1013 06:52:05.816791 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:05.817294 kubelet[2977]: E1013 06:52:05.817281 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:05.817554 kubelet[2977]: W1013 06:52:05.817369 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:05.817554 kubelet[2977]: E1013 06:52:05.817385 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:05.818687 kubelet[2977]: E1013 06:52:05.818299 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:05.818687 kubelet[2977]: W1013 06:52:05.818312 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:05.818687 kubelet[2977]: E1013 06:52:05.818324 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 06:52:05.818687 kubelet[2977]: E1013 06:52:05.818506 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:05.818687 kubelet[2977]: W1013 06:52:05.818513 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:05.818687 kubelet[2977]: E1013 06:52:05.818548 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:05.819214 kubelet[2977]: E1013 06:52:05.819025 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:05.819214 kubelet[2977]: W1013 06:52:05.819036 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:05.819214 kubelet[2977]: E1013 06:52:05.819047 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:05.838196 kubelet[2977]: E1013 06:52:05.837709 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:05.838196 kubelet[2977]: W1013 06:52:05.837732 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:05.838997 kubelet[2977]: E1013 06:52:05.838977 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:05.839233 kubelet[2977]: I1013 06:52:05.839126 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7r69\" (UniqueName: \"kubernetes.io/projected/917edfcb-100c-4720-b0de-c02da5d69423-kube-api-access-q7r69\") pod \"csi-node-driver-vjdvq\" (UID: \"917edfcb-100c-4720-b0de-c02da5d69423\") " pod="calico-system/csi-node-driver-vjdvq" Oct 13 06:52:05.839513 kubelet[2977]: E1013 06:52:05.839500 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:05.839672 kubelet[2977]: W1013 06:52:05.839567 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:05.839672 kubelet[2977]: E1013 06:52:05.839583 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Oct 13 06:52:05.839672 kubelet[2977]: I1013 06:52:05.839611 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/917edfcb-100c-4720-b0de-c02da5d69423-kubelet-dir\") pod \"csi-node-driver-vjdvq\" (UID: \"917edfcb-100c-4720-b0de-c02da5d69423\") " pod="calico-system/csi-node-driver-vjdvq"
Oct 13 06:52:05.840465 kubelet[2977]: I1013 06:52:05.840373 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/917edfcb-100c-4720-b0de-c02da5d69423-varrun\") pod \"csi-node-driver-vjdvq\" (UID: \"917edfcb-100c-4720-b0de-c02da5d69423\") " pod="calico-system/csi-node-driver-vjdvq"
Oct 13 06:52:05.840784 kubelet[2977]: I1013 06:52:05.840681 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/917edfcb-100c-4720-b0de-c02da5d69423-socket-dir\") pod \"csi-node-driver-vjdvq\" (UID: \"917edfcb-100c-4720-b0de-c02da5d69423\") " pod="calico-system/csi-node-driver-vjdvq"
Oct 13 06:52:05.842997 kubelet[2977]: I1013 06:52:05.841210 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/917edfcb-100c-4720-b0de-c02da5d69423-registration-dir\") pod \"csi-node-driver-vjdvq\" (UID: \"917edfcb-100c-4720-b0de-c02da5d69423\") " pod="calico-system/csi-node-driver-vjdvq"
Oct 13 06:52:05.842380 systemd[1]: Started cri-containerd-e71d1b5a323911a306b7332eba51951154f17ec2a29067e888a5c9060851668f.scope - libcontainer container e71d1b5a323911a306b7332eba51951154f17ec2a29067e888a5c9060851668f.
Oct 13 06:52:05.934975 containerd[1656]: time="2025-10-13T06:52:05.934919607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jcbjn,Uid:1f5cda82-a1e5-4275-b60d-fdfce19b6938,Namespace:calico-system,Attempt:0,} returns sandbox id \"e71d1b5a323911a306b7332eba51951154f17ec2a29067e888a5c9060851668f\""
Oct 13 06:52:07.445533 kubelet[2977]: E1013 06:52:07.444666 2977 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vjdvq" podUID="917edfcb-100c-4720-b0de-c02da5d69423"
Oct 13 06:52:07.698730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1039885414.mount: Deactivated successfully.
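The repeated driver-call.go/plugins.go entries above come from the kubelet's FlexVolume probe: it finds the plugin directory nodeagent~uds, invokes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the executable is not present, and the resulting empty output then fails JSON unmarshalling. As a minimal illustrative sketch (a hypothetical stub, not the real nodeagent~uds driver), a FlexVolume driver that satisfies the init probe only needs to print a small JSON status object:

    // flexvol_stub.go - minimal illustrative FlexVolume driver stub (hypothetical,
    // not the actual nodeagent~uds binary). It answers the "init" probe the kubelet
    // issues while scanning the plugin directory; a real driver would also
    // implement mount, unmount and related verbs.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // driverStatus mirrors the JSON shape the kubelet expects back from a
    // FlexVolume call: a status string plus optional capability flags.
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        verb := ""
        if len(os.Args) > 1 {
            verb = os.Args[1]
        }
        if verb == "init" {
            // A non-empty JSON reply is what avoids "unexpected end of JSON input".
            out, _ := json.Marshal(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
            fmt.Println(string(out))
            return
        }
        // Every other verb is reported as unsupported in this sketch.
        out, _ := json.Marshal(driverStatus{Status: "Not supported", Message: "unsupported verb: " + verb})
        fmt.Println(string(out))
        os.Exit(1)
    }

With a binary along these lines installed at the probed path, the init call would return non-empty JSON and the dynamic plugin probe would stop logging the unmarshal error.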
Oct 13 06:52:09.406658 containerd[1656]: time="2025-10-13T06:52:09.406592710Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 06:52:09.407922 containerd[1656]: time="2025-10-13T06:52:09.407595475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Oct 13 06:52:09.408306 containerd[1656]: time="2025-10-13T06:52:09.408125845Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 06:52:09.409924 containerd[1656]: time="2025-10-13T06:52:09.409853035Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 13 06:52:09.412153 containerd[1656]: time="2025-10-13T06:52:09.412111954Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.696977994s"
Oct 13 06:52:09.412232 containerd[1656]: time="2025-10-13T06:52:09.412171127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Oct 13 06:52:09.413768 containerd[1656]: time="2025-10-13T06:52:09.413704549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Oct 13 06:52:09.428663 containerd[1656]: time="2025-10-13T06:52:09.428617056Z" level=info msg="CreateContainer within sandbox \"d2a418d19c5b37dc037a2f49172f59f161bf990c0d9bf876662ebffa8ba984fa\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Oct 13 06:52:09.437169 containerd[1656]: time="2025-10-13T06:52:09.435765309Z" level=info msg="Container fc88752e8aa92e9f96708e39c63dbc2f5db89dc4314bf976f2a5f93334c8e992: CDI devices from CRI Config.CDIDevices: []"
Oct 13 06:52:09.444674 kubelet[2977]: E1013 06:52:09.444595 2977 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vjdvq" podUID="917edfcb-100c-4720-b0de-c02da5d69423"
Oct 13 06:52:09.480515 containerd[1656]: time="2025-10-13T06:52:09.480467298Z" level=info msg="CreateContainer within sandbox \"d2a418d19c5b37dc037a2f49172f59f161bf990c0d9bf876662ebffa8ba984fa\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fc88752e8aa92e9f96708e39c63dbc2f5db89dc4314bf976f2a5f93334c8e992\""
Oct 13 06:52:09.481264 containerd[1656]: time="2025-10-13T06:52:09.481241964Z" level=info msg="StartContainer for \"fc88752e8aa92e9f96708e39c63dbc2f5db89dc4314bf976f2a5f93334c8e992\""
Oct 13 06:52:09.482938 containerd[1656]: time="2025-10-13T06:52:09.482909688Z" level=info msg="connecting to shim fc88752e8aa92e9f96708e39c63dbc2f5db89dc4314bf976f2a5f93334c8e992" address="unix:///run/containerd/s/21854af640ae5d10d03ff8496452b03a21938c3aeab0bf1dd22b1ef34645130c" protocol=ttrpc version=3
Oct 13 06:52:09.519442 systemd[1]: Started cri-containerd-fc88752e8aa92e9f96708e39c63dbc2f5db89dc4314bf976f2a5f93334c8e992.scope - libcontainer container fc88752e8aa92e9f96708e39c63dbc2f5db89dc4314bf976f2a5f93334c8e992.
Oct 13 06:52:09.590288 containerd[1656]: time="2025-10-13T06:52:09.590228908Z" level=info msg="StartContainer for \"fc88752e8aa92e9f96708e39c63dbc2f5db89dc4314bf976f2a5f93334c8e992\" returns successfully"
Oct 13 06:52:09.644741 kubelet[2977]: I1013 06:52:09.644435 2977 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6b98fb549d-k2vmp" podStartSLOduration=0.945045019 podStartE2EDuration="4.644414525s" podCreationTimestamp="2025-10-13 06:52:05 +0000 UTC" firstStartedPulling="2025-10-13 06:52:05.714082506 +0000 UTC m=+20.564464040" lastFinishedPulling="2025-10-13 06:52:09.413452011 +0000 UTC m=+24.263833546" observedRunningTime="2025-10-13 06:52:09.641615517 +0000 UTC m=+24.491997053" watchObservedRunningTime="2025-10-13 06:52:09.644414525 +0000 UTC m=+24.494796085"
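For the pod_startup_latency_tracker entry above, the two durations are consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). A small stand-alone check of that arithmetic, using the timestamps from the entry (an illustrative calculation, not kubelet code):

    // latency_check.go - reproduces the startup-duration arithmetic from the
    // log entry above using its own timestamps (illustrative only).
    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-10-13 06:52:05 +0000 UTC")
        firstPull := mustParse("2025-10-13 06:52:05.714082506 +0000 UTC")
        lastPull := mustParse("2025-10-13 06:52:09.413452011 +0000 UTC")
        watched := mustParse("2025-10-13 06:52:09.644414525 +0000 UTC")

        e2e := watched.Sub(created)          // 4.644414525s, the logged podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // ~0.945045s, the logged podStartSLOduration
        fmt.Println(e2e, slo)
    }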
Oct 13 06:52:10.668576 kubelet[2977]: E1013 06:52:10.667087 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 13 06:52:10.668576 kubelet[2977]: W1013 06:52:10.667095 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 13 06:52:10.668576 kubelet[2977]: E1013 06:52:10.667104 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 06:52:10.694637 kubelet[2977]: E1013 06:52:10.694599 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:10.694959 kubelet[2977]: W1013 06:52:10.694817 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:10.694959 kubelet[2977]: E1013 06:52:10.694853 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:10.695397 kubelet[2977]: E1013 06:52:10.695306 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:10.695397 kubelet[2977]: W1013 06:52:10.695322 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:10.695397 kubelet[2977]: E1013 06:52:10.695336 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:10.695720 kubelet[2977]: E1013 06:52:10.695695 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:10.695774 kubelet[2977]: W1013 06:52:10.695719 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:10.695774 kubelet[2977]: E1013 06:52:10.695737 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:10.695959 kubelet[2977]: E1013 06:52:10.695946 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:10.696004 kubelet[2977]: W1013 06:52:10.695960 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:10.696004 kubelet[2977]: E1013 06:52:10.695971 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:10.696198 kubelet[2977]: E1013 06:52:10.696185 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:10.696240 kubelet[2977]: W1013 06:52:10.696197 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:10.696240 kubelet[2977]: E1013 06:52:10.696208 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 06:52:10.696492 kubelet[2977]: E1013 06:52:10.696479 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:10.696542 kubelet[2977]: W1013 06:52:10.696492 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:10.696542 kubelet[2977]: E1013 06:52:10.696503 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:10.696883 kubelet[2977]: E1013 06:52:10.696857 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:10.696883 kubelet[2977]: W1013 06:52:10.696878 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:10.696989 kubelet[2977]: E1013 06:52:10.696898 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:10.697226 kubelet[2977]: E1013 06:52:10.697176 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:10.697226 kubelet[2977]: W1013 06:52:10.697190 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:10.697226 kubelet[2977]: E1013 06:52:10.697201 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:10.697427 kubelet[2977]: E1013 06:52:10.697415 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:10.697469 kubelet[2977]: W1013 06:52:10.697438 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:10.697469 kubelet[2977]: E1013 06:52:10.697449 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:10.697661 kubelet[2977]: E1013 06:52:10.697648 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:10.697706 kubelet[2977]: W1013 06:52:10.697661 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:10.697706 kubelet[2977]: E1013 06:52:10.697670 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 06:52:10.697915 kubelet[2977]: E1013 06:52:10.697902 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:10.697915 kubelet[2977]: W1013 06:52:10.697915 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:10.698019 kubelet[2977]: E1013 06:52:10.697925 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:10.698131 kubelet[2977]: E1013 06:52:10.698118 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:10.698193 kubelet[2977]: W1013 06:52:10.698130 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:10.698193 kubelet[2977]: E1013 06:52:10.698175 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:10.698417 kubelet[2977]: E1013 06:52:10.698405 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:10.698417 kubelet[2977]: W1013 06:52:10.698417 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:10.698505 kubelet[2977]: E1013 06:52:10.698427 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:10.698692 kubelet[2977]: E1013 06:52:10.698678 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:10.698692 kubelet[2977]: W1013 06:52:10.698691 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:10.698776 kubelet[2977]: E1013 06:52:10.698708 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:10.699336 kubelet[2977]: E1013 06:52:10.699305 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:10.699386 kubelet[2977]: W1013 06:52:10.699344 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:10.699386 kubelet[2977]: E1013 06:52:10.699358 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 06:52:10.700621 kubelet[2977]: E1013 06:52:10.700588 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:10.700621 kubelet[2977]: W1013 06:52:10.700612 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:10.700741 kubelet[2977]: E1013 06:52:10.700627 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:10.702164 kubelet[2977]: E1013 06:52:10.702125 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:10.702385 kubelet[2977]: W1013 06:52:10.702231 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:10.702385 kubelet[2977]: E1013 06:52:10.702248 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 06:52:10.702842 kubelet[2977]: E1013 06:52:10.702820 2977 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 06:52:10.702842 kubelet[2977]: W1013 06:52:10.702837 2977 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 06:52:10.702912 kubelet[2977]: E1013 06:52:10.702856 2977 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 06:52:11.225762 containerd[1656]: time="2025-10-13T06:52:11.225713870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:52:11.227326 containerd[1656]: time="2025-10-13T06:52:11.227294114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Oct 13 06:52:11.228188 containerd[1656]: time="2025-10-13T06:52:11.228081045Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:52:11.230768 containerd[1656]: time="2025-10-13T06:52:11.230197088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:52:11.231214 containerd[1656]: time="2025-10-13T06:52:11.230755569Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.817023454s" Oct 13 06:52:11.231317 containerd[1656]: time="2025-10-13T06:52:11.231302938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Oct 13 06:52:11.238601 containerd[1656]: time="2025-10-13T06:52:11.238558225Z" level=info msg="CreateContainer within sandbox \"e71d1b5a323911a306b7332eba51951154f17ec2a29067e888a5c9060851668f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 13 06:52:11.249386 containerd[1656]: time="2025-10-13T06:52:11.249323422Z" level=info msg="Container b3d0743c10b0ac0fd5228d67618a42197b56b29bdee0a5554de28fa8356a1a84: CDI devices from CRI Config.CDIDevices: []" Oct 13 06:52:11.261710 containerd[1656]: time="2025-10-13T06:52:11.261323328Z" level=info msg="CreateContainer within sandbox \"e71d1b5a323911a306b7332eba51951154f17ec2a29067e888a5c9060851668f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b3d0743c10b0ac0fd5228d67618a42197b56b29bdee0a5554de28fa8356a1a84\"" Oct 13 06:52:11.263177 containerd[1656]: time="2025-10-13T06:52:11.263090370Z" level=info msg="StartContainer for \"b3d0743c10b0ac0fd5228d67618a42197b56b29bdee0a5554de28fa8356a1a84\"" Oct 13 06:52:11.267988 containerd[1656]: time="2025-10-13T06:52:11.267914257Z" level=info msg="connecting to shim b3d0743c10b0ac0fd5228d67618a42197b56b29bdee0a5554de28fa8356a1a84" address="unix:///run/containerd/s/d5d0b97b7e221bf77ac6e46a9e51eb03e7d94c8b4f008ec8fe3d098f7481769b" protocol=ttrpc version=3 Oct 13 06:52:11.295301 systemd[1]: Started cri-containerd-b3d0743c10b0ac0fd5228d67618a42197b56b29bdee0a5554de28fa8356a1a84.scope - libcontainer container b3d0743c10b0ac0fd5228d67618a42197b56b29bdee0a5554de28fa8356a1a84. 
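The repeated driver-call.go/plugins.go messages above come from kubelet re-probing its FlexVolume plugin directory and finding /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds missing: the "init" call produces empty stdout, and unmarshalling an empty byte slice is exactly what produces Go's "unexpected end of JSON input" error. The flexvol-driver container created just above from calico's pod2daemon-flexvol image appears to exist precisely to install that uds binary. A minimal sketch reproducing the unmarshal failure; the DriverStatus field names are illustrative, not kubelet's exact types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus is an illustrative stand-in for the JSON a FlexVolume
// driver is expected to print on stdout in response to "init".
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// A missing driver binary means the "init" call returns no output at all.
	output := []byte("")

	var st DriverStatus
	if err := json.Unmarshal(output, &st); err != nil {
		// Prints: unmarshal failed: unexpected end of JSON input
		fmt.Println("unmarshal failed:", err)
		return
	}
	fmt.Printf("driver init status: %+v\n", st)
}
```

Once a driver binary exists at that path and prints valid JSON, the probe succeeds and these messages stop, which is presumably why they taper off after the flexvol-driver container runs.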
Oct 13 06:52:11.361630 containerd[1656]: time="2025-10-13T06:52:11.361484491Z" level=info msg="StartContainer for \"b3d0743c10b0ac0fd5228d67618a42197b56b29bdee0a5554de28fa8356a1a84\" returns successfully" Oct 13 06:52:11.386033 systemd[1]: cri-containerd-b3d0743c10b0ac0fd5228d67618a42197b56b29bdee0a5554de28fa8356a1a84.scope: Deactivated successfully. Oct 13 06:52:11.413929 containerd[1656]: time="2025-10-13T06:52:11.413862786Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b3d0743c10b0ac0fd5228d67618a42197b56b29bdee0a5554de28fa8356a1a84\" id:\"b3d0743c10b0ac0fd5228d67618a42197b56b29bdee0a5554de28fa8356a1a84\" pid:3690 exited_at:{seconds:1760338331 nanos:389099831}" Oct 13 06:52:11.421876 containerd[1656]: time="2025-10-13T06:52:11.421810430Z" level=info msg="received exit event container_id:\"b3d0743c10b0ac0fd5228d67618a42197b56b29bdee0a5554de28fa8356a1a84\" id:\"b3d0743c10b0ac0fd5228d67618a42197b56b29bdee0a5554de28fa8356a1a84\" pid:3690 exited_at:{seconds:1760338331 nanos:389099831}" Oct 13 06:52:11.446822 kubelet[2977]: E1013 06:52:11.443624 2977 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vjdvq" podUID="917edfcb-100c-4720-b0de-c02da5d69423" Oct 13 06:52:11.475187 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3d0743c10b0ac0fd5228d67618a42197b56b29bdee0a5554de28fa8356a1a84-rootfs.mount: Deactivated successfully. Oct 13 06:52:11.635054 containerd[1656]: time="2025-10-13T06:52:11.634999493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Oct 13 06:52:13.446826 kubelet[2977]: E1013 06:52:13.446700 2977 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vjdvq" podUID="917edfcb-100c-4720-b0de-c02da5d69423" Oct 13 06:52:15.448905 kubelet[2977]: E1013 06:52:15.448835 2977 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vjdvq" podUID="917edfcb-100c-4720-b0de-c02da5d69423" Oct 13 06:52:17.450297 kubelet[2977]: E1013 06:52:17.450251 2977 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vjdvq" podUID="917edfcb-100c-4720-b0de-c02da5d69423" Oct 13 06:52:17.917255 containerd[1656]: time="2025-10-13T06:52:17.917211343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:52:17.918084 containerd[1656]: time="2025-10-13T06:52:17.918053548Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Oct 13 06:52:17.918560 containerd[1656]: time="2025-10-13T06:52:17.918537418Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 
06:52:17.920419 containerd[1656]: time="2025-10-13T06:52:17.920393261Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:52:17.921108 containerd[1656]: time="2025-10-13T06:52:17.921084650Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 6.286030405s" Oct 13 06:52:17.921176 containerd[1656]: time="2025-10-13T06:52:17.921115289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Oct 13 06:52:17.925295 containerd[1656]: time="2025-10-13T06:52:17.925270031Z" level=info msg="CreateContainer within sandbox \"e71d1b5a323911a306b7332eba51951154f17ec2a29067e888a5c9060851668f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 13 06:52:17.955977 containerd[1656]: time="2025-10-13T06:52:17.955718894Z" level=info msg="Container af3487bb4166ca9139cb852993d76bb8707d477ec1a5067df2f6a5625431a91d: CDI devices from CRI Config.CDIDevices: []" Oct 13 06:52:17.985587 containerd[1656]: time="2025-10-13T06:52:17.985528323Z" level=info msg="CreateContainer within sandbox \"e71d1b5a323911a306b7332eba51951154f17ec2a29067e888a5c9060851668f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"af3487bb4166ca9139cb852993d76bb8707d477ec1a5067df2f6a5625431a91d\"" Oct 13 06:52:17.987400 containerd[1656]: time="2025-10-13T06:52:17.987361168Z" level=info msg="StartContainer for \"af3487bb4166ca9139cb852993d76bb8707d477ec1a5067df2f6a5625431a91d\"" Oct 13 06:52:17.991367 containerd[1656]: time="2025-10-13T06:52:17.991291141Z" level=info msg="connecting to shim af3487bb4166ca9139cb852993d76bb8707d477ec1a5067df2f6a5625431a91d" address="unix:///run/containerd/s/d5d0b97b7e221bf77ac6e46a9e51eb03e7d94c8b4f008ec8fe3d098f7481769b" protocol=ttrpc version=3 Oct 13 06:52:18.020355 systemd[1]: Started cri-containerd-af3487bb4166ca9139cb852993d76bb8707d477ec1a5067df2f6a5625431a91d.scope - libcontainer container af3487bb4166ca9139cb852993d76bb8707d477ec1a5067df2f6a5625431a91d. Oct 13 06:52:18.074115 containerd[1656]: time="2025-10-13T06:52:18.074004873Z" level=info msg="StartContainer for \"af3487bb4166ca9139cb852993d76bb8707d477ec1a5067df2f6a5625431a91d\" returns successfully" Oct 13 06:52:18.816210 systemd[1]: cri-containerd-af3487bb4166ca9139cb852993d76bb8707d477ec1a5067df2f6a5625431a91d.scope: Deactivated successfully. Oct 13 06:52:18.817089 systemd[1]: cri-containerd-af3487bb4166ca9139cb852993d76bb8707d477ec1a5067df2f6a5625431a91d.scope: Consumed 676ms CPU time, 163M memory peak, 8.3M read from disk, 171.3M written to disk. 
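The ImageCreate/PullImage entries above are containerd resolving and unpacking ghcr.io/flatcar/calico/cni:v3.30.3; CRI-managed images conventionally live in containerd's "k8s.io" namespace. A minimal sketch of the same pull through containerd's Go client, assuming the containerd 1.x module layout and the conventional /run/containerd/containerd.sock socket path (the log only shows the per-task shim socket, not the client socket):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to containerd's default client socket (conventional path).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images and containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack the same image the kubelet requested above.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/cni:v3.30.3", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	size, err := img.Size(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pulled %s digest=%s size=%d\n", img.Name(), img.Target().Digest, size)
}
```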
Oct 13 06:52:18.834250 containerd[1656]: time="2025-10-13T06:52:18.834202100Z" level=info msg="received exit event container_id:\"af3487bb4166ca9139cb852993d76bb8707d477ec1a5067df2f6a5625431a91d\" id:\"af3487bb4166ca9139cb852993d76bb8707d477ec1a5067df2f6a5625431a91d\" pid:3748 exited_at:{seconds:1760338338 nanos:833751676}" Oct 13 06:52:18.834766 containerd[1656]: time="2025-10-13T06:52:18.834746256Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af3487bb4166ca9139cb852993d76bb8707d477ec1a5067df2f6a5625431a91d\" id:\"af3487bb4166ca9139cb852993d76bb8707d477ec1a5067df2f6a5625431a91d\" pid:3748 exited_at:{seconds:1760338338 nanos:833751676}" Oct 13 06:52:18.910332 kubelet[2977]: I1013 06:52:18.910300 2977 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 13 06:52:18.930860 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af3487bb4166ca9139cb852993d76bb8707d477ec1a5067df2f6a5625431a91d-rootfs.mount: Deactivated successfully. Oct 13 06:52:19.000963 systemd[1]: Created slice kubepods-besteffort-podf857710e_a3a5_4070_a288_09c149e7c12b.slice - libcontainer container kubepods-besteffort-podf857710e_a3a5_4070_a288_09c149e7c12b.slice. Oct 13 06:52:19.052717 systemd[1]: Created slice kubepods-burstable-pod3eba3a7e_78fe_4123_ad80_ff9b190e6fbb.slice - libcontainer container kubepods-burstable-pod3eba3a7e_78fe_4123_ad80_ff9b190e6fbb.slice. Oct 13 06:52:19.068734 kubelet[2977]: I1013 06:52:19.067015 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3eba3a7e-78fe-4123-ad80-ff9b190e6fbb-config-volume\") pod \"coredns-674b8bbfcf-pb7zf\" (UID: \"3eba3a7e-78fe-4123-ad80-ff9b190e6fbb\") " pod="kube-system/coredns-674b8bbfcf-pb7zf" Oct 13 06:52:19.068734 kubelet[2977]: I1013 06:52:19.067060 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f857710e-a3a5-4070-a288-09c149e7c12b-tigera-ca-bundle\") pod \"calico-kube-controllers-649445cb55-nkpkd\" (UID: \"f857710e-a3a5-4070-a288-09c149e7c12b\") " pod="calico-system/calico-kube-controllers-649445cb55-nkpkd" Oct 13 06:52:19.068734 kubelet[2977]: I1013 06:52:19.067083 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvbdj\" (UniqueName: \"kubernetes.io/projected/13ea8fb3-c2c1-4c48-a5c9-318a743690ae-kube-api-access-fvbdj\") pod \"coredns-674b8bbfcf-rx6ng\" (UID: \"13ea8fb3-c2c1-4c48-a5c9-318a743690ae\") " pod="kube-system/coredns-674b8bbfcf-rx6ng" Oct 13 06:52:19.068734 kubelet[2977]: I1013 06:52:19.067105 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr4kp\" (UniqueName: \"kubernetes.io/projected/f857710e-a3a5-4070-a288-09c149e7c12b-kube-api-access-sr4kp\") pod \"calico-kube-controllers-649445cb55-nkpkd\" (UID: \"f857710e-a3a5-4070-a288-09c149e7c12b\") " pod="calico-system/calico-kube-controllers-649445cb55-nkpkd" Oct 13 06:52:19.068734 kubelet[2977]: I1013 06:52:19.067125 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6xxg\" (UniqueName: \"kubernetes.io/projected/61a0959f-cd39-468e-89f5-b6905a773190-kube-api-access-v6xxg\") pod \"calico-apiserver-589b9ff459-dqwm5\" (UID: \"61a0959f-cd39-468e-89f5-b6905a773190\") " pod="calico-apiserver/calico-apiserver-589b9ff459-dqwm5" Oct 13 
06:52:19.068417 systemd[1]: Created slice kubepods-besteffort-pod61a0959f_cd39_468e_89f5_b6905a773190.slice - libcontainer container kubepods-besteffort-pod61a0959f_cd39_468e_89f5_b6905a773190.slice. Oct 13 06:52:19.069376 kubelet[2977]: I1013 06:52:19.069352 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/126b56cc-c792-434a-81d4-98cb03e7ccc5-config\") pod \"goldmane-54d579b49d-nz6q2\" (UID: \"126b56cc-c792-434a-81d4-98cb03e7ccc5\") " pod="calico-system/goldmane-54d579b49d-nz6q2" Oct 13 06:52:19.069426 kubelet[2977]: I1013 06:52:19.069390 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt6z5\" (UniqueName: \"kubernetes.io/projected/3eba3a7e-78fe-4123-ad80-ff9b190e6fbb-kube-api-access-bt6z5\") pod \"coredns-674b8bbfcf-pb7zf\" (UID: \"3eba3a7e-78fe-4123-ad80-ff9b190e6fbb\") " pod="kube-system/coredns-674b8bbfcf-pb7zf" Oct 13 06:52:19.069426 kubelet[2977]: I1013 06:52:19.069411 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/be657f77-eca5-487a-b019-9e9370c90449-whisker-backend-key-pair\") pod \"whisker-7b9db7c8df-gfc5v\" (UID: \"be657f77-eca5-487a-b019-9e9370c90449\") " pod="calico-system/whisker-7b9db7c8df-gfc5v" Oct 13 06:52:19.069834 kubelet[2977]: I1013 06:52:19.069434 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/126b56cc-c792-434a-81d4-98cb03e7ccc5-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-nz6q2\" (UID: \"126b56cc-c792-434a-81d4-98cb03e7ccc5\") " pod="calico-system/goldmane-54d579b49d-nz6q2" Oct 13 06:52:19.069834 kubelet[2977]: I1013 06:52:19.069451 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be657f77-eca5-487a-b019-9e9370c90449-whisker-ca-bundle\") pod \"whisker-7b9db7c8df-gfc5v\" (UID: \"be657f77-eca5-487a-b019-9e9370c90449\") " pod="calico-system/whisker-7b9db7c8df-gfc5v" Oct 13 06:52:19.069834 kubelet[2977]: I1013 06:52:19.069470 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7r6g\" (UniqueName: \"kubernetes.io/projected/be657f77-eca5-487a-b019-9e9370c90449-kube-api-access-v7r6g\") pod \"whisker-7b9db7c8df-gfc5v\" (UID: \"be657f77-eca5-487a-b019-9e9370c90449\") " pod="calico-system/whisker-7b9db7c8df-gfc5v" Oct 13 06:52:19.069834 kubelet[2977]: I1013 06:52:19.069492 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjfmj\" (UniqueName: \"kubernetes.io/projected/126b56cc-c792-434a-81d4-98cb03e7ccc5-kube-api-access-gjfmj\") pod \"goldmane-54d579b49d-nz6q2\" (UID: \"126b56cc-c792-434a-81d4-98cb03e7ccc5\") " pod="calico-system/goldmane-54d579b49d-nz6q2" Oct 13 06:52:19.069834 kubelet[2977]: I1013 06:52:19.069511 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/126b56cc-c792-434a-81d4-98cb03e7ccc5-goldmane-key-pair\") pod \"goldmane-54d579b49d-nz6q2\" (UID: \"126b56cc-c792-434a-81d4-98cb03e7ccc5\") " pod="calico-system/goldmane-54d579b49d-nz6q2" Oct 13 06:52:19.069983 kubelet[2977]: I1013 06:52:19.069529 2977 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13ea8fb3-c2c1-4c48-a5c9-318a743690ae-config-volume\") pod \"coredns-674b8bbfcf-rx6ng\" (UID: \"13ea8fb3-c2c1-4c48-a5c9-318a743690ae\") " pod="kube-system/coredns-674b8bbfcf-rx6ng" Oct 13 06:52:19.069983 kubelet[2977]: I1013 06:52:19.069550 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cd591744-760e-42c9-87b8-ff71e15cfd43-calico-apiserver-certs\") pod \"calico-apiserver-589b9ff459-6r9zx\" (UID: \"cd591744-760e-42c9-87b8-ff71e15cfd43\") " pod="calico-apiserver/calico-apiserver-589b9ff459-6r9zx" Oct 13 06:52:19.069983 kubelet[2977]: I1013 06:52:19.069567 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/61a0959f-cd39-468e-89f5-b6905a773190-calico-apiserver-certs\") pod \"calico-apiserver-589b9ff459-dqwm5\" (UID: \"61a0959f-cd39-468e-89f5-b6905a773190\") " pod="calico-apiserver/calico-apiserver-589b9ff459-dqwm5" Oct 13 06:52:19.069983 kubelet[2977]: I1013 06:52:19.069589 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28lvn\" (UniqueName: \"kubernetes.io/projected/cd591744-760e-42c9-87b8-ff71e15cfd43-kube-api-access-28lvn\") pod \"calico-apiserver-589b9ff459-6r9zx\" (UID: \"cd591744-760e-42c9-87b8-ff71e15cfd43\") " pod="calico-apiserver/calico-apiserver-589b9ff459-6r9zx" Oct 13 06:52:19.076750 systemd[1]: Created slice kubepods-besteffort-podbe657f77_eca5_487a_b019_9e9370c90449.slice - libcontainer container kubepods-besteffort-podbe657f77_eca5_487a_b019_9e9370c90449.slice. Oct 13 06:52:19.083009 systemd[1]: Created slice kubepods-besteffort-podcd591744_760e_42c9_87b8_ff71e15cfd43.slice - libcontainer container kubepods-besteffort-podcd591744_760e_42c9_87b8_ff71e15cfd43.slice. Oct 13 06:52:19.093090 systemd[1]: Created slice kubepods-burstable-pod13ea8fb3_c2c1_4c48_a5c9_318a743690ae.slice - libcontainer container kubepods-burstable-pod13ea8fb3_c2c1_4c48_a5c9_318a743690ae.slice. Oct 13 06:52:19.101241 systemd[1]: Created slice kubepods-besteffort-pod126b56cc_c792_434a_81d4_98cb03e7ccc5.slice - libcontainer container kubepods-besteffort-pod126b56cc_c792_434a_81d4_98cb03e7ccc5.slice. 
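The "Created slice" entries show the systemd cgroup driver materializing one slice per pending pod, named from the QoS class plus the pod UID with dashes mapped to underscores (dashes are systemd's slice-hierarchy separator). A small sketch of that naming rule, purely to make the slice names in the log easier to read; sliceName is an illustrative helper, not kubelet's own code:

```go
package main

import (
	"fmt"
	"strings"
)

// sliceName mirrors the naming visible in the journal: "kubepods", an
// optional QoS infix ("besteffort" or "burstable"; guaranteed pods have
// none), then the pod UID with '-' replaced by '_'.
func sliceName(qos, podUID string) string {
	name := "kubepods"
	if qos != "" {
		name += "-" + qos
	}
	return name + "-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	fmt.Println(sliceName("besteffort", "f857710e-a3a5-4070-a288-09c149e7c12b"))
	// kubepods-besteffort-podf857710e_a3a5_4070_a288_09c149e7c12b.slice
	fmt.Println(sliceName("burstable", "3eba3a7e-78fe-4123-ad80-ff9b190e6fbb"))
	// kubepods-burstable-pod3eba3a7e_78fe_4123_ad80_ff9b190e6fbb.slice
}
```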
Oct 13 06:52:19.350061 containerd[1656]: time="2025-10-13T06:52:19.348592515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-649445cb55-nkpkd,Uid:f857710e-a3a5-4070-a288-09c149e7c12b,Namespace:calico-system,Attempt:0,}" Oct 13 06:52:19.364191 containerd[1656]: time="2025-10-13T06:52:19.363938844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pb7zf,Uid:3eba3a7e-78fe-4123-ad80-ff9b190e6fbb,Namespace:kube-system,Attempt:0,}" Oct 13 06:52:19.375824 containerd[1656]: time="2025-10-13T06:52:19.375796262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589b9ff459-dqwm5,Uid:61a0959f-cd39-468e-89f5-b6905a773190,Namespace:calico-apiserver,Attempt:0,}" Oct 13 06:52:19.399175 containerd[1656]: time="2025-10-13T06:52:19.398522074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589b9ff459-6r9zx,Uid:cd591744-760e-42c9-87b8-ff71e15cfd43,Namespace:calico-apiserver,Attempt:0,}" Oct 13 06:52:19.400585 containerd[1656]: time="2025-10-13T06:52:19.399069479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b9db7c8df-gfc5v,Uid:be657f77-eca5-487a-b019-9e9370c90449,Namespace:calico-system,Attempt:0,}" Oct 13 06:52:19.400865 containerd[1656]: time="2025-10-13T06:52:19.400843509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rx6ng,Uid:13ea8fb3-c2c1-4c48-a5c9-318a743690ae,Namespace:kube-system,Attempt:0,}" Oct 13 06:52:19.407118 containerd[1656]: time="2025-10-13T06:52:19.406624001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-nz6q2,Uid:126b56cc-c792-434a-81d4-98cb03e7ccc5,Namespace:calico-system,Attempt:0,}" Oct 13 06:52:19.457279 systemd[1]: Created slice kubepods-besteffort-pod917edfcb_100c_4720_b0de_c02da5d69423.slice - libcontainer container kubepods-besteffort-pod917edfcb_100c_4720_b0de_c02da5d69423.slice. 
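Each "RunPodSandbox for &PodSandboxMetadata{...}" line above is the CRI call kubelet issues to containerd for a pending pod, and the printed metadata corresponds directly to the CRI protobuf message. A sketch of that message using the published CRI Go bindings; the import path assumes the v1 CRI API, and nothing here is taken from kubelet's own code:

```go
package main

import (
	"fmt"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// The same metadata fields that appear in the RunPodSandbox log entries above.
	req := &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "coredns-674b8bbfcf-pb7zf",
				Uid:       "3eba3a7e-78fe-4123-ad80-ff9b190e6fbb",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	}
	fmt.Printf("%+v\n", req)
}
```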
Oct 13 06:52:19.466024 containerd[1656]: time="2025-10-13T06:52:19.465988453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vjdvq,Uid:917edfcb-100c-4720-b0de-c02da5d69423,Namespace:calico-system,Attempt:0,}" Oct 13 06:52:19.723187 containerd[1656]: time="2025-10-13T06:52:19.721899572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Oct 13 06:52:19.738398 containerd[1656]: time="2025-10-13T06:52:19.738120703Z" level=error msg="Failed to destroy network for sandbox \"2937d71e1e256d212bfe784429b3951c064056b10691d88204df668a1f492e4f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 06:52:19.746173 containerd[1656]: time="2025-10-13T06:52:19.745867116Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pb7zf,Uid:3eba3a7e-78fe-4123-ad80-ff9b190e6fbb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2937d71e1e256d212bfe784429b3951c064056b10691d88204df668a1f492e4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 06:52:19.751544 kubelet[2977]: E1013 06:52:19.751092 2977 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2937d71e1e256d212bfe784429b3951c064056b10691d88204df668a1f492e4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 06:52:19.751544 kubelet[2977]: E1013 06:52:19.751201 2977 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2937d71e1e256d212bfe784429b3951c064056b10691d88204df668a1f492e4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pb7zf" Oct 13 06:52:19.751544 kubelet[2977]: E1013 06:52:19.751237 2977 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2937d71e1e256d212bfe784429b3951c064056b10691d88204df668a1f492e4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pb7zf" Oct 13 06:52:19.752219 kubelet[2977]: E1013 06:52:19.751294 2977 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-pb7zf_kube-system(3eba3a7e-78fe-4123-ad80-ff9b190e6fbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-pb7zf_kube-system(3eba3a7e-78fe-4123-ad80-ff9b190e6fbb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2937d71e1e256d212bfe784429b3951c064056b10691d88204df668a1f492e4f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pb7zf" 
podUID="3eba3a7e-78fe-4123-ad80-ff9b190e6fbb" Oct 13 06:52:19.800676 containerd[1656]: time="2025-10-13T06:52:19.800411094Z" level=error msg="Failed to destroy network for sandbox \"4337020796c06b0a1bc3b0bd7d44d41bb248260ab448702a7b8c5bb0fb39abd9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 06:52:19.803096 containerd[1656]: time="2025-10-13T06:52:19.802851264Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-649445cb55-nkpkd,Uid:f857710e-a3a5-4070-a288-09c149e7c12b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4337020796c06b0a1bc3b0bd7d44d41bb248260ab448702a7b8c5bb0fb39abd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 06:52:19.804579 kubelet[2977]: E1013 06:52:19.804067 2977 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4337020796c06b0a1bc3b0bd7d44d41bb248260ab448702a7b8c5bb0fb39abd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 06:52:19.804579 kubelet[2977]: E1013 06:52:19.804153 2977 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4337020796c06b0a1bc3b0bd7d44d41bb248260ab448702a7b8c5bb0fb39abd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-649445cb55-nkpkd" Oct 13 06:52:19.804579 kubelet[2977]: E1013 06:52:19.804185 2977 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4337020796c06b0a1bc3b0bd7d44d41bb248260ab448702a7b8c5bb0fb39abd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-649445cb55-nkpkd" Oct 13 06:52:19.804813 kubelet[2977]: E1013 06:52:19.804254 2977 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-649445cb55-nkpkd_calico-system(f857710e-a3a5-4070-a288-09c149e7c12b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-649445cb55-nkpkd_calico-system(f857710e-a3a5-4070-a288-09c149e7c12b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4337020796c06b0a1bc3b0bd7d44d41bb248260ab448702a7b8c5bb0fb39abd9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-649445cb55-nkpkd" podUID="f857710e-a3a5-4070-a288-09c149e7c12b" Oct 13 06:52:19.846488 containerd[1656]: time="2025-10-13T06:52:19.846442658Z" level=error msg="Failed to destroy network for sandbox \"f1bfde20773e6e9745aa2a6b9615fa761c35e698144f15fbbd2fb040c454942f\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 06:52:19.848154 containerd[1656]: time="2025-10-13T06:52:19.848097370Z" level=error msg="Failed to destroy network for sandbox \"2d24b30a85740cc568b9925e0fbdedb15fe31cd14316b91d109ffbfabd5f4562\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 06:52:19.848602 containerd[1656]: time="2025-10-13T06:52:19.848570304Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-nz6q2,Uid:126b56cc-c792-434a-81d4-98cb03e7ccc5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1bfde20773e6e9745aa2a6b9615fa761c35e698144f15fbbd2fb040c454942f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 06:52:19.848879 kubelet[2977]: E1013 06:52:19.848808 2977 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1bfde20773e6e9745aa2a6b9615fa761c35e698144f15fbbd2fb040c454942f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 06:52:19.848968 kubelet[2977]: E1013 06:52:19.848875 2977 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1bfde20773e6e9745aa2a6b9615fa761c35e698144f15fbbd2fb040c454942f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-nz6q2" Oct 13 06:52:19.848968 kubelet[2977]: E1013 06:52:19.848899 2977 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1bfde20773e6e9745aa2a6b9615fa761c35e698144f15fbbd2fb040c454942f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-nz6q2" Oct 13 06:52:19.849101 kubelet[2977]: E1013 06:52:19.848967 2977 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-nz6q2_calico-system(126b56cc-c792-434a-81d4-98cb03e7ccc5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-nz6q2_calico-system(126b56cc-c792-434a-81d4-98cb03e7ccc5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1bfde20773e6e9745aa2a6b9615fa761c35e698144f15fbbd2fb040c454942f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-nz6q2" podUID="126b56cc-c792-434a-81d4-98cb03e7ccc5" Oct 13 06:52:19.849952 containerd[1656]: time="2025-10-13T06:52:19.849887081Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-589b9ff459-dqwm5,Uid:61a0959f-cd39-468e-89f5-b6905a773190,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d24b30a85740cc568b9925e0fbdedb15fe31cd14316b91d109ffbfabd5f4562\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 06:52:19.850657 kubelet[2977]: E1013 06:52:19.850449 2977 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d24b30a85740cc568b9925e0fbdedb15fe31cd14316b91d109ffbfabd5f4562\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 06:52:19.850657 kubelet[2977]: E1013 06:52:19.850513 2977 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d24b30a85740cc568b9925e0fbdedb15fe31cd14316b91d109ffbfabd5f4562\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-589b9ff459-dqwm5" Oct 13 06:52:19.850657 kubelet[2977]: E1013 06:52:19.850535 2977 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d24b30a85740cc568b9925e0fbdedb15fe31cd14316b91d109ffbfabd5f4562\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-589b9ff459-dqwm5" Oct 13 06:52:19.850853 kubelet[2977]: E1013 06:52:19.850600 2977 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-589b9ff459-dqwm5_calico-apiserver(61a0959f-cd39-468e-89f5-b6905a773190)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-589b9ff459-dqwm5_calico-apiserver(61a0959f-cd39-468e-89f5-b6905a773190)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d24b30a85740cc568b9925e0fbdedb15fe31cd14316b91d109ffbfabd5f4562\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-589b9ff459-dqwm5" podUID="61a0959f-cd39-468e-89f5-b6905a773190" Oct 13 06:52:19.859290 containerd[1656]: time="2025-10-13T06:52:19.859249453Z" level=error msg="Failed to destroy network for sandbox \"d061fceca791ea84d8d8b5ed962dd62fb0ac04ab9dda4783414cb3f8baa58d8c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 06:52:19.862803 containerd[1656]: time="2025-10-13T06:52:19.862772048Z" level=error msg="Failed to destroy network for sandbox \"07a0da652b0644b038e4bac0dcc7e948ed51a6a95eaca01e605ee28a03030865\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 
13 06:52:19.862904 containerd[1656]: time="2025-10-13T06:52:19.862880771Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rx6ng,Uid:13ea8fb3-c2c1-4c48-a5c9-318a743690ae,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d061fceca791ea84d8d8b5ed962dd62fb0ac04ab9dda4783414cb3f8baa58d8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 06:52:19.863104 kubelet[2977]: E1013 06:52:19.863061 2977 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d061fceca791ea84d8d8b5ed962dd62fb0ac04ab9dda4783414cb3f8baa58d8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 06:52:19.863289 kubelet[2977]: E1013 06:52:19.863115 2977 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d061fceca791ea84d8d8b5ed962dd62fb0ac04ab9dda4783414cb3f8baa58d8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rx6ng" Oct 13 06:52:19.863289 kubelet[2977]: E1013 06:52:19.863235 2977 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d061fceca791ea84d8d8b5ed962dd62fb0ac04ab9dda4783414cb3f8baa58d8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rx6ng" Oct 13 06:52:19.863500 kubelet[2977]: E1013 06:52:19.863290 2977 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-rx6ng_kube-system(13ea8fb3-c2c1-4c48-a5c9-318a743690ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-rx6ng_kube-system(13ea8fb3-c2c1-4c48-a5c9-318a743690ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d061fceca791ea84d8d8b5ed962dd62fb0ac04ab9dda4783414cb3f8baa58d8c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-rx6ng" podUID="13ea8fb3-c2c1-4c48-a5c9-318a743690ae" Oct 13 06:52:19.865416 containerd[1656]: time="2025-10-13T06:52:19.865378508Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b9db7c8df-gfc5v,Uid:be657f77-eca5-487a-b019-9e9370c90449,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"07a0da652b0644b038e4bac0dcc7e948ed51a6a95eaca01e605ee28a03030865\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 06:52:19.865798 kubelet[2977]: E1013 06:52:19.865551 2977 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"07a0da652b0644b038e4bac0dcc7e948ed51a6a95eaca01e605ee28a03030865\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 06:52:19.865798 kubelet[2977]: E1013 06:52:19.865619 2977 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07a0da652b0644b038e4bac0dcc7e948ed51a6a95eaca01e605ee28a03030865\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b9db7c8df-gfc5v" Oct 13 06:52:19.865798 kubelet[2977]: E1013 06:52:19.865681 2977 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07a0da652b0644b038e4bac0dcc7e948ed51a6a95eaca01e605ee28a03030865\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b9db7c8df-gfc5v" Oct 13 06:52:19.866275 kubelet[2977]: E1013 06:52:19.866209 2977 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b9db7c8df-gfc5v_calico-system(be657f77-eca5-487a-b019-9e9370c90449)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7b9db7c8df-gfc5v_calico-system(be657f77-eca5-487a-b019-9e9370c90449)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"07a0da652b0644b038e4bac0dcc7e948ed51a6a95eaca01e605ee28a03030865\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b9db7c8df-gfc5v" podUID="be657f77-eca5-487a-b019-9e9370c90449" Oct 13 06:52:19.869375 containerd[1656]: time="2025-10-13T06:52:19.869332290Z" level=error msg="Failed to destroy network for sandbox \"740cb21d8d18a1d6d93524fd6be34b190a2dccbac33a0a04299f208531674338\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 06:52:19.871527 containerd[1656]: time="2025-10-13T06:52:19.871451877Z" level=error msg="Failed to destroy network for sandbox \"c4af8485d7d8d3b406df5850495f37b153333d65e91e297beb1064209abbb09f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 06:52:19.872292 containerd[1656]: time="2025-10-13T06:52:19.872222946Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vjdvq,Uid:917edfcb-100c-4720-b0de-c02da5d69423,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"740cb21d8d18a1d6d93524fd6be34b190a2dccbac33a0a04299f208531674338\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 06:52:19.872653 kubelet[2977]: E1013 06:52:19.872593 2977 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"740cb21d8d18a1d6d93524fd6be34b190a2dccbac33a0a04299f208531674338\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 06:52:19.872748 kubelet[2977]: E1013 06:52:19.872705 2977 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"740cb21d8d18a1d6d93524fd6be34b190a2dccbac33a0a04299f208531674338\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vjdvq" Oct 13 06:52:19.872808 kubelet[2977]: E1013 06:52:19.872778 2977 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"740cb21d8d18a1d6d93524fd6be34b190a2dccbac33a0a04299f208531674338\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vjdvq" Oct 13 06:52:19.873163 kubelet[2977]: E1013 06:52:19.872918 2977 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vjdvq_calico-system(917edfcb-100c-4720-b0de-c02da5d69423)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vjdvq_calico-system(917edfcb-100c-4720-b0de-c02da5d69423)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"740cb21d8d18a1d6d93524fd6be34b190a2dccbac33a0a04299f208531674338\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vjdvq" podUID="917edfcb-100c-4720-b0de-c02da5d69423" Oct 13 06:52:19.875462 containerd[1656]: time="2025-10-13T06:52:19.875393948Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589b9ff459-6r9zx,Uid:cd591744-760e-42c9-87b8-ff71e15cfd43,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4af8485d7d8d3b406df5850495f37b153333d65e91e297beb1064209abbb09f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 06:52:19.875784 kubelet[2977]: E1013 06:52:19.875723 2977 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4af8485d7d8d3b406df5850495f37b153333d65e91e297beb1064209abbb09f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 06:52:19.875911 kubelet[2977]: E1013 06:52:19.875774 2977 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4af8485d7d8d3b406df5850495f37b153333d65e91e297beb1064209abbb09f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-589b9ff459-6r9zx" Oct 13 06:52:19.875911 kubelet[2977]: E1013 06:52:19.875882 2977 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4af8485d7d8d3b406df5850495f37b153333d65e91e297beb1064209abbb09f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-589b9ff459-6r9zx" Oct 13 06:52:19.876313 kubelet[2977]: E1013 06:52:19.876084 2977 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-589b9ff459-6r9zx_calico-apiserver(cd591744-760e-42c9-87b8-ff71e15cfd43)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-589b9ff459-6r9zx_calico-apiserver(cd591744-760e-42c9-87b8-ff71e15cfd43)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4af8485d7d8d3b406df5850495f37b153333d65e91e297beb1064209abbb09f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-589b9ff459-6r9zx" podUID="cd591744-760e-42c9-87b8-ff71e15cfd43" Oct 13 06:52:27.419654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4018019242.mount: Deactivated successfully. Oct 13 06:52:27.542437 containerd[1656]: time="2025-10-13T06:52:27.542256168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:52:27.548180 containerd[1656]: time="2025-10-13T06:52:27.527280932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Oct 13 06:52:27.559631 containerd[1656]: time="2025-10-13T06:52:27.559525726Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:52:27.560234 containerd[1656]: time="2025-10-13T06:52:27.560202725Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:52:27.563040 containerd[1656]: time="2025-10-13T06:52:27.563003655Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 7.838697895s" Oct 13 06:52:27.563099 containerd[1656]: time="2025-10-13T06:52:27.563045010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Oct 13 06:52:27.613325 containerd[1656]: time="2025-10-13T06:52:27.613274828Z" level=info msg="CreateContainer within sandbox \"e71d1b5a323911a306b7332eba51951154f17ec2a29067e888a5c9060851668f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 13 06:52:27.691316 containerd[1656]: time="2025-10-13T06:52:27.687420754Z" level=info msg="Container 
6352b9a6d93362d383bc30e0f4701caf6b8cc61d0d0abdd3e26ef639e8002964: CDI devices from CRI Config.CDIDevices: []" Oct 13 06:52:27.698388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3424604489.mount: Deactivated successfully. Oct 13 06:52:27.706371 containerd[1656]: time="2025-10-13T06:52:27.706333617Z" level=info msg="CreateContainer within sandbox \"e71d1b5a323911a306b7332eba51951154f17ec2a29067e888a5c9060851668f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6352b9a6d93362d383bc30e0f4701caf6b8cc61d0d0abdd3e26ef639e8002964\"" Oct 13 06:52:27.707094 containerd[1656]: time="2025-10-13T06:52:27.706925357Z" level=info msg="StartContainer for \"6352b9a6d93362d383bc30e0f4701caf6b8cc61d0d0abdd3e26ef639e8002964\"" Oct 13 06:52:27.722422 containerd[1656]: time="2025-10-13T06:52:27.722318036Z" level=info msg="connecting to shim 6352b9a6d93362d383bc30e0f4701caf6b8cc61d0d0abdd3e26ef639e8002964" address="unix:///run/containerd/s/d5d0b97b7e221bf77ac6e46a9e51eb03e7d94c8b4f008ec8fe3d098f7481769b" protocol=ttrpc version=3 Oct 13 06:52:27.868287 systemd[1]: Started cri-containerd-6352b9a6d93362d383bc30e0f4701caf6b8cc61d0d0abdd3e26ef639e8002964.scope - libcontainer container 6352b9a6d93362d383bc30e0f4701caf6b8cc61d0d0abdd3e26ef639e8002964. Oct 13 06:52:27.944192 containerd[1656]: time="2025-10-13T06:52:27.941650018Z" level=info msg="StartContainer for \"6352b9a6d93362d383bc30e0f4701caf6b8cc61d0d0abdd3e26ef639e8002964\" returns successfully" Oct 13 06:52:28.064169 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 13 06:52:28.072227 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 13 06:52:28.348298 kubelet[2977]: I1013 06:52:28.347891 2977 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7r6g\" (UniqueName: \"kubernetes.io/projected/be657f77-eca5-487a-b019-9e9370c90449-kube-api-access-v7r6g\") pod \"be657f77-eca5-487a-b019-9e9370c90449\" (UID: \"be657f77-eca5-487a-b019-9e9370c90449\") " Oct 13 06:52:28.348298 kubelet[2977]: I1013 06:52:28.348016 2977 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be657f77-eca5-487a-b019-9e9370c90449-whisker-ca-bundle\") pod \"be657f77-eca5-487a-b019-9e9370c90449\" (UID: \"be657f77-eca5-487a-b019-9e9370c90449\") " Oct 13 06:52:28.348298 kubelet[2977]: I1013 06:52:28.348077 2977 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/be657f77-eca5-487a-b019-9e9370c90449-whisker-backend-key-pair\") pod \"be657f77-eca5-487a-b019-9e9370c90449\" (UID: \"be657f77-eca5-487a-b019-9e9370c90449\") " Oct 13 06:52:28.363766 kubelet[2977]: I1013 06:52:28.363706 2977 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be657f77-eca5-487a-b019-9e9370c90449-kube-api-access-v7r6g" (OuterVolumeSpecName: "kube-api-access-v7r6g") pod "be657f77-eca5-487a-b019-9e9370c90449" (UID: "be657f77-eca5-487a-b019-9e9370c90449"). InnerVolumeSpecName "kube-api-access-v7r6g". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 06:52:28.364322 kubelet[2977]: I1013 06:52:28.364292 2977 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be657f77-eca5-487a-b019-9e9370c90449-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "be657f77-eca5-487a-b019-9e9370c90449" (UID: "be657f77-eca5-487a-b019-9e9370c90449"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 06:52:28.366931 kubelet[2977]: I1013 06:52:28.366799 2977 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be657f77-eca5-487a-b019-9e9370c90449-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "be657f77-eca5-487a-b019-9e9370c90449" (UID: "be657f77-eca5-487a-b019-9e9370c90449"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 06:52:28.426405 systemd[1]: var-lib-kubelet-pods-be657f77\x2deca5\x2d487a\x2db019\x2d9e9370c90449-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv7r6g.mount: Deactivated successfully. Oct 13 06:52:28.426541 systemd[1]: var-lib-kubelet-pods-be657f77\x2deca5\x2d487a\x2db019\x2d9e9370c90449-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 13 06:52:28.458570 kubelet[2977]: I1013 06:52:28.458461 2977 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v7r6g\" (UniqueName: \"kubernetes.io/projected/be657f77-eca5-487a-b019-9e9370c90449-kube-api-access-v7r6g\") on node \"srv-ntuey.gb1.brightbox.com\" DevicePath \"\"" Oct 13 06:52:28.458570 kubelet[2977]: I1013 06:52:28.458530 2977 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be657f77-eca5-487a-b019-9e9370c90449-whisker-ca-bundle\") on node \"srv-ntuey.gb1.brightbox.com\" DevicePath \"\"" Oct 13 06:52:28.458570 kubelet[2977]: I1013 06:52:28.458542 2977 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/be657f77-eca5-487a-b019-9e9370c90449-whisker-backend-key-pair\") on node \"srv-ntuey.gb1.brightbox.com\" DevicePath \"\"" Oct 13 06:52:28.812218 systemd[1]: Removed slice kubepods-besteffort-podbe657f77_eca5_487a_b019_9e9370c90449.slice - libcontainer container kubepods-besteffort-podbe657f77_eca5_487a_b019_9e9370c90449.slice. Oct 13 06:52:28.838881 kubelet[2977]: I1013 06:52:28.835486 2977 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jcbjn" podStartSLOduration=2.208104325 podStartE2EDuration="23.833018217s" podCreationTimestamp="2025-10-13 06:52:05 +0000 UTC" firstStartedPulling="2025-10-13 06:52:05.938871696 +0000 UTC m=+20.789253232" lastFinishedPulling="2025-10-13 06:52:27.563785592 +0000 UTC m=+42.414167124" observedRunningTime="2025-10-13 06:52:28.827125932 +0000 UTC m=+43.677507488" watchObservedRunningTime="2025-10-13 06:52:28.833018217 +0000 UTC m=+43.683399776" Oct 13 06:52:28.962765 systemd[1]: Created slice kubepods-besteffort-pod5160b85a_ad83_4639_aa51_0d5bd2f1f636.slice - libcontainer container kubepods-besteffort-pod5160b85a_ad83_4639_aa51_0d5bd2f1f636.slice. 
Oct 13 06:52:29.063809 kubelet[2977]: I1013 06:52:29.063652 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prrpc\" (UniqueName: \"kubernetes.io/projected/5160b85a-ad83-4639-aa51-0d5bd2f1f636-kube-api-access-prrpc\") pod \"whisker-57dc657b48-v4cqs\" (UID: \"5160b85a-ad83-4639-aa51-0d5bd2f1f636\") " pod="calico-system/whisker-57dc657b48-v4cqs" Oct 13 06:52:29.063809 kubelet[2977]: I1013 06:52:29.063734 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5160b85a-ad83-4639-aa51-0d5bd2f1f636-whisker-backend-key-pair\") pod \"whisker-57dc657b48-v4cqs\" (UID: \"5160b85a-ad83-4639-aa51-0d5bd2f1f636\") " pod="calico-system/whisker-57dc657b48-v4cqs" Oct 13 06:52:29.063809 kubelet[2977]: I1013 06:52:29.063769 2977 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5160b85a-ad83-4639-aa51-0d5bd2f1f636-whisker-ca-bundle\") pod \"whisker-57dc657b48-v4cqs\" (UID: \"5160b85a-ad83-4639-aa51-0d5bd2f1f636\") " pod="calico-system/whisker-57dc657b48-v4cqs" Oct 13 06:52:29.106430 containerd[1656]: time="2025-10-13T06:52:29.106375629Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6352b9a6d93362d383bc30e0f4701caf6b8cc61d0d0abdd3e26ef639e8002964\" id:\"1da870e6ed1983686243bcf518a642e410898efbe351aef5f36860d3623ae172\" pid:4082 exit_status:1 exited_at:{seconds:1760338349 nanos:86269628}" Oct 13 06:52:29.276780 containerd[1656]: time="2025-10-13T06:52:29.276463326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57dc657b48-v4cqs,Uid:5160b85a-ad83-4639-aa51-0d5bd2f1f636,Namespace:calico-system,Attempt:0,}" Oct 13 06:52:29.448398 kubelet[2977]: I1013 06:52:29.448329 2977 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be657f77-eca5-487a-b019-9e9370c90449" path="/var/lib/kubelet/pods/be657f77-eca5-487a-b019-9e9370c90449/volumes" Oct 13 06:52:29.578196 systemd-networkd[1575]: calib345960ac53: Link UP Oct 13 06:52:29.583611 systemd-networkd[1575]: calib345960ac53: Gained carrier Oct 13 06:52:29.601196 containerd[1656]: 2025-10-13 06:52:29.339 [INFO][4101] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 06:52:29.601196 containerd[1656]: 2025-10-13 06:52:29.360 [INFO][4101] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--ntuey.gb1.brightbox.com-k8s-whisker--57dc657b48--v4cqs-eth0 whisker-57dc657b48- calico-system 5160b85a-ad83-4639-aa51-0d5bd2f1f636 945 0 2025-10-13 06:52:28 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:57dc657b48 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-ntuey.gb1.brightbox.com whisker-57dc657b48-v4cqs eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib345960ac53 [] [] }} ContainerID="767bd29f4281adfcfbac09772618b4778e4d201867b73b5968714b59c0fd6a23" Namespace="calico-system" Pod="whisker-57dc657b48-v4cqs" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-whisker--57dc657b48--v4cqs-" Oct 13 06:52:29.601196 containerd[1656]: 2025-10-13 06:52:29.360 [INFO][4101] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="767bd29f4281adfcfbac09772618b4778e4d201867b73b5968714b59c0fd6a23" Namespace="calico-system" 
Pod="whisker-57dc657b48-v4cqs" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-whisker--57dc657b48--v4cqs-eth0" Oct 13 06:52:29.601196 containerd[1656]: 2025-10-13 06:52:29.470 [INFO][4109] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="767bd29f4281adfcfbac09772618b4778e4d201867b73b5968714b59c0fd6a23" HandleID="k8s-pod-network.767bd29f4281adfcfbac09772618b4778e4d201867b73b5968714b59c0fd6a23" Workload="srv--ntuey.gb1.brightbox.com-k8s-whisker--57dc657b48--v4cqs-eth0" Oct 13 06:52:29.601545 containerd[1656]: 2025-10-13 06:52:29.472 [INFO][4109] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="767bd29f4281adfcfbac09772618b4778e4d201867b73b5968714b59c0fd6a23" HandleID="k8s-pod-network.767bd29f4281adfcfbac09772618b4778e4d201867b73b5968714b59c0fd6a23" Workload="srv--ntuey.gb1.brightbox.com-k8s-whisker--57dc657b48--v4cqs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031e4f0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-ntuey.gb1.brightbox.com", "pod":"whisker-57dc657b48-v4cqs", "timestamp":"2025-10-13 06:52:29.470886536 +0000 UTC"}, Hostname:"srv-ntuey.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 06:52:29.601545 containerd[1656]: 2025-10-13 06:52:29.472 [INFO][4109] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 06:52:29.601545 containerd[1656]: 2025-10-13 06:52:29.472 [INFO][4109] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 06:52:29.601545 containerd[1656]: 2025-10-13 06:52:29.472 [INFO][4109] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-ntuey.gb1.brightbox.com' Oct 13 06:52:29.601545 containerd[1656]: 2025-10-13 06:52:29.486 [INFO][4109] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.767bd29f4281adfcfbac09772618b4778e4d201867b73b5968714b59c0fd6a23" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:29.601545 containerd[1656]: 2025-10-13 06:52:29.494 [INFO][4109] ipam/ipam.go 394: Looking up existing affinities for host host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:29.601545 containerd[1656]: 2025-10-13 06:52:29.501 [INFO][4109] ipam/ipam.go 511: Trying affinity for 192.168.100.192/26 host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:29.601545 containerd[1656]: 2025-10-13 06:52:29.506 [INFO][4109] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.192/26 host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:29.601545 containerd[1656]: 2025-10-13 06:52:29.510 [INFO][4109] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:29.602207 containerd[1656]: 2025-10-13 06:52:29.511 [INFO][4109] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.767bd29f4281adfcfbac09772618b4778e4d201867b73b5968714b59c0fd6a23" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:29.602207 containerd[1656]: 2025-10-13 06:52:29.515 [INFO][4109] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.767bd29f4281adfcfbac09772618b4778e4d201867b73b5968714b59c0fd6a23 Oct 13 06:52:29.602207 containerd[1656]: 2025-10-13 06:52:29.532 [INFO][4109] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.767bd29f4281adfcfbac09772618b4778e4d201867b73b5968714b59c0fd6a23" 
host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:29.602207 containerd[1656]: 2025-10-13 06:52:29.546 [INFO][4109] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.100.193/26] block=192.168.100.192/26 handle="k8s-pod-network.767bd29f4281adfcfbac09772618b4778e4d201867b73b5968714b59c0fd6a23" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:29.602207 containerd[1656]: 2025-10-13 06:52:29.546 [INFO][4109] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.193/26] handle="k8s-pod-network.767bd29f4281adfcfbac09772618b4778e4d201867b73b5968714b59c0fd6a23" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:29.602207 containerd[1656]: 2025-10-13 06:52:29.546 [INFO][4109] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 06:52:29.602207 containerd[1656]: 2025-10-13 06:52:29.546 [INFO][4109] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.193/26] IPv6=[] ContainerID="767bd29f4281adfcfbac09772618b4778e4d201867b73b5968714b59c0fd6a23" HandleID="k8s-pod-network.767bd29f4281adfcfbac09772618b4778e4d201867b73b5968714b59c0fd6a23" Workload="srv--ntuey.gb1.brightbox.com-k8s-whisker--57dc657b48--v4cqs-eth0" Oct 13 06:52:29.602398 containerd[1656]: 2025-10-13 06:52:29.552 [INFO][4101] cni-plugin/k8s.go 418: Populated endpoint ContainerID="767bd29f4281adfcfbac09772618b4778e4d201867b73b5968714b59c0fd6a23" Namespace="calico-system" Pod="whisker-57dc657b48-v4cqs" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-whisker--57dc657b48--v4cqs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ntuey.gb1.brightbox.com-k8s-whisker--57dc657b48--v4cqs-eth0", GenerateName:"whisker-57dc657b48-", Namespace:"calico-system", SelfLink:"", UID:"5160b85a-ad83-4639-aa51-0d5bd2f1f636", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 6, 52, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57dc657b48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ntuey.gb1.brightbox.com", ContainerID:"", Pod:"whisker-57dc657b48-v4cqs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.100.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib345960ac53", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 06:52:29.602398 containerd[1656]: 2025-10-13 06:52:29.552 [INFO][4101] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.193/32] ContainerID="767bd29f4281adfcfbac09772618b4778e4d201867b73b5968714b59c0fd6a23" Namespace="calico-system" Pod="whisker-57dc657b48-v4cqs" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-whisker--57dc657b48--v4cqs-eth0" Oct 13 06:52:29.602987 containerd[1656]: 2025-10-13 06:52:29.552 [INFO][4101] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib345960ac53 ContainerID="767bd29f4281adfcfbac09772618b4778e4d201867b73b5968714b59c0fd6a23" Namespace="calico-system" 
Pod="whisker-57dc657b48-v4cqs" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-whisker--57dc657b48--v4cqs-eth0" Oct 13 06:52:29.602987 containerd[1656]: 2025-10-13 06:52:29.572 [INFO][4101] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="767bd29f4281adfcfbac09772618b4778e4d201867b73b5968714b59c0fd6a23" Namespace="calico-system" Pod="whisker-57dc657b48-v4cqs" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-whisker--57dc657b48--v4cqs-eth0" Oct 13 06:52:29.603138 containerd[1656]: 2025-10-13 06:52:29.574 [INFO][4101] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="767bd29f4281adfcfbac09772618b4778e4d201867b73b5968714b59c0fd6a23" Namespace="calico-system" Pod="whisker-57dc657b48-v4cqs" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-whisker--57dc657b48--v4cqs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ntuey.gb1.brightbox.com-k8s-whisker--57dc657b48--v4cqs-eth0", GenerateName:"whisker-57dc657b48-", Namespace:"calico-system", SelfLink:"", UID:"5160b85a-ad83-4639-aa51-0d5bd2f1f636", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 6, 52, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57dc657b48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ntuey.gb1.brightbox.com", ContainerID:"767bd29f4281adfcfbac09772618b4778e4d201867b73b5968714b59c0fd6a23", Pod:"whisker-57dc657b48-v4cqs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.100.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib345960ac53", MAC:"22:0c:a8:bf:d6:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 06:52:29.603571 containerd[1656]: 2025-10-13 06:52:29.595 [INFO][4101] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="767bd29f4281adfcfbac09772618b4778e4d201867b73b5968714b59c0fd6a23" Namespace="calico-system" Pod="whisker-57dc657b48-v4cqs" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-whisker--57dc657b48--v4cqs-eth0" Oct 13 06:52:29.852044 containerd[1656]: time="2025-10-13T06:52:29.851857782Z" level=info msg="connecting to shim 767bd29f4281adfcfbac09772618b4778e4d201867b73b5968714b59c0fd6a23" address="unix:///run/containerd/s/9194806e45e641459be2a75ffd93cfe9f053a2b9c969954a8d83a4f17d156734" namespace=k8s.io protocol=ttrpc version=3 Oct 13 06:52:29.935339 systemd[1]: Started cri-containerd-767bd29f4281adfcfbac09772618b4778e4d201867b73b5968714b59c0fd6a23.scope - libcontainer container 767bd29f4281adfcfbac09772618b4778e4d201867b73b5968714b59c0fd6a23. 
Oct 13 06:52:30.097688 containerd[1656]: time="2025-10-13T06:52:30.097629960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57dc657b48-v4cqs,Uid:5160b85a-ad83-4639-aa51-0d5bd2f1f636,Namespace:calico-system,Attempt:0,} returns sandbox id \"767bd29f4281adfcfbac09772618b4778e4d201867b73b5968714b59c0fd6a23\"" Oct 13 06:52:30.104862 containerd[1656]: time="2025-10-13T06:52:30.104622679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Oct 13 06:52:30.251633 containerd[1656]: time="2025-10-13T06:52:30.251452640Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6352b9a6d93362d383bc30e0f4701caf6b8cc61d0d0abdd3e26ef639e8002964\" id:\"54db41e26c5a7b4d6678c66390eb7b724d2cebeb683f9d52d43829b3046b23f9\" pid:4204 exit_status:1 exited_at:{seconds:1760338350 nanos:250355208}" Oct 13 06:52:30.445464 containerd[1656]: time="2025-10-13T06:52:30.444741951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pb7zf,Uid:3eba3a7e-78fe-4123-ad80-ff9b190e6fbb,Namespace:kube-system,Attempt:0,}" Oct 13 06:52:30.623340 systemd-networkd[1575]: cali26925f29954: Link UP Oct 13 06:52:30.623641 systemd-networkd[1575]: cali26925f29954: Gained carrier Oct 13 06:52:30.652173 containerd[1656]: 2025-10-13 06:52:30.515 [INFO][4302] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--ntuey.gb1.brightbox.com-k8s-coredns--674b8bbfcf--pb7zf-eth0 coredns-674b8bbfcf- kube-system 3eba3a7e-78fe-4123-ad80-ff9b190e6fbb 865 0 2025-10-13 06:51:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-ntuey.gb1.brightbox.com coredns-674b8bbfcf-pb7zf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali26925f29954 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="db4195a5865573073afca7849ba036a18982707a439c711b18177d972487bb4e" Namespace="kube-system" Pod="coredns-674b8bbfcf-pb7zf" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-coredns--674b8bbfcf--pb7zf-" Oct 13 06:52:30.652173 containerd[1656]: 2025-10-13 06:52:30.516 [INFO][4302] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="db4195a5865573073afca7849ba036a18982707a439c711b18177d972487bb4e" Namespace="kube-system" Pod="coredns-674b8bbfcf-pb7zf" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-coredns--674b8bbfcf--pb7zf-eth0" Oct 13 06:52:30.652173 containerd[1656]: 2025-10-13 06:52:30.569 [INFO][4325] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db4195a5865573073afca7849ba036a18982707a439c711b18177d972487bb4e" HandleID="k8s-pod-network.db4195a5865573073afca7849ba036a18982707a439c711b18177d972487bb4e" Workload="srv--ntuey.gb1.brightbox.com-k8s-coredns--674b8bbfcf--pb7zf-eth0" Oct 13 06:52:30.652478 containerd[1656]: 2025-10-13 06:52:30.570 [INFO][4325] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="db4195a5865573073afca7849ba036a18982707a439c711b18177d972487bb4e" HandleID="k8s-pod-network.db4195a5865573073afca7849ba036a18982707a439c711b18177d972487bb4e" Workload="srv--ntuey.gb1.brightbox.com-k8s-coredns--674b8bbfcf--pb7zf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332a10), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-ntuey.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-pb7zf", "timestamp":"2025-10-13 06:52:30.56985767 +0000 UTC"}, 
Hostname:"srv-ntuey.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 06:52:30.652478 containerd[1656]: 2025-10-13 06:52:30.570 [INFO][4325] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 06:52:30.652478 containerd[1656]: 2025-10-13 06:52:30.570 [INFO][4325] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 06:52:30.652478 containerd[1656]: 2025-10-13 06:52:30.570 [INFO][4325] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-ntuey.gb1.brightbox.com' Oct 13 06:52:30.652478 containerd[1656]: 2025-10-13 06:52:30.578 [INFO][4325] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.db4195a5865573073afca7849ba036a18982707a439c711b18177d972487bb4e" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:30.652478 containerd[1656]: 2025-10-13 06:52:30.585 [INFO][4325] ipam/ipam.go 394: Looking up existing affinities for host host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:30.652478 containerd[1656]: 2025-10-13 06:52:30.590 [INFO][4325] ipam/ipam.go 511: Trying affinity for 192.168.100.192/26 host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:30.652478 containerd[1656]: 2025-10-13 06:52:30.594 [INFO][4325] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.192/26 host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:30.652478 containerd[1656]: 2025-10-13 06:52:30.596 [INFO][4325] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:30.653040 containerd[1656]: 2025-10-13 06:52:30.596 [INFO][4325] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.db4195a5865573073afca7849ba036a18982707a439c711b18177d972487bb4e" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:30.653040 containerd[1656]: 2025-10-13 06:52:30.598 [INFO][4325] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.db4195a5865573073afca7849ba036a18982707a439c711b18177d972487bb4e Oct 13 06:52:30.653040 containerd[1656]: 2025-10-13 06:52:30.604 [INFO][4325] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.db4195a5865573073afca7849ba036a18982707a439c711b18177d972487bb4e" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:30.653040 containerd[1656]: 2025-10-13 06:52:30.611 [INFO][4325] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.100.194/26] block=192.168.100.192/26 handle="k8s-pod-network.db4195a5865573073afca7849ba036a18982707a439c711b18177d972487bb4e" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:30.653040 containerd[1656]: 2025-10-13 06:52:30.612 [INFO][4325] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.194/26] handle="k8s-pod-network.db4195a5865573073afca7849ba036a18982707a439c711b18177d972487bb4e" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:30.653040 containerd[1656]: 2025-10-13 06:52:30.612 [INFO][4325] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 06:52:30.653040 containerd[1656]: 2025-10-13 06:52:30.612 [INFO][4325] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.194/26] IPv6=[] ContainerID="db4195a5865573073afca7849ba036a18982707a439c711b18177d972487bb4e" HandleID="k8s-pod-network.db4195a5865573073afca7849ba036a18982707a439c711b18177d972487bb4e" Workload="srv--ntuey.gb1.brightbox.com-k8s-coredns--674b8bbfcf--pb7zf-eth0" Oct 13 06:52:30.654401 containerd[1656]: 2025-10-13 06:52:30.618 [INFO][4302] cni-plugin/k8s.go 418: Populated endpoint ContainerID="db4195a5865573073afca7849ba036a18982707a439c711b18177d972487bb4e" Namespace="kube-system" Pod="coredns-674b8bbfcf-pb7zf" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-coredns--674b8bbfcf--pb7zf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ntuey.gb1.brightbox.com-k8s-coredns--674b8bbfcf--pb7zf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3eba3a7e-78fe-4123-ad80-ff9b190e6fbb", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 6, 51, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ntuey.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-pb7zf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26925f29954", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 06:52:30.654401 containerd[1656]: 2025-10-13 06:52:30.618 [INFO][4302] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.194/32] ContainerID="db4195a5865573073afca7849ba036a18982707a439c711b18177d972487bb4e" Namespace="kube-system" Pod="coredns-674b8bbfcf-pb7zf" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-coredns--674b8bbfcf--pb7zf-eth0" Oct 13 06:52:30.654401 containerd[1656]: 2025-10-13 06:52:30.619 [INFO][4302] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26925f29954 ContainerID="db4195a5865573073afca7849ba036a18982707a439c711b18177d972487bb4e" Namespace="kube-system" Pod="coredns-674b8bbfcf-pb7zf" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-coredns--674b8bbfcf--pb7zf-eth0" Oct 13 06:52:30.654401 containerd[1656]: 2025-10-13 06:52:30.622 [INFO][4302] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db4195a5865573073afca7849ba036a18982707a439c711b18177d972487bb4e" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-pb7zf" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-coredns--674b8bbfcf--pb7zf-eth0" Oct 13 06:52:30.654401 containerd[1656]: 2025-10-13 06:52:30.625 [INFO][4302] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="db4195a5865573073afca7849ba036a18982707a439c711b18177d972487bb4e" Namespace="kube-system" Pod="coredns-674b8bbfcf-pb7zf" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-coredns--674b8bbfcf--pb7zf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ntuey.gb1.brightbox.com-k8s-coredns--674b8bbfcf--pb7zf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3eba3a7e-78fe-4123-ad80-ff9b190e6fbb", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 6, 51, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ntuey.gb1.brightbox.com", ContainerID:"db4195a5865573073afca7849ba036a18982707a439c711b18177d972487bb4e", Pod:"coredns-674b8bbfcf-pb7zf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26925f29954", MAC:"6a:bc:81:a3:6f:f3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 06:52:30.654401 containerd[1656]: 2025-10-13 06:52:30.643 [INFO][4302] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="db4195a5865573073afca7849ba036a18982707a439c711b18177d972487bb4e" Namespace="kube-system" Pod="coredns-674b8bbfcf-pb7zf" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-coredns--674b8bbfcf--pb7zf-eth0" Oct 13 06:52:30.699075 containerd[1656]: time="2025-10-13T06:52:30.698938047Z" level=info msg="connecting to shim db4195a5865573073afca7849ba036a18982707a439c711b18177d972487bb4e" address="unix:///run/containerd/s/9ab70d363c07c936b5100d2907166b2cf1c4e495ba0eeee41440793845d6bd7a" namespace=k8s.io protocol=ttrpc version=3 Oct 13 06:52:30.749239 systemd[1]: Started cri-containerd-db4195a5865573073afca7849ba036a18982707a439c711b18177d972487bb4e.scope - libcontainer container db4195a5865573073afca7849ba036a18982707a439c711b18177d972487bb4e. 
Oct 13 06:52:30.813781 containerd[1656]: time="2025-10-13T06:52:30.813741045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pb7zf,Uid:3eba3a7e-78fe-4123-ad80-ff9b190e6fbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"db4195a5865573073afca7849ba036a18982707a439c711b18177d972487bb4e\"" Oct 13 06:52:30.819339 containerd[1656]: time="2025-10-13T06:52:30.819306445Z" level=info msg="CreateContainer within sandbox \"db4195a5865573073afca7849ba036a18982707a439c711b18177d972487bb4e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 06:52:30.832159 containerd[1656]: time="2025-10-13T06:52:30.831571632Z" level=info msg="Container 13aa37a7f1232f9b02354829364e77f3288a5e303ab056f2abc5ac52cc77ea72: CDI devices from CRI Config.CDIDevices: []" Oct 13 06:52:30.839601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3460616803.mount: Deactivated successfully. Oct 13 06:52:30.842997 containerd[1656]: time="2025-10-13T06:52:30.842941209Z" level=info msg="CreateContainer within sandbox \"db4195a5865573073afca7849ba036a18982707a439c711b18177d972487bb4e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"13aa37a7f1232f9b02354829364e77f3288a5e303ab056f2abc5ac52cc77ea72\"" Oct 13 06:52:30.844104 containerd[1656]: time="2025-10-13T06:52:30.844083988Z" level=info msg="StartContainer for \"13aa37a7f1232f9b02354829364e77f3288a5e303ab056f2abc5ac52cc77ea72\"" Oct 13 06:52:30.845143 containerd[1656]: time="2025-10-13T06:52:30.845095225Z" level=info msg="connecting to shim 13aa37a7f1232f9b02354829364e77f3288a5e303ab056f2abc5ac52cc77ea72" address="unix:///run/containerd/s/9ab70d363c07c936b5100d2907166b2cf1c4e495ba0eeee41440793845d6bd7a" protocol=ttrpc version=3 Oct 13 06:52:30.877308 systemd[1]: Started cri-containerd-13aa37a7f1232f9b02354829364e77f3288a5e303ab056f2abc5ac52cc77ea72.scope - libcontainer container 13aa37a7f1232f9b02354829364e77f3288a5e303ab056f2abc5ac52cc77ea72. 
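Taken together, the coredns entries above walk the CRI lifecycle kubelet drives through containerd: RunPodSandbox (where the Calico CNI ADD runs and the sandbox id is returned), then CreateContainer inside that sandbox, then StartContainer. A rough sketch of the same call sequence against a CRI socket is shown below; the socket path, metadata, and image are illustrative placeholders, and the request configs are trimmed far below what kubelet actually sends, so treat it as an outline of the API shape rather than a working pod launcher.

    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        criv1 "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // containerd serves the CRI services on its main socket.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := criv1.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // Placeholder sandbox config (kubelet fills in DNS, cgroups, security context, and more).
        sandboxCfg := &criv1.PodSandboxConfig{
            Metadata: &criv1.PodSandboxMetadata{Name: "example", Namespace: "default", Uid: "example-uid"},
        }

        // 1. RunPodSandbox: the step where the CNI ADD happens, and where the earlier
        //    "failed to setup network for sandbox" errors in this log surfaced.
        sb, err := rt.RunPodSandbox(ctx, &criv1.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            log.Fatal(err)
        }

        // 2. CreateContainer inside the returned sandbox.
        c, err := rt.CreateContainer(ctx, &criv1.CreateContainerRequest{
            PodSandboxId:  sb.PodSandboxId,
            SandboxConfig: sandboxCfg,
            Config: &criv1.ContainerConfig{
                Metadata: &criv1.ContainerMetadata{Name: "app"},
                Image:    &criv1.ImageSpec{Image: "docker.io/library/busybox:latest"},
            },
        })
        if err != nil {
            log.Fatal(err)
        }

        // 3. StartContainer, matching the "StartContainer ... returns successfully" entries above.
        if _, err := rt.StartContainer(ctx, &criv1.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
            log.Fatal(err)
        }
    }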
Oct 13 06:52:30.919885 containerd[1656]: time="2025-10-13T06:52:30.919367127Z" level=info msg="StartContainer for \"13aa37a7f1232f9b02354829364e77f3288a5e303ab056f2abc5ac52cc77ea72\" returns successfully" Oct 13 06:52:30.960612 systemd-networkd[1575]: vxlan.calico: Link UP Oct 13 06:52:30.960619 systemd-networkd[1575]: vxlan.calico: Gained carrier Oct 13 06:52:31.049205 systemd-networkd[1575]: calib345960ac53: Gained IPv6LL Oct 13 06:52:31.445499 containerd[1656]: time="2025-10-13T06:52:31.445125520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-649445cb55-nkpkd,Uid:f857710e-a3a5-4070-a288-09c149e7c12b,Namespace:calico-system,Attempt:0,}" Oct 13 06:52:31.614599 systemd-networkd[1575]: calid558409963d: Link UP Oct 13 06:52:31.614807 systemd-networkd[1575]: calid558409963d: Gained carrier Oct 13 06:52:31.637591 containerd[1656]: 2025-10-13 06:52:31.508 [INFO][4499] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--ntuey.gb1.brightbox.com-k8s-calico--kube--controllers--649445cb55--nkpkd-eth0 calico-kube-controllers-649445cb55- calico-system f857710e-a3a5-4070-a288-09c149e7c12b 864 0 2025-10-13 06:52:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:649445cb55 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-ntuey.gb1.brightbox.com calico-kube-controllers-649445cb55-nkpkd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid558409963d [] [] }} ContainerID="843b0022ca1816409034ab2cb046993f4fcab976f467bfa1a9497c4f79c98494" Namespace="calico-system" Pod="calico-kube-controllers-649445cb55-nkpkd" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-calico--kube--controllers--649445cb55--nkpkd-" Oct 13 06:52:31.637591 containerd[1656]: 2025-10-13 06:52:31.508 [INFO][4499] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="843b0022ca1816409034ab2cb046993f4fcab976f467bfa1a9497c4f79c98494" Namespace="calico-system" Pod="calico-kube-controllers-649445cb55-nkpkd" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-calico--kube--controllers--649445cb55--nkpkd-eth0" Oct 13 06:52:31.637591 containerd[1656]: 2025-10-13 06:52:31.565 [INFO][4510] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="843b0022ca1816409034ab2cb046993f4fcab976f467bfa1a9497c4f79c98494" HandleID="k8s-pod-network.843b0022ca1816409034ab2cb046993f4fcab976f467bfa1a9497c4f79c98494" Workload="srv--ntuey.gb1.brightbox.com-k8s-calico--kube--controllers--649445cb55--nkpkd-eth0" Oct 13 06:52:31.637591 containerd[1656]: 2025-10-13 06:52:31.566 [INFO][4510] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="843b0022ca1816409034ab2cb046993f4fcab976f467bfa1a9497c4f79c98494" HandleID="k8s-pod-network.843b0022ca1816409034ab2cb046993f4fcab976f467bfa1a9497c4f79c98494" Workload="srv--ntuey.gb1.brightbox.com-k8s-calico--kube--controllers--649445cb55--nkpkd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000259610), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-ntuey.gb1.brightbox.com", "pod":"calico-kube-controllers-649445cb55-nkpkd", "timestamp":"2025-10-13 06:52:31.56537658 +0000 UTC"}, Hostname:"srv-ntuey.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 06:52:31.637591 containerd[1656]: 2025-10-13 06:52:31.566 [INFO][4510] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 06:52:31.637591 containerd[1656]: 2025-10-13 06:52:31.566 [INFO][4510] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 06:52:31.637591 containerd[1656]: 2025-10-13 06:52:31.566 [INFO][4510] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-ntuey.gb1.brightbox.com' Oct 13 06:52:31.637591 containerd[1656]: 2025-10-13 06:52:31.575 [INFO][4510] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.843b0022ca1816409034ab2cb046993f4fcab976f467bfa1a9497c4f79c98494" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:31.637591 containerd[1656]: 2025-10-13 06:52:31.581 [INFO][4510] ipam/ipam.go 394: Looking up existing affinities for host host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:31.637591 containerd[1656]: 2025-10-13 06:52:31.586 [INFO][4510] ipam/ipam.go 511: Trying affinity for 192.168.100.192/26 host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:31.637591 containerd[1656]: 2025-10-13 06:52:31.588 [INFO][4510] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.192/26 host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:31.637591 containerd[1656]: 2025-10-13 06:52:31.593 [INFO][4510] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:31.637591 containerd[1656]: 2025-10-13 06:52:31.593 [INFO][4510] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.843b0022ca1816409034ab2cb046993f4fcab976f467bfa1a9497c4f79c98494" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:31.637591 containerd[1656]: 2025-10-13 06:52:31.595 [INFO][4510] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.843b0022ca1816409034ab2cb046993f4fcab976f467bfa1a9497c4f79c98494 Oct 13 06:52:31.637591 containerd[1656]: 2025-10-13 06:52:31.601 [INFO][4510] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.843b0022ca1816409034ab2cb046993f4fcab976f467bfa1a9497c4f79c98494" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:31.637591 containerd[1656]: 2025-10-13 06:52:31.607 [INFO][4510] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.100.195/26] block=192.168.100.192/26 handle="k8s-pod-network.843b0022ca1816409034ab2cb046993f4fcab976f467bfa1a9497c4f79c98494" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:31.637591 containerd[1656]: 2025-10-13 06:52:31.607 [INFO][4510] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.195/26] handle="k8s-pod-network.843b0022ca1816409034ab2cb046993f4fcab976f467bfa1a9497c4f79c98494" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:31.637591 containerd[1656]: 2025-10-13 06:52:31.607 [INFO][4510] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 06:52:31.637591 containerd[1656]: 2025-10-13 06:52:31.607 [INFO][4510] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.195/26] IPv6=[] ContainerID="843b0022ca1816409034ab2cb046993f4fcab976f467bfa1a9497c4f79c98494" HandleID="k8s-pod-network.843b0022ca1816409034ab2cb046993f4fcab976f467bfa1a9497c4f79c98494" Workload="srv--ntuey.gb1.brightbox.com-k8s-calico--kube--controllers--649445cb55--nkpkd-eth0" Oct 13 06:52:31.639007 containerd[1656]: 2025-10-13 06:52:31.610 [INFO][4499] cni-plugin/k8s.go 418: Populated endpoint ContainerID="843b0022ca1816409034ab2cb046993f4fcab976f467bfa1a9497c4f79c98494" Namespace="calico-system" Pod="calico-kube-controllers-649445cb55-nkpkd" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-calico--kube--controllers--649445cb55--nkpkd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ntuey.gb1.brightbox.com-k8s-calico--kube--controllers--649445cb55--nkpkd-eth0", GenerateName:"calico-kube-controllers-649445cb55-", Namespace:"calico-system", SelfLink:"", UID:"f857710e-a3a5-4070-a288-09c149e7c12b", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 6, 52, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"649445cb55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ntuey.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-649445cb55-nkpkd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.100.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid558409963d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 06:52:31.639007 containerd[1656]: 2025-10-13 06:52:31.610 [INFO][4499] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.195/32] ContainerID="843b0022ca1816409034ab2cb046993f4fcab976f467bfa1a9497c4f79c98494" Namespace="calico-system" Pod="calico-kube-controllers-649445cb55-nkpkd" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-calico--kube--controllers--649445cb55--nkpkd-eth0" Oct 13 06:52:31.639007 containerd[1656]: 2025-10-13 06:52:31.610 [INFO][4499] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid558409963d ContainerID="843b0022ca1816409034ab2cb046993f4fcab976f467bfa1a9497c4f79c98494" Namespace="calico-system" Pod="calico-kube-controllers-649445cb55-nkpkd" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-calico--kube--controllers--649445cb55--nkpkd-eth0" Oct 13 06:52:31.639007 containerd[1656]: 2025-10-13 06:52:31.614 [INFO][4499] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="843b0022ca1816409034ab2cb046993f4fcab976f467bfa1a9497c4f79c98494" Namespace="calico-system" Pod="calico-kube-controllers-649445cb55-nkpkd" 
WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-calico--kube--controllers--649445cb55--nkpkd-eth0" Oct 13 06:52:31.639007 containerd[1656]: 2025-10-13 06:52:31.615 [INFO][4499] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="843b0022ca1816409034ab2cb046993f4fcab976f467bfa1a9497c4f79c98494" Namespace="calico-system" Pod="calico-kube-controllers-649445cb55-nkpkd" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-calico--kube--controllers--649445cb55--nkpkd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ntuey.gb1.brightbox.com-k8s-calico--kube--controllers--649445cb55--nkpkd-eth0", GenerateName:"calico-kube-controllers-649445cb55-", Namespace:"calico-system", SelfLink:"", UID:"f857710e-a3a5-4070-a288-09c149e7c12b", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 6, 52, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"649445cb55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ntuey.gb1.brightbox.com", ContainerID:"843b0022ca1816409034ab2cb046993f4fcab976f467bfa1a9497c4f79c98494", Pod:"calico-kube-controllers-649445cb55-nkpkd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.100.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid558409963d", MAC:"9a:88:26:d0:83:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 06:52:31.639007 containerd[1656]: 2025-10-13 06:52:31.629 [INFO][4499] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="843b0022ca1816409034ab2cb046993f4fcab976f467bfa1a9497c4f79c98494" Namespace="calico-system" Pod="calico-kube-controllers-649445cb55-nkpkd" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-calico--kube--controllers--649445cb55--nkpkd-eth0" Oct 13 06:52:31.665001 containerd[1656]: time="2025-10-13T06:52:31.664954186Z" level=info msg="connecting to shim 843b0022ca1816409034ab2cb046993f4fcab976f467bfa1a9497c4f79c98494" address="unix:///run/containerd/s/e2eb54236029d61240f495c78cf3a5339b187c99af4779cd7e1faf7994cf69b0" namespace=k8s.io protocol=ttrpc version=3 Oct 13 06:52:31.703382 systemd[1]: Started cri-containerd-843b0022ca1816409034ab2cb046993f4fcab976f467bfa1a9497c4f79c98494.scope - libcontainer container 843b0022ca1816409034ab2cb046993f4fcab976f467bfa1a9497c4f79c98494. 
Oct 13 06:52:31.789883 containerd[1656]: time="2025-10-13T06:52:31.789810306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-649445cb55-nkpkd,Uid:f857710e-a3a5-4070-a288-09c149e7c12b,Namespace:calico-system,Attempt:0,} returns sandbox id \"843b0022ca1816409034ab2cb046993f4fcab976f467bfa1a9497c4f79c98494\"" Oct 13 06:52:31.815409 systemd-networkd[1575]: cali26925f29954: Gained IPv6LL Oct 13 06:52:31.879848 kubelet[2977]: I1013 06:52:31.877443 2977 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-pb7zf" podStartSLOduration=41.877404596 podStartE2EDuration="41.877404596s" podCreationTimestamp="2025-10-13 06:51:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:52:31.876051929 +0000 UTC m=+46.726433466" watchObservedRunningTime="2025-10-13 06:52:31.877404596 +0000 UTC m=+46.727786145" Oct 13 06:52:32.135289 systemd-networkd[1575]: vxlan.calico: Gained IPv6LL Oct 13 06:52:32.446928 containerd[1656]: time="2025-10-13T06:52:32.445950027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vjdvq,Uid:917edfcb-100c-4720-b0de-c02da5d69423,Namespace:calico-system,Attempt:0,}" Oct 13 06:52:32.446928 containerd[1656]: time="2025-10-13T06:52:32.446240075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rx6ng,Uid:13ea8fb3-c2c1-4c48-a5c9-318a743690ae,Namespace:kube-system,Attempt:0,}" Oct 13 06:52:32.606340 containerd[1656]: time="2025-10-13T06:52:32.606265599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=102695" Oct 13 06:52:32.609026 containerd[1656]: time="2025-10-13T06:52:32.608098870Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.3\": failed to copy: read tcp [2a02:1348:17d:1773:24:19ff:fef4:5dce]:53700->[2606:50c0:8001::154]:443: read: connection reset by peer" Oct 13 06:52:32.613772 kubelet[2977]: E1013 06:52:32.613626 2977 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.3\": failed to copy: read tcp [2a02:1348:17d:1773:24:19ff:fef4:5dce]:53700->[2606:50c0:8001::154]:443: read: connection reset by peer" image="ghcr.io/flatcar/calico/whisker:v3.30.3" Oct 13 06:52:32.613883 kubelet[2977]: E1013 06:52:32.613802 2977 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.3\": failed to copy: read tcp [2a02:1348:17d:1773:24:19ff:fef4:5dce]:53700->[2606:50c0:8001::154]:443: read: connection reset by peer" image="ghcr.io/flatcar/calico/whisker:v3.30.3" Oct 13 06:52:32.615114 containerd[1656]: time="2025-10-13T06:52:32.614858193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Oct 13 06:52:32.631067 kubelet[2977]: E1013 06:52:32.630560 2977 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.3,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.3,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:18d362b67f5744dc9f338997e8c95323,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-prrpc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-57dc657b48-v4cqs_calico-system(5160b85a-ad83-4639-aa51-0d5bd2f1f636): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.3\": failed to copy: read tcp [2a02:1348:17d:1773:24:19ff:fef4:5dce]:53700->[2606:50c0:8001::154]:443: read: connection reset by peer" logger="UnhandledError" Oct 13 06:52:32.734411 systemd-networkd[1575]: cali28f0caae1c9: Link UP Oct 13 06:52:32.735696 systemd-networkd[1575]: cali28f0caae1c9: Gained carrier Oct 13 06:52:32.757088 containerd[1656]: 2025-10-13 06:52:32.548 [INFO][4580] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--ntuey.gb1.brightbox.com-k8s-csi--node--driver--vjdvq-eth0 csi-node-driver- calico-system 917edfcb-100c-4720-b0de-c02da5d69423 743 0 2025-10-13 06:52:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-ntuey.gb1.brightbox.com csi-node-driver-vjdvq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali28f0caae1c9 [] [] }} ContainerID="07eb98edf0bb7fb7038743d89945beafed653bf81585e67ecf51a7e4e1c88803" Namespace="calico-system" Pod="csi-node-driver-vjdvq" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-csi--node--driver--vjdvq-" Oct 13 06:52:32.757088 containerd[1656]: 2025-10-13 06:52:32.549 [INFO][4580] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="07eb98edf0bb7fb7038743d89945beafed653bf81585e67ecf51a7e4e1c88803" Namespace="calico-system" Pod="csi-node-driver-vjdvq" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-csi--node--driver--vjdvq-eth0" Oct 13 06:52:32.757088 containerd[1656]: 2025-10-13 06:52:32.666 [INFO][4601] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07eb98edf0bb7fb7038743d89945beafed653bf81585e67ecf51a7e4e1c88803" 
HandleID="k8s-pod-network.07eb98edf0bb7fb7038743d89945beafed653bf81585e67ecf51a7e4e1c88803" Workload="srv--ntuey.gb1.brightbox.com-k8s-csi--node--driver--vjdvq-eth0" Oct 13 06:52:32.757088 containerd[1656]: 2025-10-13 06:52:32.666 [INFO][4601] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="07eb98edf0bb7fb7038743d89945beafed653bf81585e67ecf51a7e4e1c88803" HandleID="k8s-pod-network.07eb98edf0bb7fb7038743d89945beafed653bf81585e67ecf51a7e4e1c88803" Workload="srv--ntuey.gb1.brightbox.com-k8s-csi--node--driver--vjdvq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332140), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-ntuey.gb1.brightbox.com", "pod":"csi-node-driver-vjdvq", "timestamp":"2025-10-13 06:52:32.666185268 +0000 UTC"}, Hostname:"srv-ntuey.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 06:52:32.757088 containerd[1656]: 2025-10-13 06:52:32.666 [INFO][4601] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 06:52:32.757088 containerd[1656]: 2025-10-13 06:52:32.666 [INFO][4601] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 06:52:32.757088 containerd[1656]: 2025-10-13 06:52:32.666 [INFO][4601] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-ntuey.gb1.brightbox.com' Oct 13 06:52:32.757088 containerd[1656]: 2025-10-13 06:52:32.679 [INFO][4601] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.07eb98edf0bb7fb7038743d89945beafed653bf81585e67ecf51a7e4e1c88803" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:32.757088 containerd[1656]: 2025-10-13 06:52:32.694 [INFO][4601] ipam/ipam.go 394: Looking up existing affinities for host host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:32.757088 containerd[1656]: 2025-10-13 06:52:32.700 [INFO][4601] ipam/ipam.go 511: Trying affinity for 192.168.100.192/26 host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:32.757088 containerd[1656]: 2025-10-13 06:52:32.703 [INFO][4601] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.192/26 host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:32.757088 containerd[1656]: 2025-10-13 06:52:32.705 [INFO][4601] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:32.757088 containerd[1656]: 2025-10-13 06:52:32.705 [INFO][4601] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.07eb98edf0bb7fb7038743d89945beafed653bf81585e67ecf51a7e4e1c88803" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:32.757088 containerd[1656]: 2025-10-13 06:52:32.707 [INFO][4601] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.07eb98edf0bb7fb7038743d89945beafed653bf81585e67ecf51a7e4e1c88803 Oct 13 06:52:32.757088 containerd[1656]: 2025-10-13 06:52:32.713 [INFO][4601] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.07eb98edf0bb7fb7038743d89945beafed653bf81585e67ecf51a7e4e1c88803" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:32.757088 containerd[1656]: 2025-10-13 06:52:32.720 [INFO][4601] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.100.196/26] block=192.168.100.192/26 handle="k8s-pod-network.07eb98edf0bb7fb7038743d89945beafed653bf81585e67ecf51a7e4e1c88803" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:32.757088 
containerd[1656]: 2025-10-13 06:52:32.720 [INFO][4601] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.196/26] handle="k8s-pod-network.07eb98edf0bb7fb7038743d89945beafed653bf81585e67ecf51a7e4e1c88803" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:32.757088 containerd[1656]: 2025-10-13 06:52:32.720 [INFO][4601] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 06:52:32.757088 containerd[1656]: 2025-10-13 06:52:32.720 [INFO][4601] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.196/26] IPv6=[] ContainerID="07eb98edf0bb7fb7038743d89945beafed653bf81585e67ecf51a7e4e1c88803" HandleID="k8s-pod-network.07eb98edf0bb7fb7038743d89945beafed653bf81585e67ecf51a7e4e1c88803" Workload="srv--ntuey.gb1.brightbox.com-k8s-csi--node--driver--vjdvq-eth0" Oct 13 06:52:32.759791 containerd[1656]: 2025-10-13 06:52:32.725 [INFO][4580] cni-plugin/k8s.go 418: Populated endpoint ContainerID="07eb98edf0bb7fb7038743d89945beafed653bf81585e67ecf51a7e4e1c88803" Namespace="calico-system" Pod="csi-node-driver-vjdvq" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-csi--node--driver--vjdvq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ntuey.gb1.brightbox.com-k8s-csi--node--driver--vjdvq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"917edfcb-100c-4720-b0de-c02da5d69423", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 6, 52, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ntuey.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-vjdvq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.100.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali28f0caae1c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 06:52:32.759791 containerd[1656]: 2025-10-13 06:52:32.725 [INFO][4580] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.196/32] ContainerID="07eb98edf0bb7fb7038743d89945beafed653bf81585e67ecf51a7e4e1c88803" Namespace="calico-system" Pod="csi-node-driver-vjdvq" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-csi--node--driver--vjdvq-eth0" Oct 13 06:52:32.759791 containerd[1656]: 2025-10-13 06:52:32.725 [INFO][4580] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28f0caae1c9 ContainerID="07eb98edf0bb7fb7038743d89945beafed653bf81585e67ecf51a7e4e1c88803" Namespace="calico-system" Pod="csi-node-driver-vjdvq" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-csi--node--driver--vjdvq-eth0" Oct 13 06:52:32.759791 containerd[1656]: 2025-10-13 06:52:32.736 [INFO][4580] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="07eb98edf0bb7fb7038743d89945beafed653bf81585e67ecf51a7e4e1c88803" Namespace="calico-system" Pod="csi-node-driver-vjdvq" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-csi--node--driver--vjdvq-eth0" Oct 13 06:52:32.759791 containerd[1656]: 2025-10-13 06:52:32.737 [INFO][4580] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="07eb98edf0bb7fb7038743d89945beafed653bf81585e67ecf51a7e4e1c88803" Namespace="calico-system" Pod="csi-node-driver-vjdvq" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-csi--node--driver--vjdvq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ntuey.gb1.brightbox.com-k8s-csi--node--driver--vjdvq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"917edfcb-100c-4720-b0de-c02da5d69423", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 6, 52, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ntuey.gb1.brightbox.com", ContainerID:"07eb98edf0bb7fb7038743d89945beafed653bf81585e67ecf51a7e4e1c88803", Pod:"csi-node-driver-vjdvq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.100.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali28f0caae1c9", MAC:"e6:ac:c8:45:e0:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 06:52:32.759791 containerd[1656]: 2025-10-13 06:52:32.754 [INFO][4580] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="07eb98edf0bb7fb7038743d89945beafed653bf81585e67ecf51a7e4e1c88803" Namespace="calico-system" Pod="csi-node-driver-vjdvq" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-csi--node--driver--vjdvq-eth0" Oct 13 06:52:32.795752 containerd[1656]: time="2025-10-13T06:52:32.794701522Z" level=info msg="connecting to shim 07eb98edf0bb7fb7038743d89945beafed653bf81585e67ecf51a7e4e1c88803" address="unix:///run/containerd/s/cdf604c2a18253300cc95555e76c487e01cae0783f3a19d4a3ad6920349d665e" namespace=k8s.io protocol=ttrpc version=3 Oct 13 06:52:32.861502 systemd[1]: Started cri-containerd-07eb98edf0bb7fb7038743d89945beafed653bf81585e67ecf51a7e4e1c88803.scope - libcontainer container 07eb98edf0bb7fb7038743d89945beafed653bf81585e67ecf51a7e4e1c88803. 
Oct 13 06:52:32.868243 systemd-networkd[1575]: cali2ed293e6ca4: Link UP Oct 13 06:52:32.869915 systemd-networkd[1575]: cali2ed293e6ca4: Gained carrier Oct 13 06:52:32.901779 containerd[1656]: 2025-10-13 06:52:32.560 [INFO][4578] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--ntuey.gb1.brightbox.com-k8s-coredns--674b8bbfcf--rx6ng-eth0 coredns-674b8bbfcf- kube-system 13ea8fb3-c2c1-4c48-a5c9-318a743690ae 868 0 2025-10-13 06:51:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-ntuey.gb1.brightbox.com coredns-674b8bbfcf-rx6ng eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2ed293e6ca4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="72bba68650491fbaf4d7fcaa7e8a1bcaffd3ad57e8e144a4f53d16397927ab9c" Namespace="kube-system" Pod="coredns-674b8bbfcf-rx6ng" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-coredns--674b8bbfcf--rx6ng-" Oct 13 06:52:32.901779 containerd[1656]: 2025-10-13 06:52:32.561 [INFO][4578] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="72bba68650491fbaf4d7fcaa7e8a1bcaffd3ad57e8e144a4f53d16397927ab9c" Namespace="kube-system" Pod="coredns-674b8bbfcf-rx6ng" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-coredns--674b8bbfcf--rx6ng-eth0" Oct 13 06:52:32.901779 containerd[1656]: 2025-10-13 06:52:32.665 [INFO][4606] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="72bba68650491fbaf4d7fcaa7e8a1bcaffd3ad57e8e144a4f53d16397927ab9c" HandleID="k8s-pod-network.72bba68650491fbaf4d7fcaa7e8a1bcaffd3ad57e8e144a4f53d16397927ab9c" Workload="srv--ntuey.gb1.brightbox.com-k8s-coredns--674b8bbfcf--rx6ng-eth0" Oct 13 06:52:32.901779 containerd[1656]: 2025-10-13 06:52:32.668 [INFO][4606] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="72bba68650491fbaf4d7fcaa7e8a1bcaffd3ad57e8e144a4f53d16397927ab9c" HandleID="k8s-pod-network.72bba68650491fbaf4d7fcaa7e8a1bcaffd3ad57e8e144a4f53d16397927ab9c" Workload="srv--ntuey.gb1.brightbox.com-k8s-coredns--674b8bbfcf--rx6ng-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df030), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-ntuey.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-rx6ng", "timestamp":"2025-10-13 06:52:32.665573349 +0000 UTC"}, Hostname:"srv-ntuey.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 06:52:32.901779 containerd[1656]: 2025-10-13 06:52:32.668 [INFO][4606] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 06:52:32.901779 containerd[1656]: 2025-10-13 06:52:32.721 [INFO][4606] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 06:52:32.901779 containerd[1656]: 2025-10-13 06:52:32.721 [INFO][4606] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-ntuey.gb1.brightbox.com' Oct 13 06:52:32.901779 containerd[1656]: 2025-10-13 06:52:32.777 [INFO][4606] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.72bba68650491fbaf4d7fcaa7e8a1bcaffd3ad57e8e144a4f53d16397927ab9c" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:32.901779 containerd[1656]: 2025-10-13 06:52:32.798 [INFO][4606] ipam/ipam.go 394: Looking up existing affinities for host host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:32.901779 containerd[1656]: 2025-10-13 06:52:32.810 [INFO][4606] ipam/ipam.go 511: Trying affinity for 192.168.100.192/26 host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:32.901779 containerd[1656]: 2025-10-13 06:52:32.813 [INFO][4606] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.192/26 host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:32.901779 containerd[1656]: 2025-10-13 06:52:32.816 [INFO][4606] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:32.901779 containerd[1656]: 2025-10-13 06:52:32.816 [INFO][4606] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.72bba68650491fbaf4d7fcaa7e8a1bcaffd3ad57e8e144a4f53d16397927ab9c" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:32.901779 containerd[1656]: 2025-10-13 06:52:32.818 [INFO][4606] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.72bba68650491fbaf4d7fcaa7e8a1bcaffd3ad57e8e144a4f53d16397927ab9c Oct 13 06:52:32.901779 containerd[1656]: 2025-10-13 06:52:32.832 [INFO][4606] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.72bba68650491fbaf4d7fcaa7e8a1bcaffd3ad57e8e144a4f53d16397927ab9c" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:32.901779 containerd[1656]: 2025-10-13 06:52:32.850 [INFO][4606] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.100.197/26] block=192.168.100.192/26 handle="k8s-pod-network.72bba68650491fbaf4d7fcaa7e8a1bcaffd3ad57e8e144a4f53d16397927ab9c" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:32.901779 containerd[1656]: 2025-10-13 06:52:32.850 [INFO][4606] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.197/26] handle="k8s-pod-network.72bba68650491fbaf4d7fcaa7e8a1bcaffd3ad57e8e144a4f53d16397927ab9c" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:32.901779 containerd[1656]: 2025-10-13 06:52:32.850 [INFO][4606] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
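[editor's note] Each assignment above is bracketed by "Acquired host-wide IPAM lock" and "Released host-wide IPAM lock", so concurrent CNI ADDs on the node cannot hand out the same address. The toy sketch below illustrates that serialization discipline with a mutex-guarded bitmap over a /26 block; it is an illustration of the locking pattern, not Calico's datastore-backed implementation.

package main

import (
	"fmt"
	"sync"
)

// blockAllocator hands out ordinals 0..63 of a /26 block. The mutex plays the
// role of the host-wide IPAM lock: only one assignment runs at a time.
type blockAllocator struct {
	mu   sync.Mutex
	used [64]bool
}

func (b *blockAllocator) assign() (int, bool) {
	b.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer b.mu.Unlock() // "Released host-wide IPAM lock."
	for i, inUse := range b.used {
		if !inUse {
			b.used[i] = true
			return i, true
		}
	}
	return 0, false // block exhausted
}

func main() {
	var alloc blockAllocator
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ { // four concurrent "CNI ADD" calls
		wg.Add(1)
		go func() {
			defer wg.Done()
			if ord, ok := alloc.assign(); ok {
				fmt.Printf("assigned ordinal %d -> 192.168.100.%d\n", ord, 192+ord)
			}
		}()
	}
	wg.Wait()
}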
Oct 13 06:52:32.901779 containerd[1656]: 2025-10-13 06:52:32.850 [INFO][4606] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.197/26] IPv6=[] ContainerID="72bba68650491fbaf4d7fcaa7e8a1bcaffd3ad57e8e144a4f53d16397927ab9c" HandleID="k8s-pod-network.72bba68650491fbaf4d7fcaa7e8a1bcaffd3ad57e8e144a4f53d16397927ab9c" Workload="srv--ntuey.gb1.brightbox.com-k8s-coredns--674b8bbfcf--rx6ng-eth0" Oct 13 06:52:32.903904 containerd[1656]: 2025-10-13 06:52:32.858 [INFO][4578] cni-plugin/k8s.go 418: Populated endpoint ContainerID="72bba68650491fbaf4d7fcaa7e8a1bcaffd3ad57e8e144a4f53d16397927ab9c" Namespace="kube-system" Pod="coredns-674b8bbfcf-rx6ng" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-coredns--674b8bbfcf--rx6ng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ntuey.gb1.brightbox.com-k8s-coredns--674b8bbfcf--rx6ng-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"13ea8fb3-c2c1-4c48-a5c9-318a743690ae", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 6, 51, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ntuey.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-rx6ng", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ed293e6ca4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 06:52:32.903904 containerd[1656]: 2025-10-13 06:52:32.858 [INFO][4578] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.197/32] ContainerID="72bba68650491fbaf4d7fcaa7e8a1bcaffd3ad57e8e144a4f53d16397927ab9c" Namespace="kube-system" Pod="coredns-674b8bbfcf-rx6ng" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-coredns--674b8bbfcf--rx6ng-eth0" Oct 13 06:52:32.903904 containerd[1656]: 2025-10-13 06:52:32.858 [INFO][4578] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ed293e6ca4 ContainerID="72bba68650491fbaf4d7fcaa7e8a1bcaffd3ad57e8e144a4f53d16397927ab9c" Namespace="kube-system" Pod="coredns-674b8bbfcf-rx6ng" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-coredns--674b8bbfcf--rx6ng-eth0" Oct 13 06:52:32.903904 containerd[1656]: 2025-10-13 06:52:32.871 [INFO][4578] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="72bba68650491fbaf4d7fcaa7e8a1bcaffd3ad57e8e144a4f53d16397927ab9c" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-rx6ng" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-coredns--674b8bbfcf--rx6ng-eth0" Oct 13 06:52:32.903904 containerd[1656]: 2025-10-13 06:52:32.873 [INFO][4578] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="72bba68650491fbaf4d7fcaa7e8a1bcaffd3ad57e8e144a4f53d16397927ab9c" Namespace="kube-system" Pod="coredns-674b8bbfcf-rx6ng" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-coredns--674b8bbfcf--rx6ng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ntuey.gb1.brightbox.com-k8s-coredns--674b8bbfcf--rx6ng-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"13ea8fb3-c2c1-4c48-a5c9-318a743690ae", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 6, 51, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ntuey.gb1.brightbox.com", ContainerID:"72bba68650491fbaf4d7fcaa7e8a1bcaffd3ad57e8e144a4f53d16397927ab9c", Pod:"coredns-674b8bbfcf-rx6ng", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.100.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ed293e6ca4", MAC:"9e:99:55:2f:d0:92", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 06:52:32.903904 containerd[1656]: 2025-10-13 06:52:32.891 [INFO][4578] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="72bba68650491fbaf4d7fcaa7e8a1bcaffd3ad57e8e144a4f53d16397927ab9c" Namespace="kube-system" Pod="coredns-674b8bbfcf-rx6ng" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-coredns--674b8bbfcf--rx6ng-eth0" Oct 13 06:52:32.997836 containerd[1656]: time="2025-10-13T06:52:32.997715658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vjdvq,Uid:917edfcb-100c-4720-b0de-c02da5d69423,Namespace:calico-system,Attempt:0,} returns sandbox id \"07eb98edf0bb7fb7038743d89945beafed653bf81585e67ecf51a7e4e1c88803\"" Oct 13 06:52:32.999996 containerd[1656]: time="2025-10-13T06:52:32.999915555Z" level=info msg="connecting to shim 72bba68650491fbaf4d7fcaa7e8a1bcaffd3ad57e8e144a4f53d16397927ab9c" address="unix:///run/containerd/s/43a36a86822d007fe2d2588883bc6eaaae5156dca1b1fa1fcb56c6a8151a6388" namespace=k8s.io protocol=ttrpc version=3 Oct 13 06:52:33.040442 systemd[1]: Started cri-containerd-72bba68650491fbaf4d7fcaa7e8a1bcaffd3ad57e8e144a4f53d16397927ab9c.scope - libcontainer container 
72bba68650491fbaf4d7fcaa7e8a1bcaffd3ad57e8e144a4f53d16397927ab9c. Oct 13 06:52:33.100203 containerd[1656]: time="2025-10-13T06:52:33.100096567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rx6ng,Uid:13ea8fb3-c2c1-4c48-a5c9-318a743690ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"72bba68650491fbaf4d7fcaa7e8a1bcaffd3ad57e8e144a4f53d16397927ab9c\"" Oct 13 06:52:33.106054 containerd[1656]: time="2025-10-13T06:52:33.105294468Z" level=info msg="CreateContainer within sandbox \"72bba68650491fbaf4d7fcaa7e8a1bcaffd3ad57e8e144a4f53d16397927ab9c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 06:52:33.116498 containerd[1656]: time="2025-10-13T06:52:33.116448871Z" level=info msg="Container 7cad2b8324bcacb7385cce7d7fbd939aface56273674e5f0b3f6b4448846a185: CDI devices from CRI Config.CDIDevices: []" Oct 13 06:52:33.123872 containerd[1656]: time="2025-10-13T06:52:33.123841301Z" level=info msg="CreateContainer within sandbox \"72bba68650491fbaf4d7fcaa7e8a1bcaffd3ad57e8e144a4f53d16397927ab9c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7cad2b8324bcacb7385cce7d7fbd939aface56273674e5f0b3f6b4448846a185\"" Oct 13 06:52:33.125596 containerd[1656]: time="2025-10-13T06:52:33.124953088Z" level=info msg="StartContainer for \"7cad2b8324bcacb7385cce7d7fbd939aface56273674e5f0b3f6b4448846a185\"" Oct 13 06:52:33.127136 containerd[1656]: time="2025-10-13T06:52:33.127113664Z" level=info msg="connecting to shim 7cad2b8324bcacb7385cce7d7fbd939aface56273674e5f0b3f6b4448846a185" address="unix:///run/containerd/s/43a36a86822d007fe2d2588883bc6eaaae5156dca1b1fa1fcb56c6a8151a6388" protocol=ttrpc version=3 Oct 13 06:52:33.166588 systemd[1]: Started cri-containerd-7cad2b8324bcacb7385cce7d7fbd939aface56273674e5f0b3f6b4448846a185.scope - libcontainer container 7cad2b8324bcacb7385cce7d7fbd939aface56273674e5f0b3f6b4448846a185. 
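[editor's note] The coredns entries above walk the usual sandbox-then-container sequence: RunPodSandbox returns a sandbox ID, CreateContainer is issued within that sandbox and returns a container ID, and StartContainer is called with it. The sketch below mirrors only that call order against a deliberately minimal, hypothetical interface; the method names follow the log, but the Go types are illustrative and are not the real CRI client.

package main

import "fmt"

// runtimeService is a hypothetical stand-in for a CRI-style runtime client,
// reduced to the three calls visible in the log above.
type runtimeService interface {
	RunPodSandbox(podName string) (sandboxID string, err error)
	CreateContainer(sandboxID, containerName string) (containerID string, err error)
	StartContainer(containerID string) error
}

// startPod shows the call order only: sandbox first, then create, then start.
func startPod(rs runtimeService, podName, containerName string) error {
	sandboxID, err := rs.RunPodSandbox(podName)
	if err != nil {
		return fmt.Errorf("RunPodSandbox: %w", err)
	}
	containerID, err := rs.CreateContainer(sandboxID, containerName)
	if err != nil {
		return fmt.Errorf("CreateContainer: %w", err)
	}
	if err := rs.StartContainer(containerID); err != nil {
		return fmt.Errorf("StartContainer: %w", err)
	}
	return nil
}

// fakeRuntime lets the sketch run without a real runtime behind it.
type fakeRuntime struct{}

func (fakeRuntime) RunPodSandbox(pod string) (string, error) { return "sandbox-" + pod, nil }
func (fakeRuntime) CreateContainer(sb, name string) (string, error) {
	return "ctr-" + name + "-in-" + sb, nil
}
func (fakeRuntime) StartContainer(id string) error { fmt.Println("started", id); return nil }

func main() {
	if err := startPod(fakeRuntime{}, "coredns-674b8bbfcf-rx6ng", "coredns"); err != nil {
		fmt.Println("pod start failed:", err)
	}
}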
Oct 13 06:52:33.222373 systemd-networkd[1575]: calid558409963d: Gained IPv6LL Oct 13 06:52:33.229643 containerd[1656]: time="2025-10-13T06:52:33.229610310Z" level=info msg="StartContainer for \"7cad2b8324bcacb7385cce7d7fbd939aface56273674e5f0b3f6b4448846a185\" returns successfully" Oct 13 06:52:33.446330 containerd[1656]: time="2025-10-13T06:52:33.446063144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589b9ff459-dqwm5,Uid:61a0959f-cd39-468e-89f5-b6905a773190,Namespace:calico-apiserver,Attempt:0,}" Oct 13 06:52:33.628824 systemd-networkd[1575]: cali21b93f5069f: Link UP Oct 13 06:52:33.631067 systemd-networkd[1575]: cali21b93f5069f: Gained carrier Oct 13 06:52:33.654842 containerd[1656]: 2025-10-13 06:52:33.511 [INFO][4759] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--ntuey.gb1.brightbox.com-k8s-calico--apiserver--589b9ff459--dqwm5-eth0 calico-apiserver-589b9ff459- calico-apiserver 61a0959f-cd39-468e-89f5-b6905a773190 866 0 2025-10-13 06:52:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:589b9ff459 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-ntuey.gb1.brightbox.com calico-apiserver-589b9ff459-dqwm5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali21b93f5069f [] [] }} ContainerID="25a93bf3f91e6f9dc3ca03a207028ce2f56837e9261614abd321b54a41132a71" Namespace="calico-apiserver" Pod="calico-apiserver-589b9ff459-dqwm5" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-calico--apiserver--589b9ff459--dqwm5-" Oct 13 06:52:33.654842 containerd[1656]: 2025-10-13 06:52:33.511 [INFO][4759] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="25a93bf3f91e6f9dc3ca03a207028ce2f56837e9261614abd321b54a41132a71" Namespace="calico-apiserver" Pod="calico-apiserver-589b9ff459-dqwm5" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-calico--apiserver--589b9ff459--dqwm5-eth0" Oct 13 06:52:33.654842 containerd[1656]: 2025-10-13 06:52:33.570 [INFO][4770] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="25a93bf3f91e6f9dc3ca03a207028ce2f56837e9261614abd321b54a41132a71" HandleID="k8s-pod-network.25a93bf3f91e6f9dc3ca03a207028ce2f56837e9261614abd321b54a41132a71" Workload="srv--ntuey.gb1.brightbox.com-k8s-calico--apiserver--589b9ff459--dqwm5-eth0" Oct 13 06:52:33.654842 containerd[1656]: 2025-10-13 06:52:33.571 [INFO][4770] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="25a93bf3f91e6f9dc3ca03a207028ce2f56837e9261614abd321b54a41132a71" HandleID="k8s-pod-network.25a93bf3f91e6f9dc3ca03a207028ce2f56837e9261614abd321b54a41132a71" Workload="srv--ntuey.gb1.brightbox.com-k8s-calico--apiserver--589b9ff459--dqwm5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f190), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-ntuey.gb1.brightbox.com", "pod":"calico-apiserver-589b9ff459-dqwm5", "timestamp":"2025-10-13 06:52:33.570807144 +0000 UTC"}, Hostname:"srv-ntuey.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 06:52:33.654842 containerd[1656]: 2025-10-13 06:52:33.571 [INFO][4770] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM 
lock. Oct 13 06:52:33.654842 containerd[1656]: 2025-10-13 06:52:33.571 [INFO][4770] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 06:52:33.654842 containerd[1656]: 2025-10-13 06:52:33.571 [INFO][4770] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-ntuey.gb1.brightbox.com' Oct 13 06:52:33.654842 containerd[1656]: 2025-10-13 06:52:33.580 [INFO][4770] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.25a93bf3f91e6f9dc3ca03a207028ce2f56837e9261614abd321b54a41132a71" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:33.654842 containerd[1656]: 2025-10-13 06:52:33.586 [INFO][4770] ipam/ipam.go 394: Looking up existing affinities for host host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:33.654842 containerd[1656]: 2025-10-13 06:52:33.593 [INFO][4770] ipam/ipam.go 511: Trying affinity for 192.168.100.192/26 host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:33.654842 containerd[1656]: 2025-10-13 06:52:33.594 [INFO][4770] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.192/26 host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:33.654842 containerd[1656]: 2025-10-13 06:52:33.599 [INFO][4770] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:33.654842 containerd[1656]: 2025-10-13 06:52:33.599 [INFO][4770] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.25a93bf3f91e6f9dc3ca03a207028ce2f56837e9261614abd321b54a41132a71" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:33.654842 containerd[1656]: 2025-10-13 06:52:33.603 [INFO][4770] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.25a93bf3f91e6f9dc3ca03a207028ce2f56837e9261614abd321b54a41132a71 Oct 13 06:52:33.654842 containerd[1656]: 2025-10-13 06:52:33.611 [INFO][4770] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.25a93bf3f91e6f9dc3ca03a207028ce2f56837e9261614abd321b54a41132a71" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:33.654842 containerd[1656]: 2025-10-13 06:52:33.619 [INFO][4770] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.100.198/26] block=192.168.100.192/26 handle="k8s-pod-network.25a93bf3f91e6f9dc3ca03a207028ce2f56837e9261614abd321b54a41132a71" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:33.654842 containerd[1656]: 2025-10-13 06:52:33.619 [INFO][4770] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.198/26] handle="k8s-pod-network.25a93bf3f91e6f9dc3ca03a207028ce2f56837e9261614abd321b54a41132a71" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:33.654842 containerd[1656]: 2025-10-13 06:52:33.619 [INFO][4770] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
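[editor's note] The recurring systemd-networkd "Gained IPv6LL" messages report that each new cali* veth picked up an IPv6 link-local address once the link came up. The sketch below shows the classic EUI-64 derivation of such an address from the interface MAC (using the MAC logged for calid558409963d); this is only one possible address-generation mode, and the kernel may be configured to use stable-privacy addresses instead.

package main

import (
	"fmt"
	"net"
)

// linkLocalFromMAC builds the EUI-64 based IPv6 link-local address for a
// 48-bit (6-byte) MAC: flip the universal/local bit and splice ff:fe into
// the middle, under the fe80::/64 prefix.
func linkLocalFromMAC(mac net.HardwareAddr) net.IP {
	ip := make(net.IP, net.IPv6len)
	ip[0], ip[1] = 0xfe, 0x80 // fe80::/64 link-local prefix
	ip[8] = mac[0] ^ 0x02     // invert the universal/local bit
	ip[9], ip[10], ip[11] = mac[1], mac[2], 0xff
	ip[12], ip[13], ip[14], ip[15] = 0xfe, mac[3], mac[4], mac[5]
	return ip
}

func main() {
	// MAC reported for the calico-kube-controllers endpoint above.
	mac, err := net.ParseMAC("9a:88:26:d0:83:0b")
	if err != nil {
		panic(err)
	}
	fmt.Println("EUI-64 link-local:", linkLocalFromMAC(mac))
}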
Oct 13 06:52:33.654842 containerd[1656]: 2025-10-13 06:52:33.619 [INFO][4770] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.198/26] IPv6=[] ContainerID="25a93bf3f91e6f9dc3ca03a207028ce2f56837e9261614abd321b54a41132a71" HandleID="k8s-pod-network.25a93bf3f91e6f9dc3ca03a207028ce2f56837e9261614abd321b54a41132a71" Workload="srv--ntuey.gb1.brightbox.com-k8s-calico--apiserver--589b9ff459--dqwm5-eth0" Oct 13 06:52:33.658056 containerd[1656]: 2025-10-13 06:52:33.623 [INFO][4759] cni-plugin/k8s.go 418: Populated endpoint ContainerID="25a93bf3f91e6f9dc3ca03a207028ce2f56837e9261614abd321b54a41132a71" Namespace="calico-apiserver" Pod="calico-apiserver-589b9ff459-dqwm5" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-calico--apiserver--589b9ff459--dqwm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ntuey.gb1.brightbox.com-k8s-calico--apiserver--589b9ff459--dqwm5-eth0", GenerateName:"calico-apiserver-589b9ff459-", Namespace:"calico-apiserver", SelfLink:"", UID:"61a0959f-cd39-468e-89f5-b6905a773190", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 6, 52, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"589b9ff459", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ntuey.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-589b9ff459-dqwm5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali21b93f5069f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 06:52:33.658056 containerd[1656]: 2025-10-13 06:52:33.623 [INFO][4759] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.198/32] ContainerID="25a93bf3f91e6f9dc3ca03a207028ce2f56837e9261614abd321b54a41132a71" Namespace="calico-apiserver" Pod="calico-apiserver-589b9ff459-dqwm5" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-calico--apiserver--589b9ff459--dqwm5-eth0" Oct 13 06:52:33.658056 containerd[1656]: 2025-10-13 06:52:33.623 [INFO][4759] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali21b93f5069f ContainerID="25a93bf3f91e6f9dc3ca03a207028ce2f56837e9261614abd321b54a41132a71" Namespace="calico-apiserver" Pod="calico-apiserver-589b9ff459-dqwm5" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-calico--apiserver--589b9ff459--dqwm5-eth0" Oct 13 06:52:33.658056 containerd[1656]: 2025-10-13 06:52:33.631 [INFO][4759] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="25a93bf3f91e6f9dc3ca03a207028ce2f56837e9261614abd321b54a41132a71" Namespace="calico-apiserver" Pod="calico-apiserver-589b9ff459-dqwm5" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-calico--apiserver--589b9ff459--dqwm5-eth0" Oct 13 06:52:33.658056 containerd[1656]: 2025-10-13 
06:52:33.632 [INFO][4759] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="25a93bf3f91e6f9dc3ca03a207028ce2f56837e9261614abd321b54a41132a71" Namespace="calico-apiserver" Pod="calico-apiserver-589b9ff459-dqwm5" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-calico--apiserver--589b9ff459--dqwm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ntuey.gb1.brightbox.com-k8s-calico--apiserver--589b9ff459--dqwm5-eth0", GenerateName:"calico-apiserver-589b9ff459-", Namespace:"calico-apiserver", SelfLink:"", UID:"61a0959f-cd39-468e-89f5-b6905a773190", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 6, 52, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"589b9ff459", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ntuey.gb1.brightbox.com", ContainerID:"25a93bf3f91e6f9dc3ca03a207028ce2f56837e9261614abd321b54a41132a71", Pod:"calico-apiserver-589b9ff459-dqwm5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali21b93f5069f", MAC:"3e:3f:82:52:b0:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 06:52:33.658056 containerd[1656]: 2025-10-13 06:52:33.649 [INFO][4759] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="25a93bf3f91e6f9dc3ca03a207028ce2f56837e9261614abd321b54a41132a71" Namespace="calico-apiserver" Pod="calico-apiserver-589b9ff459-dqwm5" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-calico--apiserver--589b9ff459--dqwm5-eth0" Oct 13 06:52:33.681734 containerd[1656]: time="2025-10-13T06:52:33.681684201Z" level=info msg="connecting to shim 25a93bf3f91e6f9dc3ca03a207028ce2f56837e9261614abd321b54a41132a71" address="unix:///run/containerd/s/ba74087dd1399b1739be9216ab72182dac6c3ea942db85c3b39d6101a6de82c3" namespace=k8s.io protocol=ttrpc version=3 Oct 13 06:52:33.712324 systemd[1]: Started cri-containerd-25a93bf3f91e6f9dc3ca03a207028ce2f56837e9261614abd321b54a41132a71.scope - libcontainer container 25a93bf3f91e6f9dc3ca03a207028ce2f56837e9261614abd321b54a41132a71. Oct 13 06:52:33.771903 containerd[1656]: time="2025-10-13T06:52:33.771777920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589b9ff459-dqwm5,Uid:61a0959f-cd39-468e-89f5-b6905a773190,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"25a93bf3f91e6f9dc3ca03a207028ce2f56837e9261614abd321b54a41132a71\"" Oct 13 06:52:33.782072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount460982277.mount: Deactivated successfully. 
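[editor's note] Each endpoint setup above includes a "Disabling IPv4 forwarding" step on the new host-side interface. As a rough illustration of how a per-interface forwarding toggle is expressed on Linux (the conventional /proc/sys/net/ipv4/conf/<ifname>/forwarding knob), and not as Calico's exact code path, a sketch like the following would write the setting; it needs root and a real interface to succeed.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// disableIPv4Forwarding writes "0" to the per-interface forwarding knob.
// Shown only to illustrate the general mechanism behind the
// "Disabling IPv4 forwarding" log lines above.
func disableIPv4Forwarding(ifName string) error {
	knob := filepath.Join("/proc/sys/net/ipv4/conf", ifName, "forwarding")
	return os.WriteFile(knob, []byte("0\n"), 0o644)
}

func main() {
	if err := disableIPv4Forwarding("cali21b93f5069f"); err != nil {
		fmt.Println("could not disable forwarding:", err)
	}
}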
Oct 13 06:52:33.884108 kubelet[2977]: I1013 06:52:33.883898 2977 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rx6ng" podStartSLOduration=43.883876533 podStartE2EDuration="43.883876533s" podCreationTimestamp="2025-10-13 06:51:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 06:52:33.868682987 +0000 UTC m=+48.719064641" watchObservedRunningTime="2025-10-13 06:52:33.883876533 +0000 UTC m=+48.734258091" Oct 13 06:52:34.374980 systemd-networkd[1575]: cali28f0caae1c9: Gained IPv6LL Oct 13 06:52:34.445899 containerd[1656]: time="2025-10-13T06:52:34.445453176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-nz6q2,Uid:126b56cc-c792-434a-81d4-98cb03e7ccc5,Namespace:calico-system,Attempt:0,}" Oct 13 06:52:34.656348 systemd-networkd[1575]: califb0cc284b33: Link UP Oct 13 06:52:34.656966 systemd-networkd[1575]: califb0cc284b33: Gained carrier Oct 13 06:52:34.691724 containerd[1656]: 2025-10-13 06:52:34.530 [INFO][4837] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--ntuey.gb1.brightbox.com-k8s-goldmane--54d579b49d--nz6q2-eth0 goldmane-54d579b49d- calico-system 126b56cc-c792-434a-81d4-98cb03e7ccc5 870 0 2025-10-13 06:52:04 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-ntuey.gb1.brightbox.com goldmane-54d579b49d-nz6q2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] califb0cc284b33 [] [] }} ContainerID="663866e2aada2c863f6e4e9c7f61433864dfafd7c05354c5ae1a6fcdfbdea625" Namespace="calico-system" Pod="goldmane-54d579b49d-nz6q2" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-goldmane--54d579b49d--nz6q2-" Oct 13 06:52:34.691724 containerd[1656]: 2025-10-13 06:52:34.530 [INFO][4837] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="663866e2aada2c863f6e4e9c7f61433864dfafd7c05354c5ae1a6fcdfbdea625" Namespace="calico-system" Pod="goldmane-54d579b49d-nz6q2" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-goldmane--54d579b49d--nz6q2-eth0" Oct 13 06:52:34.691724 containerd[1656]: 2025-10-13 06:52:34.571 [INFO][4850] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="663866e2aada2c863f6e4e9c7f61433864dfafd7c05354c5ae1a6fcdfbdea625" HandleID="k8s-pod-network.663866e2aada2c863f6e4e9c7f61433864dfafd7c05354c5ae1a6fcdfbdea625" Workload="srv--ntuey.gb1.brightbox.com-k8s-goldmane--54d579b49d--nz6q2-eth0" Oct 13 06:52:34.691724 containerd[1656]: 2025-10-13 06:52:34.571 [INFO][4850] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="663866e2aada2c863f6e4e9c7f61433864dfafd7c05354c5ae1a6fcdfbdea625" HandleID="k8s-pod-network.663866e2aada2c863f6e4e9c7f61433864dfafd7c05354c5ae1a6fcdfbdea625" Workload="srv--ntuey.gb1.brightbox.com-k8s-goldmane--54d579b49d--nz6q2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5870), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-ntuey.gb1.brightbox.com", "pod":"goldmane-54d579b49d-nz6q2", "timestamp":"2025-10-13 06:52:34.571072919 +0000 UTC"}, Hostname:"srv-ntuey.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 06:52:34.691724 containerd[1656]: 2025-10-13 06:52:34.571 [INFO][4850] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 06:52:34.691724 containerd[1656]: 2025-10-13 06:52:34.571 [INFO][4850] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 06:52:34.691724 containerd[1656]: 2025-10-13 06:52:34.571 [INFO][4850] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-ntuey.gb1.brightbox.com' Oct 13 06:52:34.691724 containerd[1656]: 2025-10-13 06:52:34.589 [INFO][4850] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.663866e2aada2c863f6e4e9c7f61433864dfafd7c05354c5ae1a6fcdfbdea625" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:34.691724 containerd[1656]: 2025-10-13 06:52:34.597 [INFO][4850] ipam/ipam.go 394: Looking up existing affinities for host host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:34.691724 containerd[1656]: 2025-10-13 06:52:34.609 [INFO][4850] ipam/ipam.go 511: Trying affinity for 192.168.100.192/26 host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:34.691724 containerd[1656]: 2025-10-13 06:52:34.615 [INFO][4850] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.192/26 host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:34.691724 containerd[1656]: 2025-10-13 06:52:34.620 [INFO][4850] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:34.691724 containerd[1656]: 2025-10-13 06:52:34.620 [INFO][4850] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.663866e2aada2c863f6e4e9c7f61433864dfafd7c05354c5ae1a6fcdfbdea625" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:34.691724 containerd[1656]: 2025-10-13 06:52:34.624 [INFO][4850] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.663866e2aada2c863f6e4e9c7f61433864dfafd7c05354c5ae1a6fcdfbdea625 Oct 13 06:52:34.691724 containerd[1656]: 2025-10-13 06:52:34.633 [INFO][4850] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.663866e2aada2c863f6e4e9c7f61433864dfafd7c05354c5ae1a6fcdfbdea625" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:34.691724 containerd[1656]: 2025-10-13 06:52:34.645 [INFO][4850] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.100.199/26] block=192.168.100.192/26 handle="k8s-pod-network.663866e2aada2c863f6e4e9c7f61433864dfafd7c05354c5ae1a6fcdfbdea625" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:34.691724 containerd[1656]: 2025-10-13 06:52:34.645 [INFO][4850] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.199/26] handle="k8s-pod-network.663866e2aada2c863f6e4e9c7f61433864dfafd7c05354c5ae1a6fcdfbdea625" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:34.691724 containerd[1656]: 2025-10-13 06:52:34.645 [INFO][4850] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
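[editor's note] The kubelet pod_startup_latency_tracker entries above (for coredns-674b8bbfcf-pb7zf and -rx6ng) report podStartSLOduration, which here works out to the gap between podCreationTimestamp and the observed running time, since the pulling timestamps are zero and no image pull time has to be excluded. A small sketch reproducing that arithmetic from the timestamps in the -rx6ng entry:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the kubelet pod_startup_latency_tracker entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-10-13 06:51:50 +0000 UTC")
	if err != nil {
		panic(err)
	}
	watchObservedRunning, err := time.Parse(layout, "2025-10-13 06:52:33.883876533 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Matches podStartE2EDuration="43.883876533s" in the log.
	fmt.Println("pod start duration:", watchObservedRunning.Sub(created))
}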
Oct 13 06:52:34.691724 containerd[1656]: 2025-10-13 06:52:34.645 [INFO][4850] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.199/26] IPv6=[] ContainerID="663866e2aada2c863f6e4e9c7f61433864dfafd7c05354c5ae1a6fcdfbdea625" HandleID="k8s-pod-network.663866e2aada2c863f6e4e9c7f61433864dfafd7c05354c5ae1a6fcdfbdea625" Workload="srv--ntuey.gb1.brightbox.com-k8s-goldmane--54d579b49d--nz6q2-eth0" Oct 13 06:52:34.695480 containerd[1656]: 2025-10-13 06:52:34.651 [INFO][4837] cni-plugin/k8s.go 418: Populated endpoint ContainerID="663866e2aada2c863f6e4e9c7f61433864dfafd7c05354c5ae1a6fcdfbdea625" Namespace="calico-system" Pod="goldmane-54d579b49d-nz6q2" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-goldmane--54d579b49d--nz6q2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ntuey.gb1.brightbox.com-k8s-goldmane--54d579b49d--nz6q2-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"126b56cc-c792-434a-81d4-98cb03e7ccc5", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 6, 52, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ntuey.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-54d579b49d-nz6q2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.100.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califb0cc284b33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 06:52:34.695480 containerd[1656]: 2025-10-13 06:52:34.651 [INFO][4837] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.199/32] ContainerID="663866e2aada2c863f6e4e9c7f61433864dfafd7c05354c5ae1a6fcdfbdea625" Namespace="calico-system" Pod="goldmane-54d579b49d-nz6q2" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-goldmane--54d579b49d--nz6q2-eth0" Oct 13 06:52:34.695480 containerd[1656]: 2025-10-13 06:52:34.651 [INFO][4837] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb0cc284b33 ContainerID="663866e2aada2c863f6e4e9c7f61433864dfafd7c05354c5ae1a6fcdfbdea625" Namespace="calico-system" Pod="goldmane-54d579b49d-nz6q2" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-goldmane--54d579b49d--nz6q2-eth0" Oct 13 06:52:34.695480 containerd[1656]: 2025-10-13 06:52:34.658 [INFO][4837] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="663866e2aada2c863f6e4e9c7f61433864dfafd7c05354c5ae1a6fcdfbdea625" Namespace="calico-system" Pod="goldmane-54d579b49d-nz6q2" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-goldmane--54d579b49d--nz6q2-eth0" Oct 13 06:52:34.695480 containerd[1656]: 2025-10-13 06:52:34.658 [INFO][4837] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="663866e2aada2c863f6e4e9c7f61433864dfafd7c05354c5ae1a6fcdfbdea625" 
Namespace="calico-system" Pod="goldmane-54d579b49d-nz6q2" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-goldmane--54d579b49d--nz6q2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ntuey.gb1.brightbox.com-k8s-goldmane--54d579b49d--nz6q2-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"126b56cc-c792-434a-81d4-98cb03e7ccc5", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 6, 52, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ntuey.gb1.brightbox.com", ContainerID:"663866e2aada2c863f6e4e9c7f61433864dfafd7c05354c5ae1a6fcdfbdea625", Pod:"goldmane-54d579b49d-nz6q2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.100.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califb0cc284b33", MAC:"86:cc:10:2a:fb:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 06:52:34.695480 containerd[1656]: 2025-10-13 06:52:34.684 [INFO][4837] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="663866e2aada2c863f6e4e9c7f61433864dfafd7c05354c5ae1a6fcdfbdea625" Namespace="calico-system" Pod="goldmane-54d579b49d-nz6q2" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-goldmane--54d579b49d--nz6q2-eth0" Oct 13 06:52:34.695891 systemd-networkd[1575]: cali2ed293e6ca4: Gained IPv6LL Oct 13 06:52:34.733331 containerd[1656]: time="2025-10-13T06:52:34.733251411Z" level=info msg="connecting to shim 663866e2aada2c863f6e4e9c7f61433864dfafd7c05354c5ae1a6fcdfbdea625" address="unix:///run/containerd/s/2cd99327dd5c49a24e5f2cbebcb0bddfd9040e99bd60a5f77cf121decff1c596" namespace=k8s.io protocol=ttrpc version=3 Oct 13 06:52:34.792322 systemd[1]: Started cri-containerd-663866e2aada2c863f6e4e9c7f61433864dfafd7c05354c5ae1a6fcdfbdea625.scope - libcontainer container 663866e2aada2c863f6e4e9c7f61433864dfafd7c05354c5ae1a6fcdfbdea625. 
Oct 13 06:52:34.865418 containerd[1656]: time="2025-10-13T06:52:34.865372775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-nz6q2,Uid:126b56cc-c792-434a-81d4-98cb03e7ccc5,Namespace:calico-system,Attempt:0,} returns sandbox id \"663866e2aada2c863f6e4e9c7f61433864dfafd7c05354c5ae1a6fcdfbdea625\"" Oct 13 06:52:34.886386 systemd-networkd[1575]: cali21b93f5069f: Gained IPv6LL Oct 13 06:52:35.447741 containerd[1656]: time="2025-10-13T06:52:35.447691441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589b9ff459-6r9zx,Uid:cd591744-760e-42c9-87b8-ff71e15cfd43,Namespace:calico-apiserver,Attempt:0,}" Oct 13 06:52:35.650719 systemd-networkd[1575]: cali11a322cb5f3: Link UP Oct 13 06:52:35.653241 systemd-networkd[1575]: cali11a322cb5f3: Gained carrier Oct 13 06:52:35.691936 containerd[1656]: 2025-10-13 06:52:35.524 [INFO][4917] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--ntuey.gb1.brightbox.com-k8s-calico--apiserver--589b9ff459--6r9zx-eth0 calico-apiserver-589b9ff459- calico-apiserver cd591744-760e-42c9-87b8-ff71e15cfd43 869 0 2025-10-13 06:52:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:589b9ff459 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-ntuey.gb1.brightbox.com calico-apiserver-589b9ff459-6r9zx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali11a322cb5f3 [] [] }} ContainerID="8191ab10992b0a8cdf7fb0e5ab8b8c66589d03529cf038b63a71246e28325711" Namespace="calico-apiserver" Pod="calico-apiserver-589b9ff459-6r9zx" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-calico--apiserver--589b9ff459--6r9zx-" Oct 13 06:52:35.691936 containerd[1656]: 2025-10-13 06:52:35.524 [INFO][4917] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8191ab10992b0a8cdf7fb0e5ab8b8c66589d03529cf038b63a71246e28325711" Namespace="calico-apiserver" Pod="calico-apiserver-589b9ff459-6r9zx" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-calico--apiserver--589b9ff459--6r9zx-eth0" Oct 13 06:52:35.691936 containerd[1656]: 2025-10-13 06:52:35.572 [INFO][4930] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8191ab10992b0a8cdf7fb0e5ab8b8c66589d03529cf038b63a71246e28325711" HandleID="k8s-pod-network.8191ab10992b0a8cdf7fb0e5ab8b8c66589d03529cf038b63a71246e28325711" Workload="srv--ntuey.gb1.brightbox.com-k8s-calico--apiserver--589b9ff459--6r9zx-eth0" Oct 13 06:52:35.691936 containerd[1656]: 2025-10-13 06:52:35.573 [INFO][4930] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8191ab10992b0a8cdf7fb0e5ab8b8c66589d03529cf038b63a71246e28325711" HandleID="k8s-pod-network.8191ab10992b0a8cdf7fb0e5ab8b8c66589d03529cf038b63a71246e28325711" Workload="srv--ntuey.gb1.brightbox.com-k8s-calico--apiserver--589b9ff459--6r9zx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5730), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-ntuey.gb1.brightbox.com", "pod":"calico-apiserver-589b9ff459-6r9zx", "timestamp":"2025-10-13 06:52:35.572686862 +0000 UTC"}, Hostname:"srv-ntuey.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 
06:52:35.691936 containerd[1656]: 2025-10-13 06:52:35.573 [INFO][4930] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 06:52:35.691936 containerd[1656]: 2025-10-13 06:52:35.573 [INFO][4930] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 06:52:35.691936 containerd[1656]: 2025-10-13 06:52:35.573 [INFO][4930] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-ntuey.gb1.brightbox.com' Oct 13 06:52:35.691936 containerd[1656]: 2025-10-13 06:52:35.586 [INFO][4930] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8191ab10992b0a8cdf7fb0e5ab8b8c66589d03529cf038b63a71246e28325711" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:35.691936 containerd[1656]: 2025-10-13 06:52:35.592 [INFO][4930] ipam/ipam.go 394: Looking up existing affinities for host host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:35.691936 containerd[1656]: 2025-10-13 06:52:35.597 [INFO][4930] ipam/ipam.go 511: Trying affinity for 192.168.100.192/26 host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:35.691936 containerd[1656]: 2025-10-13 06:52:35.602 [INFO][4930] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.192/26 host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:35.691936 containerd[1656]: 2025-10-13 06:52:35.605 [INFO][4930] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:35.691936 containerd[1656]: 2025-10-13 06:52:35.605 [INFO][4930] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.8191ab10992b0a8cdf7fb0e5ab8b8c66589d03529cf038b63a71246e28325711" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:35.691936 containerd[1656]: 2025-10-13 06:52:35.606 [INFO][4930] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8191ab10992b0a8cdf7fb0e5ab8b8c66589d03529cf038b63a71246e28325711 Oct 13 06:52:35.691936 containerd[1656]: 2025-10-13 06:52:35.613 [INFO][4930] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.8191ab10992b0a8cdf7fb0e5ab8b8c66589d03529cf038b63a71246e28325711" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:35.691936 containerd[1656]: 2025-10-13 06:52:35.622 [INFO][4930] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.100.200/26] block=192.168.100.192/26 handle="k8s-pod-network.8191ab10992b0a8cdf7fb0e5ab8b8c66589d03529cf038b63a71246e28325711" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:35.691936 containerd[1656]: 2025-10-13 06:52:35.622 [INFO][4930] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.200/26] handle="k8s-pod-network.8191ab10992b0a8cdf7fb0e5ab8b8c66589d03529cf038b63a71246e28325711" host="srv-ntuey.gb1.brightbox.com" Oct 13 06:52:35.691936 containerd[1656]: 2025-10-13 06:52:35.622 [INFO][4930] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
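Annotation: the ipam_plugin lines above always follow the same sequence: acquire the host-wide IPAM lock, look up the host's block affinity, load the block, claim one address, write the block back, release the lock. The sketch below is a deliberately simplified in-memory stand-in for that flow, not Calico's implementation (which persists blocks in the datastore); it only illustrates why the per-host lock keeps concurrent CNI ADDs from claiming the same address.

```go
package main

import (
	"fmt"
	"net"
	"sync"
)

// blockAllocator hands out addresses from one affine block, serialised by a
// mutex standing in for the "Acquired/Released host-wide IPAM lock" lines.
type blockAllocator struct {
	mu    sync.Mutex // the "host-wide IPAM lock"
	cidr  *net.IPNet
	next  uint8 // offset of the next candidate address within the block
	inUse map[uint8]string
}

func newBlockAllocator(cidr string) *blockAllocator {
	_, n, err := net.ParseCIDR(cidr)
	if err != nil {
		panic(err)
	}
	return &blockAllocator{cidr: n, inUse: map[uint8]string{}}
}

// assign claims one free address for the given handle (here, a pod name).
func (b *blockAllocator) assign(handle string) net.IP {
	b.mu.Lock()
	defer b.mu.Unlock()
	ones, bits := b.cidr.Mask.Size()
	size := uint8(1 << (bits - ones)) // 64 addresses in a /26
	for off := b.next; off < size; off++ {
		if _, taken := b.inUse[off]; taken {
			continue
		}
		b.inUse[off] = handle
		b.next = off + 1
		ip := b.cidr.IP.To4()
		return net.IPv4(ip[0], ip[1], ip[2], ip[3]+off)
	}
	return nil // block exhausted; real IPAM would claim another block
}

func main() {
	alloc := newBlockAllocator("192.168.100.192/26")
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(i int) { // concurrent CNI ADDs racing for the same block
			defer wg.Done()
			fmt.Println(alloc.assign(fmt.Sprintf("pod-%d", i)))
		}(i)
	}
	wg.Wait()
}
```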
Oct 13 06:52:35.691936 containerd[1656]: 2025-10-13 06:52:35.623 [INFO][4930] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.200/26] IPv6=[] ContainerID="8191ab10992b0a8cdf7fb0e5ab8b8c66589d03529cf038b63a71246e28325711" HandleID="k8s-pod-network.8191ab10992b0a8cdf7fb0e5ab8b8c66589d03529cf038b63a71246e28325711" Workload="srv--ntuey.gb1.brightbox.com-k8s-calico--apiserver--589b9ff459--6r9zx-eth0" Oct 13 06:52:35.694721 containerd[1656]: 2025-10-13 06:52:35.633 [INFO][4917] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8191ab10992b0a8cdf7fb0e5ab8b8c66589d03529cf038b63a71246e28325711" Namespace="calico-apiserver" Pod="calico-apiserver-589b9ff459-6r9zx" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-calico--apiserver--589b9ff459--6r9zx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ntuey.gb1.brightbox.com-k8s-calico--apiserver--589b9ff459--6r9zx-eth0", GenerateName:"calico-apiserver-589b9ff459-", Namespace:"calico-apiserver", SelfLink:"", UID:"cd591744-760e-42c9-87b8-ff71e15cfd43", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 6, 52, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"589b9ff459", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ntuey.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-589b9ff459-6r9zx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali11a322cb5f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 06:52:35.694721 containerd[1656]: 2025-10-13 06:52:35.635 [INFO][4917] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.200/32] ContainerID="8191ab10992b0a8cdf7fb0e5ab8b8c66589d03529cf038b63a71246e28325711" Namespace="calico-apiserver" Pod="calico-apiserver-589b9ff459-6r9zx" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-calico--apiserver--589b9ff459--6r9zx-eth0" Oct 13 06:52:35.694721 containerd[1656]: 2025-10-13 06:52:35.635 [INFO][4917] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali11a322cb5f3 ContainerID="8191ab10992b0a8cdf7fb0e5ab8b8c66589d03529cf038b63a71246e28325711" Namespace="calico-apiserver" Pod="calico-apiserver-589b9ff459-6r9zx" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-calico--apiserver--589b9ff459--6r9zx-eth0" Oct 13 06:52:35.694721 containerd[1656]: 2025-10-13 06:52:35.657 [INFO][4917] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8191ab10992b0a8cdf7fb0e5ab8b8c66589d03529cf038b63a71246e28325711" Namespace="calico-apiserver" Pod="calico-apiserver-589b9ff459-6r9zx" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-calico--apiserver--589b9ff459--6r9zx-eth0" Oct 13 06:52:35.694721 containerd[1656]: 2025-10-13 
06:52:35.659 [INFO][4917] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8191ab10992b0a8cdf7fb0e5ab8b8c66589d03529cf038b63a71246e28325711" Namespace="calico-apiserver" Pod="calico-apiserver-589b9ff459-6r9zx" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-calico--apiserver--589b9ff459--6r9zx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ntuey.gb1.brightbox.com-k8s-calico--apiserver--589b9ff459--6r9zx-eth0", GenerateName:"calico-apiserver-589b9ff459-", Namespace:"calico-apiserver", SelfLink:"", UID:"cd591744-760e-42c9-87b8-ff71e15cfd43", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 6, 52, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"589b9ff459", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ntuey.gb1.brightbox.com", ContainerID:"8191ab10992b0a8cdf7fb0e5ab8b8c66589d03529cf038b63a71246e28325711", Pod:"calico-apiserver-589b9ff459-6r9zx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.100.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali11a322cb5f3", MAC:"2a:54:eb:f5:ea:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 06:52:35.694721 containerd[1656]: 2025-10-13 06:52:35.681 [INFO][4917] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8191ab10992b0a8cdf7fb0e5ab8b8c66589d03529cf038b63a71246e28325711" Namespace="calico-apiserver" Pod="calico-apiserver-589b9ff459-6r9zx" WorkloadEndpoint="srv--ntuey.gb1.brightbox.com-k8s-calico--apiserver--589b9ff459--6r9zx-eth0" Oct 13 06:52:35.755050 containerd[1656]: time="2025-10-13T06:52:35.754885101Z" level=info msg="connecting to shim 8191ab10992b0a8cdf7fb0e5ab8b8c66589d03529cf038b63a71246e28325711" address="unix:///run/containerd/s/5f11391c4de7b8f381bb8144a3877369c1e5e1915daf7d4683cb117d2cd9a36a" namespace=k8s.io protocol=ttrpc version=3 Oct 13 06:52:35.792434 systemd[1]: Started cri-containerd-8191ab10992b0a8cdf7fb0e5ab8b8c66589d03529cf038b63a71246e28325711.scope - libcontainer container 8191ab10992b0a8cdf7fb0e5ab8b8c66589d03529cf038b63a71246e28325711. 
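Annotation: both sandboxes are wired up the same way: containerd logs "connecting to shim <id>" with an `address="unix:///run/containerd/s/…"` and `protocol=ttrpc version=3`, then systemd starts a transient `cri-containerd-<id>.scope` unit for the container. The address is an ordinary Unix-domain socket. The hedged sketch below (standard library only; the socket path is copied from the log and only exists on that node while the shim runs) merely shows that the `unix://` prefix is a scheme to strip before dialing; it does not speak the ttrpc protocol itself.

```go
package main

import (
	"fmt"
	"net"
	"strings"
	"time"
)

func main() {
	// Shim address exactly as logged by containerd for the apiserver sandbox.
	addr := "unix:///run/containerd/s/5f11391c4de7b8f381bb8144a3877369c1e5e1915daf7d4683cb117d2cd9a36a"

	// The scheme selects the transport; the dialer wants only the path.
	path := strings.TrimPrefix(addr, "unix://")

	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		// Expected anywhere except on the node itself while the shim is up.
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to shim socket:", conn.RemoteAddr())
	// A real client would now run ttrpc over this connection.
}
```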
Oct 13 06:52:35.867098 containerd[1656]: time="2025-10-13T06:52:35.866788653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-589b9ff459-6r9zx,Uid:cd591744-760e-42c9-87b8-ff71e15cfd43,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8191ab10992b0a8cdf7fb0e5ab8b8c66589d03529cf038b63a71246e28325711\"" Oct 13 06:52:36.358612 systemd-networkd[1575]: califb0cc284b33: Gained IPv6LL Oct 13 06:52:36.796692 containerd[1656]: time="2025-10-13T06:52:36.796441926Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:52:36.797858 containerd[1656]: time="2025-10-13T06:52:36.797526054Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Oct 13 06:52:36.797971 containerd[1656]: time="2025-10-13T06:52:36.797935731Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:52:36.800728 containerd[1656]: time="2025-10-13T06:52:36.800258327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:52:36.800953 containerd[1656]: time="2025-10-13T06:52:36.800926817Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.186033797s" Oct 13 06:52:36.800996 containerd[1656]: time="2025-10-13T06:52:36.800959832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Oct 13 06:52:36.850266 containerd[1656]: time="2025-10-13T06:52:36.849777017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Oct 13 06:52:36.903345 containerd[1656]: time="2025-10-13T06:52:36.903292486Z" level=info msg="CreateContainer within sandbox \"843b0022ca1816409034ab2cb046993f4fcab976f467bfa1a9497c4f79c98494\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 13 06:52:36.912064 containerd[1656]: time="2025-10-13T06:52:36.911275156Z" level=info msg="Container c11effc6e61c48aeebf1cbd185d993f616f1dee1e85255f49c49952ff412ce97: CDI devices from CRI Config.CDIDevices: []" Oct 13 06:52:36.919309 containerd[1656]: time="2025-10-13T06:52:36.919272723Z" level=info msg="CreateContainer within sandbox \"843b0022ca1816409034ab2cb046993f4fcab976f467bfa1a9497c4f79c98494\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c11effc6e61c48aeebf1cbd185d993f616f1dee1e85255f49c49952ff412ce97\"" Oct 13 06:52:36.921515 containerd[1656]: time="2025-10-13T06:52:36.921465293Z" level=info msg="StartContainer for \"c11effc6e61c48aeebf1cbd185d993f616f1dee1e85255f49c49952ff412ce97\"" Oct 13 06:52:36.925596 containerd[1656]: time="2025-10-13T06:52:36.925537275Z" level=info msg="connecting to shim c11effc6e61c48aeebf1cbd185d993f616f1dee1e85255f49c49952ff412ce97" 
address="unix:///run/containerd/s/e2eb54236029d61240f495c78cf3a5339b187c99af4779cd7e1faf7994cf69b0" protocol=ttrpc version=3 Oct 13 06:52:36.936092 systemd-networkd[1575]: cali11a322cb5f3: Gained IPv6LL Oct 13 06:52:36.990977 systemd[1]: Started cri-containerd-c11effc6e61c48aeebf1cbd185d993f616f1dee1e85255f49c49952ff412ce97.scope - libcontainer container c11effc6e61c48aeebf1cbd185d993f616f1dee1e85255f49c49952ff412ce97. Oct 13 06:52:37.068893 containerd[1656]: time="2025-10-13T06:52:37.068764420Z" level=info msg="StartContainer for \"c11effc6e61c48aeebf1cbd185d993f616f1dee1e85255f49c49952ff412ce97\" returns successfully" Oct 13 06:52:38.001529 containerd[1656]: time="2025-10-13T06:52:38.001357860Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c11effc6e61c48aeebf1cbd185d993f616f1dee1e85255f49c49952ff412ce97\" id:\"c556e5be8e37c810ee8d939757c37d3be6f7bd0a1da801bd4e821c4bbe7a7bdd\" pid:5051 exited_at:{seconds:1760338357 nanos:999235574}" Oct 13 06:52:38.024964 kubelet[2977]: I1013 06:52:38.024384 2977 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-649445cb55-nkpkd" podStartSLOduration=28.014357388 podStartE2EDuration="33.024307432s" podCreationTimestamp="2025-10-13 06:52:05 +0000 UTC" firstStartedPulling="2025-10-13 06:52:31.792942572 +0000 UTC m=+46.643324113" lastFinishedPulling="2025-10-13 06:52:36.802892622 +0000 UTC m=+51.653274157" observedRunningTime="2025-10-13 06:52:37.964122116 +0000 UTC m=+52.814503657" watchObservedRunningTime="2025-10-13 06:52:38.024307432 +0000 UTC m=+52.874689086" Oct 13 06:52:40.118236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3104847179.mount: Deactivated successfully. Oct 13 06:52:40.150185 containerd[1656]: time="2025-10-13T06:52:40.149557912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:52:40.153482 containerd[1656]: time="2025-10-13T06:52:40.151741033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Oct 13 06:52:40.154063 containerd[1656]: time="2025-10-13T06:52:40.153985460Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:52:40.159419 containerd[1656]: time="2025-10-13T06:52:40.158600516Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:52:40.159419 containerd[1656]: time="2025-10-13T06:52:40.159209910Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.309380434s" Oct 13 06:52:40.159419 containerd[1656]: time="2025-10-13T06:52:40.159256052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Oct 13 06:52:40.163447 containerd[1656]: time="2025-10-13T06:52:40.163380481Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.3\"" Oct 13 06:52:40.172098 containerd[1656]: time="2025-10-13T06:52:40.171865903Z" level=info msg="CreateContainer within sandbox \"767bd29f4281adfcfbac09772618b4778e4d201867b73b5968714b59c0fd6a23\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Oct 13 06:52:40.198589 containerd[1656]: time="2025-10-13T06:52:40.197432260Z" level=info msg="Container 54139e79466c9b60f93e89bf65078cf2d51a112222a09f535dd02a5a79d287ec: CDI devices from CRI Config.CDIDevices: []" Oct 13 06:52:40.206019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1146238478.mount: Deactivated successfully. Oct 13 06:52:40.216966 containerd[1656]: time="2025-10-13T06:52:40.216912088Z" level=info msg="CreateContainer within sandbox \"767bd29f4281adfcfbac09772618b4778e4d201867b73b5968714b59c0fd6a23\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"54139e79466c9b60f93e89bf65078cf2d51a112222a09f535dd02a5a79d287ec\"" Oct 13 06:52:40.218616 containerd[1656]: time="2025-10-13T06:52:40.218563911Z" level=info msg="StartContainer for \"54139e79466c9b60f93e89bf65078cf2d51a112222a09f535dd02a5a79d287ec\"" Oct 13 06:52:40.220981 containerd[1656]: time="2025-10-13T06:52:40.220945572Z" level=info msg="connecting to shim 54139e79466c9b60f93e89bf65078cf2d51a112222a09f535dd02a5a79d287ec" address="unix:///run/containerd/s/9194806e45e641459be2a75ffd93cfe9f053a2b9c969954a8d83a4f17d156734" protocol=ttrpc version=3 Oct 13 06:52:40.258307 systemd[1]: Started cri-containerd-54139e79466c9b60f93e89bf65078cf2d51a112222a09f535dd02a5a79d287ec.scope - libcontainer container 54139e79466c9b60f93e89bf65078cf2d51a112222a09f535dd02a5a79d287ec. Oct 13 06:52:40.328227 containerd[1656]: time="2025-10-13T06:52:40.328124965Z" level=info msg="StartContainer for \"54139e79466c9b60f93e89bf65078cf2d51a112222a09f535dd02a5a79d287ec\" returns successfully" Oct 13 06:52:40.337259 kubelet[2977]: E1013 06:52:40.337171 2977 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.3\\\": failed to copy: read tcp [2a02:1348:17d:1773:24:19ff:fef4:5dce]:53700->[2606:50c0:8001::154]:443: read: connection reset by peer\"" pod="calico-system/whisker-57dc657b48-v4cqs" podUID="5160b85a-ad83-4639-aa51-0d5bd2f1f636" Oct 13 06:52:40.945462 kubelet[2977]: E1013 06:52:40.945404 2977 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.3\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.3\\\": failed to copy: read tcp [2a02:1348:17d:1773:24:19ff:fef4:5dce]:53700->[2606:50c0:8001::154]:443: read: connection reset by peer\"" pod="calico-system/whisker-57dc657b48-v4cqs" podUID="5160b85a-ad83-4639-aa51-0d5bd2f1f636" Oct 13 06:52:41.932053 kubelet[2977]: E1013 06:52:41.931716 2977 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.3\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.3\\\": failed to copy: read tcp [2a02:1348:17d:1773:24:19ff:fef4:5dce]:53700->[2606:50c0:8001::154]:443: read: connection reset by peer\"" pod="calico-system/whisker-57dc657b48-v4cqs" podUID="5160b85a-ad83-4639-aa51-0d5bd2f1f636" Oct 13 
06:52:44.917977 containerd[1656]: time="2025-10-13T06:52:44.917867687Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:52:44.918901 containerd[1656]: time="2025-10-13T06:52:44.918852733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Oct 13 06:52:44.919531 containerd[1656]: time="2025-10-13T06:52:44.919501251Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:52:44.921478 containerd[1656]: time="2025-10-13T06:52:44.920967182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:52:44.922397 containerd[1656]: time="2025-10-13T06:52:44.922358266Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 4.758938328s" Oct 13 06:52:44.922514 containerd[1656]: time="2025-10-13T06:52:44.922498049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Oct 13 06:52:44.923994 containerd[1656]: time="2025-10-13T06:52:44.923871603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 06:52:44.935808 containerd[1656]: time="2025-10-13T06:52:44.935770201Z" level=info msg="CreateContainer within sandbox \"07eb98edf0bb7fb7038743d89945beafed653bf81585e67ecf51a7e4e1c88803\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 13 06:52:44.992184 containerd[1656]: time="2025-10-13T06:52:44.991265715Z" level=info msg="Container 7895135feeb1072586fe10667184baceb90febf4e7830f9164f0a5fb03d718ff: CDI devices from CRI Config.CDIDevices: []" Oct 13 06:52:45.015753 containerd[1656]: time="2025-10-13T06:52:45.015685104Z" level=info msg="CreateContainer within sandbox \"07eb98edf0bb7fb7038743d89945beafed653bf81585e67ecf51a7e4e1c88803\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7895135feeb1072586fe10667184baceb90febf4e7830f9164f0a5fb03d718ff\"" Oct 13 06:52:45.019041 containerd[1656]: time="2025-10-13T06:52:45.016750768Z" level=info msg="StartContainer for \"7895135feeb1072586fe10667184baceb90febf4e7830f9164f0a5fb03d718ff\"" Oct 13 06:52:45.021256 containerd[1656]: time="2025-10-13T06:52:45.021117759Z" level=info msg="connecting to shim 7895135feeb1072586fe10667184baceb90febf4e7830f9164f0a5fb03d718ff" address="unix:///run/containerd/s/cdf604c2a18253300cc95555e76c487e01cae0783f3a19d4a3ad6920349d665e" protocol=ttrpc version=3 Oct 13 06:52:45.053592 systemd[1]: Started cri-containerd-7895135feeb1072586fe10667184baceb90febf4e7830f9164f0a5fb03d718ff.scope - libcontainer container 7895135feeb1072586fe10667184baceb90febf4e7830f9164f0a5fb03d718ff. 
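Annotation: each pull ends with a pair of figures, the "stop pulling … bytes read=N" line and the "Pulled image … size S in D" line. As a rough back-of-the-envelope, dividing bytes read by the reported elapsed time gives an average rate; this is only an approximation, since the elapsed time also covers registry round-trips and unpacking, and "size" counts stored content rather than what crossed the wire. A small sketch of that arithmetic using the csi:v3.30.3 figures from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures copied from the csi:v3.30.3 pull above.
	bytesRead := 8760527.0
	elapsed, err := time.ParseDuration("4.758938328s")
	if err != nil {
		panic(err)
	}

	mib := bytesRead / (1024 * 1024)
	rate := mib / elapsed.Seconds()
	fmt.Printf("pulled %.1f MiB in %s ≈ %.2f MiB/s\n", mib, elapsed, rate)
}
```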
Oct 13 06:52:45.106539 containerd[1656]: time="2025-10-13T06:52:45.106493057Z" level=info msg="StartContainer for \"7895135feeb1072586fe10667184baceb90febf4e7830f9164f0a5fb03d718ff\" returns successfully" Oct 13 06:52:48.627715 containerd[1656]: time="2025-10-13T06:52:48.627639373Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:52:48.628873 containerd[1656]: time="2025-10-13T06:52:48.628840279Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Oct 13 06:52:48.629614 containerd[1656]: time="2025-10-13T06:52:48.629579693Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:52:48.632183 containerd[1656]: time="2025-10-13T06:52:48.631973089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:52:48.639342 containerd[1656]: time="2025-10-13T06:52:48.639280494Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.715110263s" Oct 13 06:52:48.639342 containerd[1656]: time="2025-10-13T06:52:48.639321162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 06:52:48.641876 containerd[1656]: time="2025-10-13T06:52:48.641855378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Oct 13 06:52:48.646362 containerd[1656]: time="2025-10-13T06:52:48.646336090Z" level=info msg="CreateContainer within sandbox \"25a93bf3f91e6f9dc3ca03a207028ce2f56837e9261614abd321b54a41132a71\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 06:52:48.653482 containerd[1656]: time="2025-10-13T06:52:48.653450704Z" level=info msg="Container 3f7d8d931cec4d1c55067903e9ce989da82e9797d704986db21b073416c95248: CDI devices from CRI Config.CDIDevices: []" Oct 13 06:52:48.658291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1789506144.mount: Deactivated successfully. 
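Annotation: the `var-lib-containerd-tmpmounts-containerd\x2dmount1789506144.mount` unit above is a systemd mount unit whose name is derived from the mount path: the leading "/" is dropped, remaining separators become "-", and bytes that would be ambiguous (including a literal "-") are hex-escaped as `\xNN`. The sketch below is a simplified re-implementation of that escaping for reading such unit names; it is not a replacement for `systemd-escape`, which handles more corner cases. The numeric suffix is taken from the log line; the unescaped path is inferred from the unit name.

```go
package main

import (
	"fmt"
	"strings"
)

// escapePath is a simplified take on `systemd-escape --path`: strip the
// leading "/", turn remaining separators into "-", and hex-escape any byte
// outside [A-Za-z0-9_.], so a literal "-" in a component becomes "\x2d".
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// Temporary mount point containerd used during the pull (path inferred
	// from the unit name in the log).
	path := "/var/lib/containerd/tmpmounts/containerd-mount1789506144"
	fmt.Println(escapePath(path) + ".mount")
	// => var-lib-containerd-tmpmounts-containerd\x2dmount1789506144.mount
}
```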
Oct 13 06:52:48.675299 containerd[1656]: time="2025-10-13T06:52:48.675212740Z" level=info msg="CreateContainer within sandbox \"25a93bf3f91e6f9dc3ca03a207028ce2f56837e9261614abd321b54a41132a71\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3f7d8d931cec4d1c55067903e9ce989da82e9797d704986db21b073416c95248\"" Oct 13 06:52:48.678188 containerd[1656]: time="2025-10-13T06:52:48.676899471Z" level=info msg="StartContainer for \"3f7d8d931cec4d1c55067903e9ce989da82e9797d704986db21b073416c95248\"" Oct 13 06:52:48.680743 containerd[1656]: time="2025-10-13T06:52:48.680677839Z" level=info msg="connecting to shim 3f7d8d931cec4d1c55067903e9ce989da82e9797d704986db21b073416c95248" address="unix:///run/containerd/s/ba74087dd1399b1739be9216ab72182dac6c3ea942db85c3b39d6101a6de82c3" protocol=ttrpc version=3 Oct 13 06:52:48.721388 systemd[1]: Started cri-containerd-3f7d8d931cec4d1c55067903e9ce989da82e9797d704986db21b073416c95248.scope - libcontainer container 3f7d8d931cec4d1c55067903e9ce989da82e9797d704986db21b073416c95248. Oct 13 06:52:48.819605 containerd[1656]: time="2025-10-13T06:52:48.819564121Z" level=info msg="StartContainer for \"3f7d8d931cec4d1c55067903e9ce989da82e9797d704986db21b073416c95248\" returns successfully" Oct 13 06:52:49.030839 kubelet[2977]: I1013 06:52:49.018834 2977 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-589b9ff459-dqwm5" podStartSLOduration=33.151802171 podStartE2EDuration="48.018815741s" podCreationTimestamp="2025-10-13 06:52:01 +0000 UTC" firstStartedPulling="2025-10-13 06:52:33.774441107 +0000 UTC m=+48.624822642" lastFinishedPulling="2025-10-13 06:52:48.641454677 +0000 UTC m=+63.491836212" observedRunningTime="2025-10-13 06:52:49.014670573 +0000 UTC m=+63.865052122" watchObservedRunningTime="2025-10-13 06:52:49.018815741 +0000 UTC m=+63.869197295" Oct 13 06:52:54.008674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1443909535.mount: Deactivated successfully. 
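Annotation: the kubelet pod_startup_latency_tracker line above prints its timestamps in Go's default `time.Time` string form, with a trailing `m=+…` monotonic-clock reading that is not part of the wall-clock value. A minimal sketch, with the values copied from the calico-apiserver-589b9ff459-dqwm5 entry, that strips the monotonic suffix, parses the timestamps, and recomputes the pulling window and the creation-to-running interval (the latter only approximately matches podStartE2EDuration, which kubelet derives internally):

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// parseKubeletTime parses timestamps as kubelet prints them, e.g.
// "2025-10-13 06:52:33.774441107 +0000 UTC m=+48.624822642".
func parseKubeletTime(s string) (time.Time, error) {
	if i := strings.Index(s, " m="); i >= 0 {
		s = s[:i] // drop the monotonic clock reading
	}
	// Fractional seconds are accepted even though the layout omits them.
	return time.Parse("2006-01-02 15:04:05 -0700 MST", s)
}

func main() {
	// Values copied from the pod_startup_latency_tracker line above.
	created, _ := parseKubeletTime("2025-10-13 06:52:01 +0000 UTC")
	firstPull, _ := parseKubeletTime("2025-10-13 06:52:33.774441107 +0000 UTC m=+48.624822642")
	lastPull, _ := parseKubeletTime("2025-10-13 06:52:48.641454677 +0000 UTC m=+63.491836212")
	running, _ := parseKubeletTime("2025-10-13 06:52:49.014670573 +0000 UTC m=+63.865052122")

	fmt.Println("time spent pulling images:", lastPull.Sub(firstPull))
	fmt.Println("creation to running (≈ podStartE2EDuration):", running.Sub(created))
}
```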
Oct 13 06:52:54.654603 containerd[1656]: time="2025-10-13T06:52:54.654473942Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:52:54.656004 containerd[1656]: time="2025-10-13T06:52:54.655974880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Oct 13 06:52:54.663915 containerd[1656]: time="2025-10-13T06:52:54.663734209Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:52:54.667366 containerd[1656]: time="2025-10-13T06:52:54.667340287Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:52:54.669714 containerd[1656]: time="2025-10-13T06:52:54.669391294Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 6.027419722s" Oct 13 06:52:54.669714 containerd[1656]: time="2025-10-13T06:52:54.669423973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Oct 13 06:52:54.677007 containerd[1656]: time="2025-10-13T06:52:54.676985415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 06:52:54.713118 containerd[1656]: time="2025-10-13T06:52:54.713029106Z" level=info msg="CreateContainer within sandbox \"663866e2aada2c863f6e4e9c7f61433864dfafd7c05354c5ae1a6fcdfbdea625\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Oct 13 06:52:54.732663 containerd[1656]: time="2025-10-13T06:52:54.732614077Z" level=info msg="Container ab3a18ae83372152306065331d693c721a4e3167c84725ae8ced1a63f85f8218: CDI devices from CRI Config.CDIDevices: []" Oct 13 06:52:54.737710 containerd[1656]: time="2025-10-13T06:52:54.737678085Z" level=info msg="CreateContainer within sandbox \"663866e2aada2c863f6e4e9c7f61433864dfafd7c05354c5ae1a6fcdfbdea625\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"ab3a18ae83372152306065331d693c721a4e3167c84725ae8ced1a63f85f8218\"" Oct 13 06:52:54.738331 containerd[1656]: time="2025-10-13T06:52:54.738249877Z" level=info msg="StartContainer for \"ab3a18ae83372152306065331d693c721a4e3167c84725ae8ced1a63f85f8218\"" Oct 13 06:52:54.741031 containerd[1656]: time="2025-10-13T06:52:54.740978800Z" level=info msg="connecting to shim ab3a18ae83372152306065331d693c721a4e3167c84725ae8ced1a63f85f8218" address="unix:///run/containerd/s/2cd99327dd5c49a24e5f2cbebcb0bddfd9040e99bd60a5f77cf121decff1c596" protocol=ttrpc version=3 Oct 13 06:52:54.842424 systemd[1]: Started cri-containerd-ab3a18ae83372152306065331d693c721a4e3167c84725ae8ced1a63f85f8218.scope - libcontainer container ab3a18ae83372152306065331d693c721a4e3167c84725ae8ced1a63f85f8218. 
Oct 13 06:52:55.000791 containerd[1656]: time="2025-10-13T06:52:54.999792403Z" level=info msg="StartContainer for \"ab3a18ae83372152306065331d693c721a4e3167c84725ae8ced1a63f85f8218\" returns successfully" Oct 13 06:52:55.104020 kubelet[2977]: I1013 06:52:55.102796 2977 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-nz6q2" podStartSLOduration=31.286821467 podStartE2EDuration="51.095925086s" podCreationTimestamp="2025-10-13 06:52:04 +0000 UTC" firstStartedPulling="2025-10-13 06:52:34.867769723 +0000 UTC m=+49.718151258" lastFinishedPulling="2025-10-13 06:52:54.676873341 +0000 UTC m=+69.527254877" observedRunningTime="2025-10-13 06:52:55.094252971 +0000 UTC m=+69.944634531" watchObservedRunningTime="2025-10-13 06:52:55.095925086 +0000 UTC m=+69.946306645" Oct 13 06:52:55.146008 containerd[1656]: time="2025-10-13T06:52:55.145959285Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:52:55.148162 containerd[1656]: time="2025-10-13T06:52:55.146305376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Oct 13 06:52:55.148162 containerd[1656]: time="2025-10-13T06:52:55.148016643Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 470.911983ms" Oct 13 06:52:55.148162 containerd[1656]: time="2025-10-13T06:52:55.148045100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 06:52:55.168539 containerd[1656]: time="2025-10-13T06:52:55.168470958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Oct 13 06:52:55.204282 containerd[1656]: time="2025-10-13T06:52:55.204226353Z" level=info msg="CreateContainer within sandbox \"8191ab10992b0a8cdf7fb0e5ab8b8c66589d03529cf038b63a71246e28325711\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 06:52:55.220996 containerd[1656]: time="2025-10-13T06:52:55.219520677Z" level=info msg="Container e541433b97617bcd1829cae4eb69e72de1bf5b4d2d568fa9ed7ab388e9b10d17: CDI devices from CRI Config.CDIDevices: []" Oct 13 06:52:55.232875 containerd[1656]: time="2025-10-13T06:52:55.232778272Z" level=info msg="CreateContainer within sandbox \"8191ab10992b0a8cdf7fb0e5ab8b8c66589d03529cf038b63a71246e28325711\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e541433b97617bcd1829cae4eb69e72de1bf5b4d2d568fa9ed7ab388e9b10d17\"" Oct 13 06:52:55.235382 containerd[1656]: time="2025-10-13T06:52:55.234388590Z" level=info msg="StartContainer for \"e541433b97617bcd1829cae4eb69e72de1bf5b4d2d568fa9ed7ab388e9b10d17\"" Oct 13 06:52:55.238187 containerd[1656]: time="2025-10-13T06:52:55.237875941Z" level=info msg="connecting to shim e541433b97617bcd1829cae4eb69e72de1bf5b4d2d568fa9ed7ab388e9b10d17" address="unix:///run/containerd/s/5f11391c4de7b8f381bb8144a3877369c1e5e1915daf7d4683cb117d2cd9a36a" protocol=ttrpc version=3 Oct 13 06:52:55.269515 systemd[1]: Started cri-containerd-e541433b97617bcd1829cae4eb69e72de1bf5b4d2d568fa9ed7ab388e9b10d17.scope - 
libcontainer container e541433b97617bcd1829cae4eb69e72de1bf5b4d2d568fa9ed7ab388e9b10d17. Oct 13 06:52:55.371401 containerd[1656]: time="2025-10-13T06:52:55.371349354Z" level=info msg="StartContainer for \"e541433b97617bcd1829cae4eb69e72de1bf5b4d2d568fa9ed7ab388e9b10d17\" returns successfully" Oct 13 06:52:55.417484 containerd[1656]: time="2025-10-13T06:52:55.417126836Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab3a18ae83372152306065331d693c721a4e3167c84725ae8ced1a63f85f8218\" id:\"20aa6d0cd6fa46ae447f809ace16ecd17f0c64242624661d7d65ed062f8bb3d1\" pid:5272 exit_status:1 exited_at:{seconds:1760338375 nanos:381530009}" Oct 13 06:52:56.139883 kubelet[2977]: I1013 06:52:56.138749 2977 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-589b9ff459-6r9zx" podStartSLOduration=35.844631665 podStartE2EDuration="55.138729855s" podCreationTimestamp="2025-10-13 06:52:01 +0000 UTC" firstStartedPulling="2025-10-13 06:52:35.870711761 +0000 UTC m=+50.721093310" lastFinishedPulling="2025-10-13 06:52:55.164809965 +0000 UTC m=+70.015191500" observedRunningTime="2025-10-13 06:52:56.138453724 +0000 UTC m=+70.988835260" watchObservedRunningTime="2025-10-13 06:52:56.138729855 +0000 UTC m=+70.989111408" Oct 13 06:52:56.325455 containerd[1656]: time="2025-10-13T06:52:56.325329432Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab3a18ae83372152306065331d693c721a4e3167c84725ae8ced1a63f85f8218\" id:\"9864fd62923b15e6781781569b262841c32de577e8db9d1d3b44da86a8a74e54\" pid:5337 exit_status:1 exited_at:{seconds:1760338376 nanos:324834526}" Oct 13 06:52:57.463114 containerd[1656]: time="2025-10-13T06:52:57.462997291Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab3a18ae83372152306065331d693c721a4e3167c84725ae8ced1a63f85f8218\" id:\"ef30c2bb0437ea745355f20caae7a618ffef0ffa9ae7f6ad8f5b10a0c2ee94b9\" pid:5364 exit_status:1 exited_at:{seconds:1760338377 nanos:462538895}" Oct 13 06:52:57.517494 containerd[1656]: time="2025-10-13T06:52:57.517058180Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:52:57.517494 containerd[1656]: time="2025-10-13T06:52:57.517185979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Oct 13 06:52:57.519657 containerd[1656]: time="2025-10-13T06:52:57.519563145Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:52:57.522367 containerd[1656]: time="2025-10-13T06:52:57.522338915Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:52:57.528377 containerd[1656]: time="2025-10-13T06:52:57.528331160Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.35980867s" Oct 13 06:52:57.528377 containerd[1656]: 
time="2025-10-13T06:52:57.528370363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Oct 13 06:52:57.540918 containerd[1656]: time="2025-10-13T06:52:57.540862385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Oct 13 06:52:57.553536 containerd[1656]: time="2025-10-13T06:52:57.553509767Z" level=info msg="CreateContainer within sandbox \"07eb98edf0bb7fb7038743d89945beafed653bf81585e67ecf51a7e4e1c88803\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 13 06:52:57.573508 containerd[1656]: time="2025-10-13T06:52:57.573464644Z" level=info msg="Container 683c97d5688f8261ba04e40bdedbf321ca3fd1c37d48df24b6e5ee7ba0289e4d: CDI devices from CRI Config.CDIDevices: []" Oct 13 06:52:57.587287 containerd[1656]: time="2025-10-13T06:52:57.587113871Z" level=info msg="CreateContainer within sandbox \"07eb98edf0bb7fb7038743d89945beafed653bf81585e67ecf51a7e4e1c88803\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"683c97d5688f8261ba04e40bdedbf321ca3fd1c37d48df24b6e5ee7ba0289e4d\"" Oct 13 06:52:57.590758 containerd[1656]: time="2025-10-13T06:52:57.590485957Z" level=info msg="StartContainer for \"683c97d5688f8261ba04e40bdedbf321ca3fd1c37d48df24b6e5ee7ba0289e4d\"" Oct 13 06:52:57.605427 containerd[1656]: time="2025-10-13T06:52:57.605359204Z" level=info msg="connecting to shim 683c97d5688f8261ba04e40bdedbf321ca3fd1c37d48df24b6e5ee7ba0289e4d" address="unix:///run/containerd/s/cdf604c2a18253300cc95555e76c487e01cae0783f3a19d4a3ad6920349d665e" protocol=ttrpc version=3 Oct 13 06:52:57.659407 systemd[1]: Started cri-containerd-683c97d5688f8261ba04e40bdedbf321ca3fd1c37d48df24b6e5ee7ba0289e4d.scope - libcontainer container 683c97d5688f8261ba04e40bdedbf321ca3fd1c37d48df24b6e5ee7ba0289e4d. 
Oct 13 06:52:57.755767 containerd[1656]: time="2025-10-13T06:52:57.755626252Z" level=info msg="StartContainer for \"683c97d5688f8261ba04e40bdedbf321ca3fd1c37d48df24b6e5ee7ba0289e4d\" returns successfully" Oct 13 06:52:58.182195 kubelet[2977]: I1013 06:52:58.180127 2977 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-vjdvq" podStartSLOduration=28.608818889 podStartE2EDuration="53.144039571s" podCreationTimestamp="2025-10-13 06:52:05 +0000 UTC" firstStartedPulling="2025-10-13 06:52:33.005022661 +0000 UTC m=+47.855404196" lastFinishedPulling="2025-10-13 06:52:57.540243343 +0000 UTC m=+72.390624878" observedRunningTime="2025-10-13 06:52:58.142021996 +0000 UTC m=+72.992403555" watchObservedRunningTime="2025-10-13 06:52:58.144039571 +0000 UTC m=+72.994421132" Oct 13 06:52:58.713474 kubelet[2977]: I1013 06:52:58.710723 2977 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 13 06:52:58.722121 kubelet[2977]: I1013 06:52:58.721802 2977 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 13 06:52:59.204348 containerd[1656]: time="2025-10-13T06:52:59.204297132Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c11effc6e61c48aeebf1cbd185d993f616f1dee1e85255f49c49952ff412ce97\" id:\"014158f550342872ca7e607609f23faf11739742ee9f635faecf0be420b7ecfe\" pid:5420 exited_at:{seconds:1760338379 nanos:203539211}" Oct 13 06:52:59.327272 containerd[1656]: time="2025-10-13T06:52:59.327098337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:52:59.328585 containerd[1656]: time="2025-10-13T06:52:59.328560998Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Oct 13 06:52:59.328953 containerd[1656]: time="2025-10-13T06:52:59.328924959Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:52:59.333018 containerd[1656]: time="2025-10-13T06:52:59.332895352Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 06:52:59.335836 containerd[1656]: time="2025-10-13T06:52:59.335732192Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.794836783s" Oct 13 06:52:59.336120 containerd[1656]: time="2025-10-13T06:52:59.336022023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Oct 13 06:52:59.344891 containerd[1656]: time="2025-10-13T06:52:59.344774653Z" level=info msg="CreateContainer within sandbox \"767bd29f4281adfcfbac09772618b4778e4d201867b73b5968714b59c0fd6a23\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Oct 13 06:52:59.357380 containerd[1656]: 
time="2025-10-13T06:52:59.357119196Z" level=info msg="Container 4b604c9b52fed0fe86150f9fbe5296554a6b01d5967f71e1d354612fad09f338: CDI devices from CRI Config.CDIDevices: []" Oct 13 06:52:59.375611 containerd[1656]: time="2025-10-13T06:52:59.375557223Z" level=info msg="CreateContainer within sandbox \"767bd29f4281adfcfbac09772618b4778e4d201867b73b5968714b59c0fd6a23\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"4b604c9b52fed0fe86150f9fbe5296554a6b01d5967f71e1d354612fad09f338\"" Oct 13 06:52:59.376891 containerd[1656]: time="2025-10-13T06:52:59.376607893Z" level=info msg="StartContainer for \"4b604c9b52fed0fe86150f9fbe5296554a6b01d5967f71e1d354612fad09f338\"" Oct 13 06:52:59.377966 containerd[1656]: time="2025-10-13T06:52:59.377941232Z" level=info msg="connecting to shim 4b604c9b52fed0fe86150f9fbe5296554a6b01d5967f71e1d354612fad09f338" address="unix:///run/containerd/s/9194806e45e641459be2a75ffd93cfe9f053a2b9c969954a8d83a4f17d156734" protocol=ttrpc version=3 Oct 13 06:52:59.407411 systemd[1]: Started cri-containerd-4b604c9b52fed0fe86150f9fbe5296554a6b01d5967f71e1d354612fad09f338.scope - libcontainer container 4b604c9b52fed0fe86150f9fbe5296554a6b01d5967f71e1d354612fad09f338. Oct 13 06:52:59.501544 containerd[1656]: time="2025-10-13T06:52:59.501429834Z" level=info msg="StartContainer for \"4b604c9b52fed0fe86150f9fbe5296554a6b01d5967f71e1d354612fad09f338\" returns successfully" Oct 13 06:53:00.003253 containerd[1656]: time="2025-10-13T06:53:00.003191350Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6352b9a6d93362d383bc30e0f4701caf6b8cc61d0d0abdd3e26ef639e8002964\" id:\"5b56b9d4308fff17ce888c242f00580f92490eb0ecb6a765d94e6e474f18cee7\" pid:5478 exited_at:{seconds:1760338380 nanos:2802110}" Oct 13 06:53:08.179843 containerd[1656]: time="2025-10-13T06:53:08.179760550Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c11effc6e61c48aeebf1cbd185d993f616f1dee1e85255f49c49952ff412ce97\" id:\"514dda9ed1bd0c3fcde158f2a760bb70ff66d93f5263ef4934cf66fa6340a978\" pid:5506 exited_at:{seconds:1760338388 nanos:175027254}" Oct 13 06:53:18.807960 systemd[1]: Started sshd@9-10.244.93.206:22-139.178.68.195:55076.service - OpenSSH per-connection server daemon (139.178.68.195:55076). Oct 13 06:53:19.829912 sshd[5527]: Accepted publickey for core from 139.178.68.195 port 55076 ssh2: RSA SHA256:fRul8eAVig/XyA1eZza2GOOLGaWCYkSVMLwjSf7WhTM Oct 13 06:53:19.831909 sshd-session[5527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 06:53:19.841807 systemd-logind[1638]: New session 12 of user core. Oct 13 06:53:19.856060 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 13 06:53:21.148181 sshd[5530]: Connection closed by 139.178.68.195 port 55076 Oct 13 06:53:21.146877 sshd-session[5527]: pam_unix(sshd:session): session closed for user core Oct 13 06:53:21.173000 systemd[1]: sshd@9-10.244.93.206:22-139.178.68.195:55076.service: Deactivated successfully. Oct 13 06:53:21.177550 systemd[1]: session-12.scope: Deactivated successfully. Oct 13 06:53:21.182622 systemd-logind[1638]: Session 12 logged out. Waiting for processes to exit. Oct 13 06:53:21.188307 systemd-logind[1638]: Removed session 12. Oct 13 06:53:26.318801 systemd[1]: Started sshd@10-10.244.93.206:22-139.178.68.195:55086.service - OpenSSH per-connection server daemon (139.178.68.195:55086). 
Oct 13 06:53:27.306311 sshd[5548]: Accepted publickey for core from 139.178.68.195 port 55086 ssh2: RSA SHA256:fRul8eAVig/XyA1eZza2GOOLGaWCYkSVMLwjSf7WhTM Oct 13 06:53:27.309983 sshd-session[5548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 06:53:27.324557 systemd-logind[1638]: New session 13 of user core. Oct 13 06:53:27.330355 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 13 06:53:27.607309 containerd[1656]: time="2025-10-13T06:53:27.605017447Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab3a18ae83372152306065331d693c721a4e3167c84725ae8ced1a63f85f8218\" id:\"99b53fab5c2039eb7aa827e576ebe0c0ec7f634300f7235eb849b7994b4ca574\" pid:5563 exited_at:{seconds:1760338407 nanos:589638641}" Oct 13 06:53:27.819944 kubelet[2977]: I1013 06:53:27.766201 2977 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-57dc657b48-v4cqs" podStartSLOduration=30.492515953 podStartE2EDuration="59.729526935s" podCreationTimestamp="2025-10-13 06:52:28 +0000 UTC" firstStartedPulling="2025-10-13 06:52:30.100707896 +0000 UTC m=+44.951089435" lastFinishedPulling="2025-10-13 06:52:59.337718879 +0000 UTC m=+74.188100417" observedRunningTime="2025-10-13 06:53:00.198532908 +0000 UTC m=+75.048914560" watchObservedRunningTime="2025-10-13 06:53:27.729526935 +0000 UTC m=+102.579908502" Oct 13 06:53:28.280032 sshd[5570]: Connection closed by 139.178.68.195 port 55086 Oct 13 06:53:28.288625 sshd-session[5548]: pam_unix(sshd:session): session closed for user core Oct 13 06:53:28.295372 systemd[1]: sshd@10-10.244.93.206:22-139.178.68.195:55086.service: Deactivated successfully. Oct 13 06:53:28.299792 systemd[1]: session-13.scope: Deactivated successfully. Oct 13 06:53:28.301857 systemd-logind[1638]: Session 13 logged out. Waiting for processes to exit. Oct 13 06:53:28.306596 systemd-logind[1638]: Removed session 13. Oct 13 06:53:30.098545 containerd[1656]: time="2025-10-13T06:53:30.089196676Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6352b9a6d93362d383bc30e0f4701caf6b8cc61d0d0abdd3e26ef639e8002964\" id:\"2d91207b2828bd68de2af74b6d9cca80b262ca00a2ebb85c4d95166c1ac70d6d\" pid:5600 exited_at:{seconds:1760338410 nanos:88878505}" Oct 13 06:53:31.565941 containerd[1656]: time="2025-10-13T06:53:31.565898572Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab3a18ae83372152306065331d693c721a4e3167c84725ae8ced1a63f85f8218\" id:\"10beb485be0964bdb3266c297fa47a08c54368c78c73ca0e5cf5177b9829ebd6\" pid:5626 exited_at:{seconds:1760338411 nanos:565363974}" Oct 13 06:53:33.438666 systemd[1]: Started sshd@11-10.244.93.206:22-139.178.68.195:53812.service - OpenSSH per-connection server daemon (139.178.68.195:53812). Oct 13 06:53:34.452000 sshd[5639]: Accepted publickey for core from 139.178.68.195 port 53812 ssh2: RSA SHA256:fRul8eAVig/XyA1eZza2GOOLGaWCYkSVMLwjSf7WhTM Oct 13 06:53:34.457287 sshd-session[5639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 06:53:34.466253 systemd-logind[1638]: New session 14 of user core. Oct 13 06:53:34.474868 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 13 06:53:35.346832 sshd[5643]: Connection closed by 139.178.68.195 port 53812 Oct 13 06:53:35.347626 sshd-session[5639]: pam_unix(sshd:session): session closed for user core Oct 13 06:53:35.356721 systemd[1]: sshd@11-10.244.93.206:22-139.178.68.195:53812.service: Deactivated successfully. 
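Annotation: the "Accepted publickey … RSA SHA256:fRul8…" lines identify the client key by its OpenSSH fingerprint, which is the unpadded base64 of a SHA-256 digest over the binary public-key blob (the base64-decoded second field of an authorized_keys line). The sketch below reproduces that computation; the example key material is a hypothetical placeholder, since the actual key cannot be recovered from the fingerprint in the log.

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"strings"
)

// fingerprint reproduces the "SHA256:…" form sshd logs on "Accepted publickey".
func fingerprint(authorizedKeysLine string) (string, error) {
	fields := strings.Fields(authorizedKeysLine)
	if len(fields) < 2 {
		return "", fmt.Errorf("not an authorized_keys entry")
	}
	blob, err := base64.StdEncoding.DecodeString(fields[1])
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(blob)
	return "SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:]), nil
}

func main() {
	// Placeholder key material: NOT the key behind the fingerprint in the log.
	fake := base64.StdEncoding.EncodeToString([]byte("ssh-rsa example blob"))
	fp, err := fingerprint("ssh-rsa " + fake + " core@example")
	if err != nil {
		panic(err)
	}
	fmt.Println(fp)
}
```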
Oct 13 06:53:35.364003 systemd[1]: session-14.scope: Deactivated successfully. Oct 13 06:53:35.367305 systemd-logind[1638]: Session 14 logged out. Waiting for processes to exit. Oct 13 06:53:35.371327 systemd-logind[1638]: Removed session 14. Oct 13 06:53:35.508253 systemd[1]: Started sshd@12-10.244.93.206:22-139.178.68.195:53826.service - OpenSSH per-connection server daemon (139.178.68.195:53826). Oct 13 06:53:36.437015 sshd[5656]: Accepted publickey for core from 139.178.68.195 port 53826 ssh2: RSA SHA256:fRul8eAVig/XyA1eZza2GOOLGaWCYkSVMLwjSf7WhTM Oct 13 06:53:36.445386 sshd-session[5656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 06:53:36.456543 systemd-logind[1638]: New session 15 of user core. Oct 13 06:53:36.463172 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 13 06:53:37.256133 sshd[5659]: Connection closed by 139.178.68.195 port 53826 Oct 13 06:53:37.257125 sshd-session[5656]: pam_unix(sshd:session): session closed for user core Oct 13 06:53:37.263703 systemd[1]: sshd@12-10.244.93.206:22-139.178.68.195:53826.service: Deactivated successfully. Oct 13 06:53:37.267661 systemd[1]: session-15.scope: Deactivated successfully. Oct 13 06:53:37.269641 systemd-logind[1638]: Session 15 logged out. Waiting for processes to exit. Oct 13 06:53:37.273605 systemd-logind[1638]: Removed session 15. Oct 13 06:53:37.414356 systemd[1]: Started sshd@13-10.244.93.206:22-139.178.68.195:44478.service - OpenSSH per-connection server daemon (139.178.68.195:44478). Oct 13 06:53:37.996924 containerd[1656]: time="2025-10-13T06:53:37.995904714Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c11effc6e61c48aeebf1cbd185d993f616f1dee1e85255f49c49952ff412ce97\" id:\"476a168256608ed39400bab4a24611af831a435965d6e5806dbe0cdf12b77dfb\" pid:5684 exited_at:{seconds:1760338417 nanos:995327830}" Oct 13 06:53:38.338680 sshd[5669]: Accepted publickey for core from 139.178.68.195 port 44478 ssh2: RSA SHA256:fRul8eAVig/XyA1eZza2GOOLGaWCYkSVMLwjSf7WhTM Oct 13 06:53:38.341786 sshd-session[5669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 06:53:38.351212 systemd-logind[1638]: New session 16 of user core. Oct 13 06:53:38.355327 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 13 06:53:39.078417 sshd[5693]: Connection closed by 139.178.68.195 port 44478 Oct 13 06:53:39.079794 sshd-session[5669]: pam_unix(sshd:session): session closed for user core Oct 13 06:53:39.091455 systemd-logind[1638]: Session 16 logged out. Waiting for processes to exit. Oct 13 06:53:39.092732 systemd[1]: sshd@13-10.244.93.206:22-139.178.68.195:44478.service: Deactivated successfully. Oct 13 06:53:39.096834 systemd[1]: session-16.scope: Deactivated successfully. Oct 13 06:53:39.099530 systemd-logind[1638]: Removed session 16. Oct 13 06:53:44.238673 systemd[1]: Started sshd@14-10.244.93.206:22-139.178.68.195:44480.service - OpenSSH per-connection server daemon (139.178.68.195:44480). Oct 13 06:53:45.192625 sshd[5710]: Accepted publickey for core from 139.178.68.195 port 44480 ssh2: RSA SHA256:fRul8eAVig/XyA1eZza2GOOLGaWCYkSVMLwjSf7WhTM Oct 13 06:53:45.194674 sshd-session[5710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 06:53:45.203275 systemd-logind[1638]: New session 17 of user core. Oct 13 06:53:45.211680 systemd[1]: Started session-17.scope - Session 17 of User core. 
Oct 13 06:53:46.233172 sshd[5713]: Connection closed by 139.178.68.195 port 44480
Oct 13 06:53:46.239084 sshd-session[5710]: pam_unix(sshd:session): session closed for user core
Oct 13 06:53:46.249689 systemd[1]: sshd@14-10.244.93.206:22-139.178.68.195:44480.service: Deactivated successfully.
Oct 13 06:53:46.250254 systemd-logind[1638]: Session 17 logged out. Waiting for processes to exit.
Oct 13 06:53:46.254126 systemd[1]: session-17.scope: Deactivated successfully.
Oct 13 06:53:46.259263 systemd-logind[1638]: Removed session 17.
Oct 13 06:53:51.389999 systemd[1]: Started sshd@15-10.244.93.206:22-139.178.68.195:60214.service - OpenSSH per-connection server daemon (139.178.68.195:60214).
Oct 13 06:53:52.338468 sshd[5732]: Accepted publickey for core from 139.178.68.195 port 60214 ssh2: RSA SHA256:fRul8eAVig/XyA1eZza2GOOLGaWCYkSVMLwjSf7WhTM
Oct 13 06:53:52.342121 sshd-session[5732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 06:53:52.354305 systemd-logind[1638]: New session 18 of user core.
Oct 13 06:53:52.363351 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 13 06:53:53.200306 sshd[5739]: Connection closed by 139.178.68.195 port 60214
Oct 13 06:53:53.204354 sshd-session[5732]: pam_unix(sshd:session): session closed for user core
Oct 13 06:53:53.216369 systemd[1]: sshd@15-10.244.93.206:22-139.178.68.195:60214.service: Deactivated successfully.
Oct 13 06:53:53.220844 systemd[1]: session-18.scope: Deactivated successfully.
Oct 13 06:53:53.222803 systemd-logind[1638]: Session 18 logged out. Waiting for processes to exit.
Oct 13 06:53:53.225891 systemd-logind[1638]: Removed session 18.
Oct 13 06:53:57.394826 containerd[1656]: time="2025-10-13T06:53:57.394734662Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab3a18ae83372152306065331d693c721a4e3167c84725ae8ced1a63f85f8218\" id:\"e6151ca27401658db57816557206b7b228c4c5a4caa31e266e7d71d168eb9cf9\" pid:5763 exited_at:{seconds:1760338437 nanos:361459256}"
Oct 13 06:53:58.365532 systemd[1]: Started sshd@16-10.244.93.206:22-139.178.68.195:51582.service - OpenSSH per-connection server daemon (139.178.68.195:51582).
Oct 13 06:53:59.140052 containerd[1656]: time="2025-10-13T06:53:59.140004696Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c11effc6e61c48aeebf1cbd185d993f616f1dee1e85255f49c49952ff412ce97\" id:\"a0da58c755643c7f465dd2211a261be89f3a6c71457f52883063ac5f2dd0f6c3\" pid:5790 exited_at:{seconds:1760338439 nanos:139750677}"
Oct 13 06:53:59.372205 sshd[5775]: Accepted publickey for core from 139.178.68.195 port 51582 ssh2: RSA SHA256:fRul8eAVig/XyA1eZza2GOOLGaWCYkSVMLwjSf7WhTM
Oct 13 06:53:59.375688 sshd-session[5775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 06:53:59.383438 systemd-logind[1638]: New session 19 of user core.
Oct 13 06:53:59.389345 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 13 06:54:00.264994 containerd[1656]: time="2025-10-13T06:54:00.264943742Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6352b9a6d93362d383bc30e0f4701caf6b8cc61d0d0abdd3e26ef639e8002964\" id:\"c5b10ea0b081d146a48b5d08aab61b20d3cb9a050ef3ebe4dc880bd709ccde3f\" pid:5813 exited_at:{seconds:1760338440 nanos:264495803}"
Oct 13 06:54:00.731555 sshd[5800]: Connection closed by 139.178.68.195 port 51582
Oct 13 06:54:00.737296 sshd-session[5775]: pam_unix(sshd:session): session closed for user core
Oct 13 06:54:00.754395 systemd[1]: sshd@16-10.244.93.206:22-139.178.68.195:51582.service: Deactivated successfully.
Oct 13 06:54:00.759191 systemd[1]: session-19.scope: Deactivated successfully.
Oct 13 06:54:00.760561 systemd-logind[1638]: Session 19 logged out. Waiting for processes to exit.
Oct 13 06:54:00.762942 systemd-logind[1638]: Removed session 19.
Oct 13 06:54:00.888875 systemd[1]: Started sshd@17-10.244.93.206:22-139.178.68.195:51598.service - OpenSSH per-connection server daemon (139.178.68.195:51598).
Oct 13 06:54:01.830033 sshd[5836]: Accepted publickey for core from 139.178.68.195 port 51598 ssh2: RSA SHA256:fRul8eAVig/XyA1eZza2GOOLGaWCYkSVMLwjSf7WhTM
Oct 13 06:54:01.833804 sshd-session[5836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 06:54:01.844975 systemd-logind[1638]: New session 20 of user core.
Oct 13 06:54:01.853303 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 13 06:54:02.867682 sshd[5839]: Connection closed by 139.178.68.195 port 51598
Oct 13 06:54:02.869394 sshd-session[5836]: pam_unix(sshd:session): session closed for user core
Oct 13 06:54:02.884744 systemd[1]: sshd@17-10.244.93.206:22-139.178.68.195:51598.service: Deactivated successfully.
Oct 13 06:54:02.889453 systemd[1]: session-20.scope: Deactivated successfully.
Oct 13 06:54:02.891451 systemd-logind[1638]: Session 20 logged out. Waiting for processes to exit.
Oct 13 06:54:02.895455 systemd-logind[1638]: Removed session 20.
Oct 13 06:54:03.035285 systemd[1]: Started sshd@18-10.244.93.206:22-139.178.68.195:51602.service - OpenSSH per-connection server daemon (139.178.68.195:51602).
Oct 13 06:54:04.007235 sshd[5849]: Accepted publickey for core from 139.178.68.195 port 51602 ssh2: RSA SHA256:fRul8eAVig/XyA1eZza2GOOLGaWCYkSVMLwjSf7WhTM
Oct 13 06:54:04.009876 sshd-session[5849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 06:54:04.018738 systemd-logind[1638]: New session 21 of user core.
Oct 13 06:54:04.026463 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 13 06:54:05.660196 sshd[5853]: Connection closed by 139.178.68.195 port 51602
Oct 13 06:54:05.662801 sshd-session[5849]: pam_unix(sshd:session): session closed for user core
Oct 13 06:54:05.668627 systemd[1]: sshd@18-10.244.93.206:22-139.178.68.195:51602.service: Deactivated successfully.
Oct 13 06:54:05.676116 systemd[1]: session-21.scope: Deactivated successfully.
Oct 13 06:54:05.677577 systemd-logind[1638]: Session 21 logged out. Waiting for processes to exit.
Oct 13 06:54:05.682452 systemd-logind[1638]: Removed session 21.
Oct 13 06:54:05.815170 systemd[1]: Started sshd@19-10.244.93.206:22-139.178.68.195:51612.service - OpenSSH per-connection server daemon (139.178.68.195:51612).
Oct 13 06:54:06.761133 sshd[5878]: Accepted publickey for core from 139.178.68.195 port 51612 ssh2: RSA SHA256:fRul8eAVig/XyA1eZza2GOOLGaWCYkSVMLwjSf7WhTM
Oct 13 06:54:06.763683 sshd-session[5878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 06:54:06.778638 systemd-logind[1638]: New session 22 of user core.
Oct 13 06:54:06.787106 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 13 06:54:08.043972 containerd[1656]: time="2025-10-13T06:54:08.043677016Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c11effc6e61c48aeebf1cbd185d993f616f1dee1e85255f49c49952ff412ce97\" id:\"8833f1d058679459f5978f004c90542dddece6c581858e9bc8f45867d1cb21b2\" pid:5899 exited_at:{seconds:1760338448 nanos:42307640}"
Oct 13 06:54:08.081942 sshd[5881]: Connection closed by 139.178.68.195 port 51612
Oct 13 06:54:08.083466 sshd-session[5878]: pam_unix(sshd:session): session closed for user core
Oct 13 06:54:08.095876 systemd[1]: sshd@19-10.244.93.206:22-139.178.68.195:51612.service: Deactivated successfully.
Oct 13 06:54:08.101670 systemd[1]: session-22.scope: Deactivated successfully.
Oct 13 06:54:08.103940 systemd-logind[1638]: Session 22 logged out. Waiting for processes to exit.
Oct 13 06:54:08.107925 systemd-logind[1638]: Removed session 22.
Oct 13 06:54:08.242573 systemd[1]: Started sshd@20-10.244.93.206:22-139.178.68.195:49350.service - OpenSSH per-connection server daemon (139.178.68.195:49350).
Oct 13 06:54:09.189264 sshd[5912]: Accepted publickey for core from 139.178.68.195 port 49350 ssh2: RSA SHA256:fRul8eAVig/XyA1eZza2GOOLGaWCYkSVMLwjSf7WhTM
Oct 13 06:54:09.192489 sshd-session[5912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 06:54:09.202319 systemd-logind[1638]: New session 23 of user core.
Oct 13 06:54:09.208355 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 13 06:54:09.964946 sshd[5915]: Connection closed by 139.178.68.195 port 49350
Oct 13 06:54:09.965602 sshd-session[5912]: pam_unix(sshd:session): session closed for user core
Oct 13 06:54:09.975905 systemd[1]: sshd@20-10.244.93.206:22-139.178.68.195:49350.service: Deactivated successfully.
Oct 13 06:54:09.980387 systemd[1]: session-23.scope: Deactivated successfully.
Oct 13 06:54:09.983586 systemd-logind[1638]: Session 23 logged out. Waiting for processes to exit.
Oct 13 06:54:09.985975 systemd-logind[1638]: Removed session 23.
Oct 13 06:54:15.123669 systemd[1]: Started sshd@21-10.244.93.206:22-139.178.68.195:49364.service - OpenSSH per-connection server daemon (139.178.68.195:49364).
Oct 13 06:54:16.078235 sshd[5942]: Accepted publickey for core from 139.178.68.195 port 49364 ssh2: RSA SHA256:fRul8eAVig/XyA1eZza2GOOLGaWCYkSVMLwjSf7WhTM
Oct 13 06:54:16.084700 sshd-session[5942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 06:54:16.098261 systemd-logind[1638]: New session 24 of user core.
Oct 13 06:54:16.102327 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 13 06:54:16.897981 sshd[5945]: Connection closed by 139.178.68.195 port 49364
Oct 13 06:54:16.898451 sshd-session[5942]: pam_unix(sshd:session): session closed for user core
Oct 13 06:54:16.918451 systemd[1]: sshd@21-10.244.93.206:22-139.178.68.195:49364.service: Deactivated successfully.
Oct 13 06:54:16.923962 systemd[1]: session-24.scope: Deactivated successfully.
Oct 13 06:54:16.927754 systemd-logind[1638]: Session 24 logged out. Waiting for processes to exit.
Oct 13 06:54:16.929031 systemd-logind[1638]: Removed session 24.
Oct 13 06:54:22.066613 systemd[1]: Started sshd@22-10.244.93.206:22-139.178.68.195:52194.service - OpenSSH per-connection server daemon (139.178.68.195:52194).
Oct 13 06:54:23.109809 sshd[5960]: Accepted publickey for core from 139.178.68.195 port 52194 ssh2: RSA SHA256:fRul8eAVig/XyA1eZza2GOOLGaWCYkSVMLwjSf7WhTM
Oct 13 06:54:23.114722 sshd-session[5960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 13 06:54:23.126221 systemd-logind[1638]: New session 25 of user core.
Oct 13 06:54:23.132299 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 13 06:54:24.556198 sshd[5963]: Connection closed by 139.178.68.195 port 52194
Oct 13 06:54:24.557079 sshd-session[5960]: pam_unix(sshd:session): session closed for user core
Oct 13 06:54:24.575747 systemd[1]: sshd@22-10.244.93.206:22-139.178.68.195:52194.service: Deactivated successfully.
Oct 13 06:54:24.582849 systemd[1]: session-25.scope: Deactivated successfully.
Oct 13 06:54:24.589371 systemd-logind[1638]: Session 25 logged out. Waiting for processes to exit.
Oct 13 06:54:24.591396 systemd-logind[1638]: Removed session 25.
Oct 13 06:54:27.469825 containerd[1656]: time="2025-10-13T06:54:27.469761611Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ab3a18ae83372152306065331d693c721a4e3167c84725ae8ced1a63f85f8218\" id:\"93f41d7306d0d83fa9c4d50d53a9d681214fdd96a49f9b3d6c81bc7d4e53f222\" pid:5986 exited_at:{seconds:1760338467 nanos:468506410}"