Jul 7 09:03:03.960033 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:56:00 -00 2025
Jul 7 09:03:03.960091 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 09:03:03.960112 kernel: BIOS-provided physical RAM map:
Jul 7 09:03:03.960123 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 7 09:03:03.960133 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 7 09:03:03.960143 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 7 09:03:03.960155 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jul 7 09:03:03.960166 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jul 7 09:03:03.960176 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 7 09:03:03.960186 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 7 09:03:03.960201 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 7 09:03:03.960212 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 7 09:03:03.960222 kernel: NX (Execute Disable) protection: active
Jul 7 09:03:03.960233 kernel: APIC: Static calls initialized
Jul 7 09:03:03.960245 kernel: SMBIOS 2.8 present.
Jul 7 09:03:03.960257 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jul 7 09:03:03.960273 kernel: DMI: Memory slots populated: 1/1
Jul 7 09:03:03.960285 kernel: Hypervisor detected: KVM
Jul 7 09:03:03.960296 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 7 09:03:03.960307 kernel: kvm-clock: using sched offset of 5598552835 cycles
Jul 7 09:03:03.960319 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 7 09:03:03.960331 kernel: tsc: Detected 2499.998 MHz processor
Jul 7 09:03:03.960342 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 7 09:03:03.960354 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 7 09:03:03.960365 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jul 7 09:03:03.960382 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 7 09:03:03.960393 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 7 09:03:03.960405 kernel: Using GB pages for direct mapping
Jul 7 09:03:03.960416 kernel: ACPI: Early table checksum verification disabled
Jul 7 09:03:03.960427 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jul 7 09:03:03.960439 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 09:03:03.960450 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 09:03:03.960462 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 09:03:03.960473 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jul 7 09:03:03.960489 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 09:03:03.960501 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 09:03:03.960512 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 09:03:03.960523 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 09:03:03.960535 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jul 7 09:03:03.960546 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jul 7 09:03:03.960566 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jul 7 09:03:03.960582 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jul 7 09:03:03.960594 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jul 7 09:03:03.960607 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jul 7 09:03:03.960618 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jul 7 09:03:03.960630 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jul 7 09:03:03.960642 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jul 7 09:03:03.960654 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jul 7 09:03:03.960670 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff]
Jul 7 09:03:03.960683 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff]
Jul 7 09:03:03.960695 kernel: Zone ranges:
Jul 7 09:03:03.960707 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 7 09:03:03.960719 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jul 7 09:03:03.960730 kernel: Normal empty
Jul 7 09:03:03.960742 kernel: Device empty
Jul 7 09:03:03.960754 kernel: Movable zone start for each node
Jul 7 09:03:03.960766 kernel: Early memory node ranges
Jul 7 09:03:03.960777 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 7 09:03:03.960794 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jul 7 09:03:03.960806 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jul 7 09:03:03.960818 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 7 09:03:03.960830 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 7 09:03:03.960842 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jul 7 09:03:03.960853 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 7 09:03:03.960865 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 7 09:03:03.960877 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 7 09:03:03.960889 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 7 09:03:03.960905 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 7 09:03:03.962002 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 7 09:03:03.962023 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 7 09:03:03.962036 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 7 09:03:03.962048 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 7 09:03:03.962060 kernel: TSC deadline timer available
Jul 7 09:03:03.962072 kernel: CPU topo: Max. logical packages: 16
Jul 7 09:03:03.962096 kernel: CPU topo: Max. logical dies: 16
Jul 7 09:03:03.962108 kernel: CPU topo: Max. dies per package: 1
Jul 7 09:03:03.962128 kernel: CPU topo: Max. threads per core: 1
Jul 7 09:03:03.962140 kernel: CPU topo: Num. cores per package: 1
Jul 7 09:03:03.962152 kernel: CPU topo: Num. threads per package: 1
Jul 7 09:03:03.962164 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs
Jul 7 09:03:03.962176 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 7 09:03:03.962188 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 7 09:03:03.962200 kernel: Booting paravirtualized kernel on KVM
Jul 7 09:03:03.962212 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 7 09:03:03.962225 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jul 7 09:03:03.962241 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144
Jul 7 09:03:03.962254 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152
Jul 7 09:03:03.962265 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jul 7 09:03:03.962277 kernel: kvm-guest: PV spinlocks enabled
Jul 7 09:03:03.962289 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 7 09:03:03.962303 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 09:03:03.962323 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 09:03:03.962339 kernel: random: crng init done
Jul 7 09:03:03.962357 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 09:03:03.962369 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 7 09:03:03.962381 kernel: Fallback order for Node 0: 0
Jul 7 09:03:03.962393 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154
Jul 7 09:03:03.962411 kernel: Policy zone: DMA32
Jul 7 09:03:03.962423 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 09:03:03.962435 kernel: software IO TLB: area num 16.
Jul 7 09:03:03.962447 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jul 7 09:03:03.962459 kernel: Kernel/User page tables isolation: enabled
Jul 7 09:03:03.962476 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 7 09:03:03.962488 kernel: ftrace: allocated 157 pages with 5 groups
Jul 7 09:03:03.962500 kernel: Dynamic Preempt: voluntary
Jul 7 09:03:03.962512 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 09:03:03.962525 kernel: rcu: RCU event tracing is enabled.
Jul 7 09:03:03.962538 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jul 7 09:03:03.962550 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 09:03:03.962562 kernel: Rude variant of Tasks RCU enabled.
Jul 7 09:03:03.962574 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 09:03:03.962586 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 09:03:03.962602 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jul 7 09:03:03.962615 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jul 7 09:03:03.962627 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jul 7 09:03:03.962639 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jul 7 09:03:03.962651 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jul 7 09:03:03.962663 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 09:03:03.962688 kernel: Console: colour VGA+ 80x25
Jul 7 09:03:03.962701 kernel: printk: legacy console [tty0] enabled
Jul 7 09:03:03.962714 kernel: printk: legacy console [ttyS0] enabled
Jul 7 09:03:03.962738 kernel: ACPI: Core revision 20240827
Jul 7 09:03:03.962750 kernel: APIC: Switch to symmetric I/O mode setup
Jul 7 09:03:03.962766 kernel: x2apic enabled
Jul 7 09:03:03.962778 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 7 09:03:03.962803 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jul 7 09:03:03.962816 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Jul 7 09:03:03.962829 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 7 09:03:03.962846 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jul 7 09:03:03.962858 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jul 7 09:03:03.962871 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 7 09:03:03.962883 kernel: Spectre V2 : Mitigation: Retpolines
Jul 7 09:03:03.962895 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 7 09:03:03.962908 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jul 7 09:03:03.962920 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 7 09:03:03.962933 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 7 09:03:03.965010 kernel: MDS: Mitigation: Clear CPU buffers
Jul 7 09:03:03.965027 kernel: MMIO Stale Data: Unknown: No mitigations
Jul 7 09:03:03.965042 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jul 7 09:03:03.965062 kernel: ITS: Mitigation: Aligned branch/return thunks
Jul 7 09:03:03.965085 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 7 09:03:03.965105 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 7 09:03:03.965117 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 7 09:03:03.965130 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 7 09:03:03.965142 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jul 7 09:03:03.965155 kernel: Freeing SMP alternatives memory: 32K
Jul 7 09:03:03.965167 kernel: pid_max: default: 32768 minimum: 301
Jul 7 09:03:03.965179 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 7 09:03:03.965192 kernel: landlock: Up and running.
Jul 7 09:03:03.965204 kernel: SELinux: Initializing.
Jul 7 09:03:03.965222 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 7 09:03:03.965235 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 7 09:03:03.965248 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jul 7 09:03:03.965260 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jul 7 09:03:03.965273 kernel: signal: max sigframe size: 1776
Jul 7 09:03:03.965286 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 09:03:03.965299 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 09:03:03.965311 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level
Jul 7 09:03:03.965324 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 7 09:03:03.965341 kernel: smp: Bringing up secondary CPUs ...
Jul 7 09:03:03.965354 kernel: smpboot: x86: Booting SMP configuration:
Jul 7 09:03:03.965366 kernel: .... node #0, CPUs: #1
Jul 7 09:03:03.965379 kernel: smp: Brought up 1 node, 2 CPUs
Jul 7 09:03:03.965391 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Jul 7 09:03:03.965405 kernel: Memory: 1895672K/2096616K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 194928K reserved, 0K cma-reserved)
Jul 7 09:03:03.965418 kernel: devtmpfs: initialized
Jul 7 09:03:03.965430 kernel: x86/mm: Memory block size: 128MB
Jul 7 09:03:03.965443 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 09:03:03.965460 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jul 7 09:03:03.965473 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 09:03:03.965486 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 09:03:03.965498 kernel: audit: initializing netlink subsys (disabled)
Jul 7 09:03:03.965511 kernel: audit: type=2000 audit(1751878980.917:1): state=initialized audit_enabled=0 res=1
Jul 7 09:03:03.965523 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 09:03:03.965536 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 7 09:03:03.965548 kernel: cpuidle: using governor menu
Jul 7 09:03:03.965561 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 09:03:03.965577 kernel: dca service started, version 1.12.1
Jul 7 09:03:03.965590 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jul 7 09:03:03.965603 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 7 09:03:03.965616 kernel: PCI: Using configuration type 1 for base access
Jul 7 09:03:03.965628 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 7 09:03:03.965641 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 09:03:03.965654 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 09:03:03.965666 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 09:03:03.965691 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 09:03:03.965707 kernel: ACPI: Added _OSI(Module Device)
Jul 7 09:03:03.965719 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 09:03:03.965731 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 09:03:03.965756 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 09:03:03.965769 kernel: ACPI: Interpreter enabled
Jul 7 09:03:03.965781 kernel: ACPI: PM: (supports S0 S5)
Jul 7 09:03:03.965794 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 7 09:03:03.965806 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 7 09:03:03.965819 kernel: PCI: Using E820 reservations for host bridge windows
Jul 7 09:03:03.965836 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 7 09:03:03.965848 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 7 09:03:03.966155 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 7 09:03:03.966321 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 7 09:03:03.966477 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 7 09:03:03.966497 kernel: PCI host bridge to bus 0000:00
Jul 7 09:03:03.966667 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 7 09:03:03.966831 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 7 09:03:03.967752 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 7 09:03:03.967905 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jul 7 09:03:03.968105 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 7 09:03:03.968253 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jul 7 09:03:03.968398 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 7 09:03:03.968589 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 7 09:03:03.968798 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint
Jul 7 09:03:03.969840 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref]
Jul 7 09:03:03.970036 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff]
Jul 7 09:03:03.970213 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref]
Jul 7 09:03:03.970373 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 7 09:03:03.970552 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jul 7 09:03:03.970718 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff]
Jul 7 09:03:03.970876 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jul 7 09:03:03.971206 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jul 7 09:03:03.971366 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jul 7 09:03:03.971567 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jul 7 09:03:03.971725 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff]
Jul 7 09:03:03.971881 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jul 7 09:03:03.972161 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jul 7 09:03:03.972321 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jul 7 09:03:03.972498 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jul 7 09:03:03.972655 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff]
Jul 7 09:03:03.972811 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jul 7 09:03:03.973004 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jul 7 09:03:03.973175 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jul 7 09:03:03.973356 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jul 7 09:03:03.973513 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff]
Jul 7 09:03:03.973668 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jul 7 09:03:03.973824 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jul 7 09:03:03.973999 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jul 7 09:03:03.974188 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jul 7 09:03:03.974344 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff]
Jul 7 09:03:03.974506 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jul 7 09:03:03.974669 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jul 7 09:03:03.974818 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jul 7 09:03:03.975026 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jul 7 09:03:03.975209 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff]
Jul 7 09:03:03.975365 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jul 7 09:03:03.976688 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jul 7 09:03:03.976872 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jul 7 09:03:03.977087 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jul 7 09:03:03.977251 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff]
Jul 7 09:03:03.977408 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jul 7 09:03:03.977563 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jul 7 09:03:03.977719 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jul 7 09:03:03.977889 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jul 7 09:03:03.978105 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff]
Jul 7 09:03:03.978268 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jul 7 09:03:03.978426 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jul 7 09:03:03.978584 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jul 7 09:03:03.978752 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 7 09:03:03.980053 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df]
Jul 7 09:03:03.980233 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff]
Jul 7 09:03:03.980405 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref]
Jul 7 09:03:03.980567 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref]
Jul 7 09:03:03.980764 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 7 09:03:03.980921 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f]
Jul 7 09:03:03.981139 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfea5a000-0xfea5afff]
Jul 7 09:03:03.981297 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref]
Jul 7 09:03:03.981463 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 7 09:03:03.981627 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 7 09:03:03.981791 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 7 09:03:03.986011 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff]
Jul 7 09:03:03.986202 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff]
Jul 7 09:03:03.986382 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 7 09:03:03.986543 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jul 7 09:03:03.986736 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Jul 7 09:03:03.986907 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit]
Jul 7 09:03:03.988131 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jul 7 09:03:03.988299 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jul 7 09:03:03.988458 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jul 7 09:03:03.988629 kernel: pci_bus 0000:02: extended config space not accessible
Jul 7 09:03:03.988820 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint
Jul 7 09:03:03.993050 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f]
Jul 7 09:03:03.993242 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jul 7 09:03:03.993432 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Jul 7 09:03:03.993600 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit]
Jul 7 09:03:03.993765 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jul 7 09:03:03.993971 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Jul 7 09:03:03.994159 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref]
Jul 7 09:03:03.994327 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jul 7 09:03:03.994488 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jul 7 09:03:03.994653 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jul 7 09:03:03.996570 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jul 7 09:03:03.996752 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jul 7 09:03:03.996937 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jul 7 09:03:03.996978 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 7 09:03:03.997000 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 7 09:03:03.997013 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 7 09:03:03.997038 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 7 09:03:03.997057 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 7 09:03:03.997069 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 7 09:03:03.997095 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 7 09:03:03.997116 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 7 09:03:03.997129 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 7 09:03:03.997148 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 7 09:03:03.997161 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 7 09:03:03.997173 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 7 09:03:03.997186 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 7 09:03:03.997199 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 7 09:03:03.997211 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 7 09:03:03.997224 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 7 09:03:03.997237 kernel: iommu: Default domain type: Translated
Jul 7 09:03:03.997249 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 7 09:03:03.997267 kernel: PCI: Using ACPI for IRQ routing
Jul 7 09:03:03.997280 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 7 09:03:03.997292 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 7 09:03:03.997305 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jul 7 09:03:03.997463 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 7 09:03:03.997620 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 7 09:03:03.997776 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 7 09:03:03.997796 kernel: vgaarb: loaded
Jul 7 09:03:03.997816 kernel: clocksource: Switched to clocksource kvm-clock
Jul 7 09:03:03.997829 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 09:03:03.997849 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 09:03:03.997861 kernel: pnp: PnP ACPI init
Jul 7 09:03:03.998467 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 7 09:03:03.998490 kernel: pnp: PnP ACPI: found 5 devices
Jul 7 09:03:03.998504 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 7 09:03:03.998517 kernel: NET: Registered PF_INET protocol family
Jul 7 09:03:03.998530 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 09:03:03.998550 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 7 09:03:03.998563 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 09:03:03.998576 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 7 09:03:03.998588 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jul 7 09:03:03.998601 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 7 09:03:03.998614 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 7 09:03:03.998627 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 7 09:03:03.998639 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 09:03:03.998656 kernel: NET: Registered PF_XDP protocol family
Jul 7 09:03:03.999711 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jul 7 09:03:03.999876 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jul 7 09:03:04.000083 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jul 7 09:03:04.000245 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jul 7 09:03:04.000402 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jul 7 09:03:04.000559 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jul 7 09:03:04.000714 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jul 7 09:03:04.000877 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jul 7 09:03:04.001062 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Jul 7 09:03:04.001242 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Jul 7 09:03:04.001400 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Jul 7 09:03:04.001556 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Jul 7 09:03:04.001714 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Jul 7 09:03:04.001869 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Jul 7 09:03:04.004123 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Jul 7 09:03:04.004310 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Jul 7 09:03:04.004490 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jul 7 09:03:04.004661 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jul 7 09:03:04.004842 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jul 7 09:03:04.005057 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jul 7 09:03:04.005229 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jul 7 09:03:04.005388 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jul 7 09:03:04.005545 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jul 7 09:03:04.005701 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jul 7 09:03:04.005865 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jul 7 09:03:04.006057 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jul 7 09:03:04.006231 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jul 7 09:03:04.006387 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jul 7 09:03:04.006544 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jul 7 09:03:04.006700 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jul 7 09:03:04.006856 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jul 7 09:03:04.007046 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jul 7 09:03:04.007226 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jul 7 09:03:04.007384 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jul 7 09:03:04.007541 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jul 7 09:03:04.007715 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jul 7 09:03:04.007885 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jul 7 09:03:04.008061 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jul 7 09:03:04.008232 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jul 7 09:03:04.008388 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jul 7 09:03:04.008545 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jul 7 09:03:04.008702 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jul 7 09:03:04.008859 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jul 7 09:03:04.009047 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jul 7 09:03:04.009225 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jul 7 09:03:04.009382 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jul 7 09:03:04.009538 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jul 7 09:03:04.009694 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jul 7 09:03:04.009850 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jul 7 09:03:04.010040 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jul 7 09:03:04.010204 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 7 09:03:04.010348 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 7 09:03:04.010491 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 7 09:03:04.010641 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jul 7 09:03:04.010783 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 7 09:03:04.010951 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jul 7 09:03:04.011135 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jul 7 09:03:04.011285 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jul 7 09:03:04.011434 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jul 7 09:03:04.011592 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jul 7 09:03:04.011766 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jul 7 09:03:04.011943 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jul 7 09:03:04.012108 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jul 7 09:03:04.012267 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jul 7 09:03:04.012415 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jul 7 09:03:04.012562 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jul 7 09:03:04.012719 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jul 7 09:03:04.012874 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jul 7 09:03:04.013056 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jul 7 09:03:04.013231 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jul 7 09:03:04.013381 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jul 7 09:03:04.013528 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jul 7 09:03:04.013705 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jul 7 09:03:04.013853 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jul 7 09:03:04.014029 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jul 7 09:03:04.014200 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Jul 7 09:03:04.014349 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Jul 7
09:03:04.014496 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jul 7 09:03:04.014655 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jul 7 09:03:04.014802 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jul 7 09:03:04.014990 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jul 7 09:03:04.015012 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 7 09:03:04.015026 kernel: PCI: CLS 0 bytes, default 64 Jul 7 09:03:04.015040 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 7 09:03:04.015054 kernel: software IO TLB: mapped [mem 0x0000000071000000-0x0000000075000000] (64MB) Jul 7 09:03:04.015067 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 7 09:03:04.015092 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jul 7 09:03:04.015106 kernel: Initialise system trusted keyrings Jul 7 09:03:04.015119 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 7 09:03:04.015140 kernel: Key type asymmetric registered Jul 7 09:03:04.015153 kernel: Asymmetric key parser 'x509' registered Jul 7 09:03:04.015166 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 7 09:03:04.015179 kernel: io scheduler mq-deadline registered Jul 7 09:03:04.015192 kernel: io scheduler kyber registered Jul 7 09:03:04.015205 kernel: io scheduler bfq registered Jul 7 09:03:04.015362 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jul 7 09:03:04.015530 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jul 7 09:03:04.015694 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 09:03:04.015850 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jul 7 09:03:04.016026 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jul 7 09:03:04.016197 kernel: pcieport 
0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 09:03:04.016354 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jul 7 09:03:04.016533 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jul 7 09:03:04.016708 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 09:03:04.016868 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jul 7 09:03:04.017098 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jul 7 09:03:04.017267 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 09:03:04.017448 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jul 7 09:03:04.017594 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jul 7 09:03:04.017771 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 09:03:04.017935 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jul 7 09:03:04.018147 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jul 7 09:03:04.018306 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 09:03:04.018462 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jul 7 09:03:04.018619 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jul 7 09:03:04.018784 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 09:03:04.018969 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jul 7 09:03:04.019147 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jul 7 09:03:04.019308 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 
AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 09:03:04.019329 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 7 09:03:04.019343 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 7 09:03:04.019357 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 7 09:03:04.019377 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 7 09:03:04.019391 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 7 09:03:04.019405 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 7 09:03:04.019418 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 7 09:03:04.019431 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 7 09:03:04.019607 kernel: rtc_cmos 00:03: RTC can wake from S4 Jul 7 09:03:04.019629 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 7 09:03:04.019773 kernel: rtc_cmos 00:03: registered as rtc0 Jul 7 09:03:04.019958 kernel: rtc_cmos 00:03: setting system clock to 2025-07-07T09:03:03 UTC (1751878983) Jul 7 09:03:04.020154 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jul 7 09:03:04.020174 kernel: intel_pstate: CPU model not supported Jul 7 09:03:04.020187 kernel: NET: Registered PF_INET6 protocol family Jul 7 09:03:04.020201 kernel: Segment Routing with IPv6 Jul 7 09:03:04.020214 kernel: In-situ OAM (IOAM) with IPv6 Jul 7 09:03:04.020227 kernel: NET: Registered PF_PACKET protocol family Jul 7 09:03:04.020240 kernel: Key type dns_resolver registered Jul 7 09:03:04.020253 kernel: IPI shorthand broadcast: enabled Jul 7 09:03:04.020276 kernel: sched_clock: Marking stable (3538064627, 230288969)->(3890545680, -122192084) Jul 7 09:03:04.020290 kernel: registered taskstats version 1 Jul 7 09:03:04.020304 kernel: Loading compiled-in X.509 certificates Jul 7 09:03:04.020317 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: 
b8e96f4c6a9e663230fc9c12b186cf91fcc7a64e' Jul 7 09:03:04.020330 kernel: Demotion targets for Node 0: null Jul 7 09:03:04.020343 kernel: Key type .fscrypt registered Jul 7 09:03:04.020356 kernel: Key type fscrypt-provisioning registered Jul 7 09:03:04.020369 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 7 09:03:04.020382 kernel: ima: Allocated hash algorithm: sha1 Jul 7 09:03:04.020400 kernel: ima: No architecture policies found Jul 7 09:03:04.020418 kernel: clk: Disabling unused clocks Jul 7 09:03:04.020431 kernel: Warning: unable to open an initial console. Jul 7 09:03:04.020445 kernel: Freeing unused kernel image (initmem) memory: 54432K Jul 7 09:03:04.020458 kernel: Write protecting the kernel read-only data: 24576k Jul 7 09:03:04.020471 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 7 09:03:04.020484 kernel: Run /init as init process Jul 7 09:03:04.020517 kernel: with arguments: Jul 7 09:03:04.020530 kernel: /init Jul 7 09:03:04.020547 kernel: with environment: Jul 7 09:03:04.020579 kernel: HOME=/ Jul 7 09:03:04.020592 kernel: TERM=linux Jul 7 09:03:04.020605 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 7 09:03:04.020629 systemd[1]: Successfully made /usr/ read-only. Jul 7 09:03:04.020648 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 7 09:03:04.020663 systemd[1]: Detected virtualization kvm. Jul 7 09:03:04.020683 systemd[1]: Detected architecture x86-64. Jul 7 09:03:04.020697 systemd[1]: Running in initrd. Jul 7 09:03:04.020711 systemd[1]: No hostname configured, using default hostname. Jul 7 09:03:04.020726 systemd[1]: Hostname set to . Jul 7 09:03:04.020746 systemd[1]: Initializing machine ID from VM UUID. 
Jul 7 09:03:04.020760 systemd[1]: Queued start job for default target initrd.target.
Jul 7 09:03:04.020774 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 09:03:04.020788 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 09:03:04.020807 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 09:03:04.020822 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 09:03:04.020836 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 09:03:04.020851 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 09:03:04.020867 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 09:03:04.020881 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 09:03:04.020895 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 09:03:04.020927 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 09:03:04.020944 systemd[1]: Reached target paths.target - Path Units.
Jul 7 09:03:04.020958 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 09:03:04.020972 systemd[1]: Reached target swap.target - Swaps.
Jul 7 09:03:04.020986 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 09:03:04.021005 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 09:03:04.021019 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 09:03:04.021033 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 09:03:04.021047 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 7 09:03:04.021083 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 09:03:04.021099 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 09:03:04.021113 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 09:03:04.021130 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 09:03:04.021144 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 09:03:04.021158 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 09:03:04.021173 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 09:03:04.021187 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 7 09:03:04.021207 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 09:03:04.021221 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 09:03:04.021240 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 09:03:04.021254 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 09:03:04.021268 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 09:03:04.021283 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 09:03:04.021302 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 09:03:04.021359 systemd-journald[230]: Collecting audit messages is disabled.
Jul 7 09:03:04.021393 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 09:03:04.021414 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 09:03:04.021428 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 09:03:04.021443 kernel: Bridge firewalling registered
Jul 7 09:03:04.021457 systemd-journald[230]: Journal started
Jul 7 09:03:04.021488 systemd-journald[230]: Runtime Journal (/run/log/journal/8b1d18cfdb1041f19a4603844c703b36) is 4.7M, max 38.2M, 33.4M free.
Jul 7 09:03:03.959702 systemd-modules-load[231]: Inserted module 'overlay'
Jul 7 09:03:04.072864 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 09:03:04.004412 systemd-modules-load[231]: Inserted module 'br_netfilter'
Jul 7 09:03:04.075225 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 09:03:04.076254 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 09:03:04.081299 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 09:03:04.084092 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 09:03:04.086169 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 09:03:04.089091 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 09:03:04.109373 systemd-tmpfiles[253]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 7 09:03:04.113029 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 09:03:04.118817 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 09:03:04.119981 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 09:03:04.127123 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 09:03:04.129327 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 09:03:04.132059 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 09:03:04.160110 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 09:03:04.185084 systemd-resolved[268]: Positive Trust Anchors:
Jul 7 09:03:04.185982 systemd-resolved[268]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 09:03:04.186028 systemd-resolved[268]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 09:03:04.194272 systemd-resolved[268]: Defaulting to hostname 'linux'.
Jul 7 09:03:04.197620 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 09:03:04.198422 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 09:03:04.269997 kernel: SCSI subsystem initialized
Jul 7 09:03:04.281985 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 09:03:04.294952 kernel: iscsi: registered transport (tcp)
Jul 7 09:03:04.322314 kernel: iscsi: registered transport (qla4xxx)
Jul 7 09:03:04.322384 kernel: QLogic iSCSI HBA Driver
Jul 7 09:03:04.348676 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 09:03:04.365726 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 09:03:04.369049 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 09:03:04.428160 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 09:03:04.430763 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 09:03:04.489038 kernel: raid6: sse2x4 gen() 13029 MB/s
Jul 7 09:03:04.506977 kernel: raid6: sse2x2 gen() 9389 MB/s
Jul 7 09:03:04.525672 kernel: raid6: sse2x1 gen() 9536 MB/s
Jul 7 09:03:04.525709 kernel: raid6: using algorithm sse2x4 gen() 13029 MB/s
Jul 7 09:03:04.544668 kernel: raid6: .... xor() 7444 MB/s, rmw enabled
Jul 7 09:03:04.544759 kernel: raid6: using ssse3x2 recovery algorithm
Jul 7 09:03:04.571000 kernel: xor: automatically using best checksumming function avx
Jul 7 09:03:04.763989 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 09:03:04.772781 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 09:03:04.776033 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 09:03:04.808203 systemd-udevd[480]: Using default interface naming scheme 'v255'.
Jul 7 09:03:04.817657 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 09:03:04.821750 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 09:03:04.848464 dracut-pre-trigger[486]: rd.md=0: removing MD RAID activation
Jul 7 09:03:04.880934 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 09:03:04.884157 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 09:03:05.006745 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 09:03:05.010670 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 09:03:05.126005 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Jul 7 09:03:05.146941 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jul 7 09:03:05.165970 kernel: cryptd: max_cpu_qlen set to 1000
Jul 7 09:03:05.172951 kernel: ACPI: bus type USB registered
Jul 7 09:03:05.177013 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jul 7 09:03:05.188073 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 7 09:03:05.188117 kernel: GPT:17805311 != 125829119
Jul 7 09:03:05.188136 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 7 09:03:05.188153 kernel: GPT:17805311 != 125829119
Jul 7 09:03:05.188170 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 7 09:03:05.188186 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 09:03:05.194941 kernel: usbcore: registered new interface driver usbfs
Jul 7 09:03:05.203075 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 09:03:05.203260 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 09:03:05.212700 kernel: usbcore: registered new interface driver hub
Jul 7 09:03:05.212728 kernel: usbcore: registered new device driver usb
Jul 7 09:03:05.211404 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 09:03:05.217177 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 09:03:05.219214 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 7 09:03:05.229949 kernel: AES CTR mode by8 optimization enabled
Jul 7 09:03:05.229987 kernel: libata version 3.00 loaded.
Jul 7 09:03:05.260956 kernel: ahci 0000:00:1f.2: version 3.0
Jul 7 09:03:05.261278 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 7 09:03:05.280862 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jul 7 09:03:05.281164 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jul 7 09:03:05.281362 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 7 09:03:05.300225 kernel: scsi host0: ahci
Jul 7 09:03:05.300495 kernel: scsi host1: ahci
Jul 7 09:03:05.303540 kernel: scsi host2: ahci
Jul 7 09:03:05.307497 kernel: scsi host3: ahci
Jul 7 09:03:05.309941 kernel: scsi host4: ahci
Jul 7 09:03:05.312939 kernel: scsi host5: ahci
Jul 7 09:03:05.314935 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 lpm-pol 0
Jul 7 09:03:05.314977 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 lpm-pol 0
Jul 7 09:03:05.314996 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 lpm-pol 0
Jul 7 09:03:05.315160 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 lpm-pol 0
Jul 7 09:03:05.315184 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 lpm-pol 0
Jul 7 09:03:05.315202 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 lpm-pol 0
Jul 7 09:03:05.343219 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 7 09:03:05.411113 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 09:03:05.433371 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 7 09:03:05.443874 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 7 09:03:05.444712 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 7 09:03:05.460448 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 7 09:03:05.462450 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 09:03:05.480976 disk-uuid[633]: Primary Header is updated.
Jul 7 09:03:05.480976 disk-uuid[633]: Secondary Entries is updated.
Jul 7 09:03:05.480976 disk-uuid[633]: Secondary Header is updated.
Jul 7 09:03:05.485999 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 09:03:05.492951 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 09:03:05.626939 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 7 09:03:05.627000 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 7 09:03:05.627974 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 7 09:03:05.633886 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 7 09:03:05.633944 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 7 09:03:05.633987 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jul 7 09:03:05.664370 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Jul 7 09:03:05.664663 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Jul 7 09:03:05.667942 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jul 7 09:03:05.672358 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Jul 7 09:03:05.672586 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Jul 7 09:03:05.674516 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Jul 7 09:03:05.677437 kernel: hub 1-0:1.0: USB hub found
Jul 7 09:03:05.677693 kernel: hub 1-0:1.0: 4 ports detected
Jul 7 09:03:05.681862 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jul 7 09:03:05.682140 kernel: hub 2-0:1.0: USB hub found
Jul 7 09:03:05.683940 kernel: hub 2-0:1.0: 4 ports detected
Jul 7 09:03:05.704958 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 09:03:05.706370 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 09:03:05.707462 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 09:03:05.709105 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 09:03:05.711583 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 09:03:05.751955 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 09:03:05.915987 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jul 7 09:03:06.056943 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 7 09:03:06.064118 kernel: usbcore: registered new interface driver usbhid
Jul 7 09:03:06.064191 kernel: usbhid: USB HID core driver
Jul 7 09:03:06.071663 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3
Jul 7 09:03:06.071702 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0
Jul 7 09:03:06.496998 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 09:03:06.497085 disk-uuid[634]: The operation has completed successfully.
Jul 7 09:03:06.556688 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 09:03:06.556870 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 09:03:06.601421 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 09:03:06.616952 sh[661]: Success
Jul 7 09:03:06.641370 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 09:03:06.641450 kernel: device-mapper: uevent: version 1.0.3 Jul 7 09:03:06.643040 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 7 09:03:06.655981 kernel: device-mapper: verity: sha256 using shash "sha256-avx" Jul 7 09:03:06.717115 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 7 09:03:06.726048 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 7 09:03:06.729977 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 7 09:03:06.756948 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 7 09:03:06.761446 kernel: BTRFS: device fsid 9d124217-7448-4fc6-a329-8a233bb5a0ac devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (673) Jul 7 09:03:06.761491 kernel: BTRFS info (device dm-0): first mount of filesystem 9d124217-7448-4fc6-a329-8a233bb5a0ac Jul 7 09:03:06.765011 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 7 09:03:06.765051 kernel: BTRFS info (device dm-0): using free-space-tree Jul 7 09:03:06.777800 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 7 09:03:06.778762 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 7 09:03:06.780083 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 7 09:03:06.781098 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 7 09:03:06.785098 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jul 7 09:03:06.813949 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (706) Jul 7 09:03:06.820790 kernel: BTRFS info (device vda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 09:03:06.820832 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 09:03:06.820861 kernel: BTRFS info (device vda6): using free-space-tree Jul 7 09:03:06.833953 kernel: BTRFS info (device vda6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 09:03:06.836721 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 7 09:03:06.841116 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 7 09:03:06.931540 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 09:03:06.936182 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 09:03:06.992347 systemd-networkd[843]: lo: Link UP Jul 7 09:03:06.992362 systemd-networkd[843]: lo: Gained carrier Jul 7 09:03:06.994749 systemd-networkd[843]: Enumeration completed Jul 7 09:03:06.996063 systemd-networkd[843]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 09:03:06.996071 systemd-networkd[843]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 09:03:06.997815 systemd-networkd[843]: eth0: Link UP Jul 7 09:03:06.997822 systemd-networkd[843]: eth0: Gained carrier Jul 7 09:03:06.997835 systemd-networkd[843]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 09:03:06.997927 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 09:03:06.999877 systemd[1]: Reached target network.target - Network. 
Jul 7 09:03:07.031986 systemd-networkd[843]: eth0: DHCPv4 address 10.230.11.74/30, gateway 10.230.11.73 acquired from 10.230.11.73
Jul 7 09:03:07.065753 ignition[765]: Ignition 2.21.0
Jul 7 09:03:07.065788 ignition[765]: Stage: fetch-offline
Jul 7 09:03:07.065894 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Jul 7 09:03:07.065931 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 7 09:03:07.068825 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 09:03:07.066172 ignition[765]: parsed url from cmdline: ""
Jul 7 09:03:07.066179 ignition[765]: no config URL provided
Jul 7 09:03:07.066190 ignition[765]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 09:03:07.066206 ignition[765]: no config at "/usr/lib/ignition/user.ign"
Jul 7 09:03:07.066228 ignition[765]: failed to fetch config: resource requires networking
Jul 7 09:03:07.074155 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 7 09:03:07.066548 ignition[765]: Ignition finished successfully
Jul 7 09:03:07.111178 ignition[853]: Ignition 2.21.0
Jul 7 09:03:07.112497 ignition[853]: Stage: fetch
Jul 7 09:03:07.112803 ignition[853]: no configs at "/usr/lib/ignition/base.d"
Jul 7 09:03:07.112822 ignition[853]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 7 09:03:07.112978 ignition[853]: parsed url from cmdline: ""
Jul 7 09:03:07.112985 ignition[853]: no config URL provided
Jul 7 09:03:07.113008 ignition[853]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 09:03:07.113033 ignition[853]: no config at "/usr/lib/ignition/user.ign"
Jul 7 09:03:07.113237 ignition[853]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jul 7 09:03:07.113291 ignition[853]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jul 7 09:03:07.113422 ignition[853]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jul 7 09:03:07.134862 ignition[853]: GET result: OK
Jul 7 09:03:07.135106 ignition[853]: parsing config with SHA512: 60dd4d17eb3e61cf9acafec0a0a65bfef3f6bcf48553aa1b54f9ab745f1570267c39e9de2cb50a606faefda529b5ff158eed04b976310933391b9a89494e7d5e
Jul 7 09:03:07.140857 unknown[853]: fetched base config from "system"
Jul 7 09:03:07.140880 unknown[853]: fetched base config from "system"
Jul 7 09:03:07.141412 ignition[853]: fetch: fetch complete
Jul 7 09:03:07.140890 unknown[853]: fetched user config from "openstack"
Jul 7 09:03:07.141421 ignition[853]: fetch: fetch passed
Jul 7 09:03:07.144604 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 7 09:03:07.141508 ignition[853]: Ignition finished successfully
Jul 7 09:03:07.148121 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 09:03:07.198724 ignition[859]: Ignition 2.21.0
Jul 7 09:03:07.198751 ignition[859]: Stage: kargs
Jul 7 09:03:07.198983 ignition[859]: no configs at "/usr/lib/ignition/base.d"
Jul 7 09:03:07.199017 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 7 09:03:07.200016 ignition[859]: kargs: kargs passed
Jul 7 09:03:07.201936 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 09:03:07.200095 ignition[859]: Ignition finished successfully
Jul 7 09:03:07.205467 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 09:03:07.243590 ignition[865]: Ignition 2.21.0
Jul 7 09:03:07.243623 ignition[865]: Stage: disks
Jul 7 09:03:07.243860 ignition[865]: no configs at "/usr/lib/ignition/base.d"
Jul 7 09:03:07.243879 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 7 09:03:07.246177 ignition[865]: disks: disks passed
Jul 7 09:03:07.248306 systemd[1]: Finished ignition-disks.service - Ignition (disks).
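The fetch stage above logs the SHA-512 digest of the user_data document it retrieved from the OpenStack metadata endpoint before parsing it. A minimal sketch of that digest computation (the sample payload below is illustrative, not the instance's actual user_data, so its digest will not match the one in the log):

```python
import hashlib


def config_sha512(config_bytes: bytes) -> str:
    """Return the SHA-512 hex digest that Ignition reports for a fetched config."""
    return hashlib.sha512(config_bytes).hexdigest()


# Illustrative payload; on this host the digest was computed over the document
# fetched from http://169.254.169.254/openstack/latest/user_data.
sample = b'{"ignition": {"version": "3.4.0"}}'
print(config_sha512(sample))  # 128 hex characters
```

The digest in the log line lets you confirm after the fact which exact config bytes a boot consumed, by hashing a saved copy of the user_data and comparing.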
Jul 7 09:03:07.246280 ignition[865]: Ignition finished successfully
Jul 7 09:03:07.250446 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 09:03:07.251721 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 09:03:07.253203 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 09:03:07.254741 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 09:03:07.256353 systemd[1]: Reached target basic.target - Basic System.
Jul 7 09:03:07.260117 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 09:03:07.294619 systemd-fsck[873]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Jul 7 09:03:07.298597 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 09:03:07.301898 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 09:03:07.440950 kernel: EXT4-fs (vda9): mounted filesystem df0fa228-af1b-4496-9a54-2d4ccccd27d9 r/w with ordered data mode. Quota mode: none.
Jul 7 09:03:07.442722 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 09:03:07.445176 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 09:03:07.447809 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 09:03:07.451041 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 09:03:07.453421 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 7 09:03:07.460160 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jul 7 09:03:07.462619 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 09:03:07.462693 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 09:03:07.470165 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 09:03:07.476952 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (881)
Jul 7 09:03:07.477353 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 09:03:07.493541 kernel: BTRFS info (device vda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 09:03:07.493596 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 09:03:07.493615 kernel: BTRFS info (device vda6): using free-space-tree
Jul 7 09:03:07.500761 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 09:03:07.558001 initrd-setup-root[909]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 09:03:07.562152 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 7 09:03:07.569513 initrd-setup-root[917]: cut: /sysroot/etc/group: No such file or directory
Jul 7 09:03:07.579542 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 09:03:07.586323 initrd-setup-root[931]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 09:03:07.693184 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 09:03:07.695859 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 09:03:07.697487 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 09:03:07.723943 kernel: BTRFS info (device vda6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 09:03:07.744081 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 09:03:07.754081 ignition[1000]: INFO : Ignition 2.21.0
Jul 7 09:03:07.754081 ignition[1000]: INFO : Stage: mount
Jul 7 09:03:07.756607 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 09:03:07.756607 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 7 09:03:07.756607 ignition[1000]: INFO : mount: mount passed
Jul 7 09:03:07.756607 ignition[1000]: INFO : Ignition finished successfully
Jul 7 09:03:07.760292 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 09:03:07.760947 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 09:03:08.394166 systemd-networkd[843]: eth0: Gained IPv6LL
Jul 7 09:03:08.589996 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 7 09:03:09.901255 systemd-networkd[843]: eth0: Ignoring DHCPv6 address 2a02:1348:179:82d2:24:19ff:fee6:b4a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:82d2:24:19ff:fee6:b4a/64 assigned by NDisc.
Jul 7 09:03:09.901272 systemd-networkd[843]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jul 7 09:03:10.599960 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 7 09:03:14.607001 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 7 09:03:14.614762 coreos-metadata[883]: Jul 07 09:03:14.614 WARN failed to locate config-drive, using the metadata service API instead
Jul 7 09:03:14.639779 coreos-metadata[883]: Jul 07 09:03:14.639 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jul 7 09:03:14.653058 coreos-metadata[883]: Jul 07 09:03:14.652 INFO Fetch successful
Jul 7 09:03:14.654274 coreos-metadata[883]: Jul 07 09:03:14.654 INFO wrote hostname srv-djpnf.gb1.brightbox.com to /sysroot/etc/hostname
Jul 7 09:03:14.657122 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jul 7 09:03:14.657320 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jul 7 09:03:14.661179 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 09:03:14.698810 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 09:03:14.726946 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1016)
Jul 7 09:03:14.727011 kernel: BTRFS info (device vda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 09:03:14.730233 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 09:03:14.730274 kernel: BTRFS info (device vda6): using free-space-tree
Jul 7 09:03:14.737592 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 09:03:14.779406 ignition[1034]: INFO : Ignition 2.21.0
Jul 7 09:03:14.779406 ignition[1034]: INFO : Stage: files
Jul 7 09:03:14.782235 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 09:03:14.782235 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 7 09:03:14.782235 ignition[1034]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 09:03:14.784936 ignition[1034]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 09:03:14.784936 ignition[1034]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 09:03:14.792723 ignition[1034]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 09:03:14.792723 ignition[1034]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 09:03:14.792723 ignition[1034]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 09:03:14.792723 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 7 09:03:14.792723 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jul 7 09:03:14.788313 unknown[1034]: wrote ssh authorized keys file for user: core
Jul 7 09:03:15.075834 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 7 09:03:37.060227 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 7 09:03:37.062621 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 7 09:03:37.062621 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 7 09:03:37.738391 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 7 09:03:38.119431 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 7 09:03:38.121152 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 09:03:38.121152 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 09:03:38.121152 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 09:03:38.121152 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 09:03:38.121152 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 09:03:38.121152 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 09:03:38.121152 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 09:03:38.121152 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 09:03:38.130248 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 09:03:38.130248 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 09:03:38.130248 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 09:03:38.130248 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 09:03:38.130248 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 09:03:38.130248 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jul 7 09:03:38.754470 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 7 09:03:39.829543 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 7 09:03:39.829543 ignition[1034]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 7 09:03:39.833150 ignition[1034]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 09:03:39.834525 ignition[1034]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 09:03:39.834525 ignition[1034]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 7 09:03:39.838537 ignition[1034]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 09:03:39.838537 ignition[1034]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 09:03:39.838537 ignition[1034]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 09:03:39.838537 ignition[1034]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 09:03:39.838537 ignition[1034]: INFO : files: files passed
Jul 7 09:03:39.838537 ignition[1034]: INFO : Ignition finished successfully
Jul 7 09:03:39.839139 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 09:03:39.846193 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 09:03:39.851167 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 09:03:39.869602 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 09:03:39.870571 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 09:03:39.878688 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 09:03:39.880266 initrd-setup-root-after-ignition[1063]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 09:03:39.881404 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 09:03:39.883278 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 09:03:39.884568 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 09:03:39.887143 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 09:03:39.951851 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 09:03:39.952087 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 09:03:39.954797 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 09:03:39.955647 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 09:03:39.957536 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 09:03:39.959149 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 09:03:39.990598 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 09:03:39.995202 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 09:03:40.015050 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 09:03:40.016046 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 09:03:40.017926 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 09:03:40.019665 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 09:03:40.019987 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 09:03:40.021733 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 09:03:40.022738 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 09:03:40.024260 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 09:03:40.025790 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 09:03:40.027295 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 09:03:40.029131 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 09:03:40.031763 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 09:03:40.033641 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 09:03:40.035270 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 7 09:03:40.037126 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 09:03:40.038508 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 09:03:40.039855 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 09:03:40.040188 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 09:03:40.041633 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 09:03:40.042658 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 09:03:40.044147 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 09:03:40.044600 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 09:03:40.051580 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 09:03:40.051825 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 09:03:40.053666 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 09:03:40.053862 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 09:03:40.055791 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 09:03:40.056007 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 09:03:40.058337 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 09:03:40.059982 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 7 09:03:40.060195 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 09:03:40.073127 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 09:03:40.074618 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 7 09:03:40.077216 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 09:03:40.079172 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 7 09:03:40.080166 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 09:03:40.089934 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 7 09:03:40.093601 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 7 09:03:40.100533 ignition[1087]: INFO : Ignition 2.21.0
Jul 7 09:03:40.100533 ignition[1087]: INFO : Stage: umount
Jul 7 09:03:40.104678 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 09:03:40.104678 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jul 7 09:03:40.104678 ignition[1087]: INFO : umount: umount passed
Jul 7 09:03:40.104678 ignition[1087]: INFO : Ignition finished successfully
Jul 7 09:03:40.107237 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 7 09:03:40.107447 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 7 09:03:40.112143 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 7 09:03:40.112240 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 7 09:03:40.113743 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 7 09:03:40.113825 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 7 09:03:40.116360 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 7 09:03:40.116438 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 7 09:03:40.117764 systemd[1]: Stopped target network.target - Network.
Jul 7 09:03:40.119171 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 7 09:03:40.119249 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 09:03:40.123165 systemd[1]: Stopped target paths.target - Path Units.
Jul 7 09:03:40.124482 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 7 09:03:40.124616 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 09:03:40.125980 systemd[1]: Stopped target slices.target - Slice Units.
Jul 7 09:03:40.129652 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 7 09:03:40.130353 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 7 09:03:40.130438 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 09:03:40.131290 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 7 09:03:40.131356 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 09:03:40.132883 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 7 09:03:40.133003 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 7 09:03:40.134863 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 7 09:03:40.134970 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 7 09:03:40.136468 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 7 09:03:40.139227 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 7 09:03:40.141224 systemd-networkd[843]: eth0: DHCPv6 lease lost
Jul 7 09:03:40.142845 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 7 09:03:40.143861 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 7 09:03:40.145856 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 7 09:03:40.148758 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 7 09:03:40.148999 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 7 09:03:40.153473 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 7 09:03:40.153769 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 7 09:03:40.153973 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 7 09:03:40.156195 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 7 09:03:40.158182 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 7 09:03:40.160052 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 7 09:03:40.160135 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 09:03:40.161424 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 7 09:03:40.161498 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 7 09:03:40.165014 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 7 09:03:40.166271 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 7 09:03:40.166339 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 09:03:40.168417 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 09:03:40.168482 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 09:03:40.173886 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 7 09:03:40.173989 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 7 09:03:40.174938 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 7 09:03:40.175007 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 09:03:40.178762 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 09:03:40.181719 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 7 09:03:40.181811 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 7 09:03:40.189699 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 7 09:03:40.196347 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 09:03:40.198737 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 7 09:03:40.198897 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 7 09:03:40.201041 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 7 09:03:40.201174 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 7 09:03:40.202722 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 7 09:03:40.202779 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 09:03:40.204224 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 7 09:03:40.204303 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 09:03:40.206441 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 7 09:03:40.206504 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 7 09:03:40.207847 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 09:03:40.207938 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 09:03:40.209838 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 7 09:03:40.212272 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 7 09:03:40.212347 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 09:03:40.216817 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 7 09:03:40.216882 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 09:03:40.223893 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 09:03:40.223995 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 09:03:40.227728 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 7 09:03:40.227812 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 7 09:03:40.227888 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 7 09:03:40.238310 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 7 09:03:40.238486 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 7 09:03:40.240351 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 7 09:03:40.243668 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 7 09:03:40.271577 systemd[1]: Switching root.
Jul 7 09:03:40.309799 systemd-journald[230]: Journal stopped
Jul 7 09:03:42.053150 systemd-journald[230]: Received SIGTERM from PID 1 (systemd).
Jul 7 09:03:42.053336 kernel: SELinux: policy capability network_peer_controls=1
Jul 7 09:03:42.053374 kernel: SELinux: policy capability open_perms=1
Jul 7 09:03:42.053399 kernel: SELinux: policy capability extended_socket_class=1
Jul 7 09:03:42.053427 kernel: SELinux: policy capability always_check_network=0
Jul 7 09:03:42.053450 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 7 09:03:42.053490 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 7 09:03:42.053509 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 7 09:03:42.053550 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 7 09:03:42.053580 kernel: SELinux: policy capability userspace_initial_context=0
Jul 7 09:03:42.053606 kernel: audit: type=1403 audit(1751879020.735:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 7 09:03:42.053696 systemd[1]: Successfully loaded SELinux policy in 60.594ms.
Jul 7 09:03:42.053753 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 23.109ms.
Jul 7 09:03:42.053782 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 09:03:42.053810 systemd[1]: Detected virtualization kvm.
Jul 7 09:03:42.053835 systemd[1]: Detected architecture x86-64.
Jul 7 09:03:42.053859 systemd[1]: Detected first boot.
Jul 7 09:03:42.053897 systemd[1]: Hostname set to .
Jul 7 09:03:42.054988 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 09:03:42.055025 kernel: Guest personality initialized and is inactive
Jul 7 09:03:42.055057 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 7 09:03:42.055077 zram_generator::config[1130]: No configuration found.
Jul 7 09:03:42.055109 kernel: Initialized host personality
Jul 7 09:03:42.055128 kernel: NET: Registered PF_VSOCK protocol family
Jul 7 09:03:42.055147 systemd[1]: Populated /etc with preset unit settings.
Jul 7 09:03:42.055187 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 7 09:03:42.055210 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 7 09:03:42.055238 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 7 09:03:42.055258 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 7 09:03:42.055278 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 7 09:03:42.055298 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 7 09:03:42.055317 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 7 09:03:42.055343 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 7 09:03:42.055364 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 7 09:03:42.055398 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 7 09:03:42.055434 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 7 09:03:42.055473 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 7 09:03:42.055509 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 09:03:42.055531 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 09:03:42.055551 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 7 09:03:42.055584 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 7 09:03:42.055613 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 7 09:03:42.055635 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 09:03:42.055655 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 7 09:03:42.055674 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 09:03:42.055708 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 09:03:42.055775 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 7 09:03:42.055810 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 7 09:03:42.055854 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 7 09:03:42.055889 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 7 09:03:42.059352 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 09:03:42.059393 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 09:03:42.059421 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 09:03:42.059442 systemd[1]: Reached target swap.target - Swaps.
Jul 7 09:03:42.059462 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 7 09:03:42.059496 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 7 09:03:42.059518 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 7 09:03:42.059538 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 09:03:42.059564 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 09:03:42.059585 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 09:03:42.059605 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 7 09:03:42.059625 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 7 09:03:42.059645 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 7 09:03:42.059664 systemd[1]: Mounting media.mount - External Media Directory...
Jul 7 09:03:42.059700 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 09:03:42.059727 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 7 09:03:42.059753 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 7 09:03:42.059793 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 7 09:03:42.059814 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 7 09:03:42.059834 systemd[1]: Reached target machines.target - Containers.
Jul 7 09:03:42.059853 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 7 09:03:42.059878 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 09:03:42.059910 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 09:03:42.060713 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 7 09:03:42.060737 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 09:03:42.060766 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 09:03:42.060787 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 09:03:42.060807 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 7 09:03:42.060827 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 09:03:42.060866 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 7 09:03:42.060887 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 7 09:03:42.064955 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 7 09:03:42.064988 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 7 09:03:42.065009 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 7 09:03:42.065030 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 09:03:42.065063 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 09:03:42.065084 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 09:03:42.065104 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 09:03:42.065132 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 7 09:03:42.065173 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 7 09:03:42.065196 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 09:03:42.065230 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 7 09:03:42.065251 systemd[1]: Stopped verity-setup.service.
Jul 7 09:03:42.065278 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 09:03:42.065308 kernel: loop: module loaded
Jul 7 09:03:42.065340 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 7 09:03:42.065361 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 7 09:03:42.065387 systemd[1]: Mounted media.mount - External Media Directory.
Jul 7 09:03:42.065420 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 7 09:03:42.065442 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 7 09:03:42.065462 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 7 09:03:42.065482 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 09:03:42.065501 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 7 09:03:42.065521 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 7 09:03:42.065541 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 09:03:42.065560 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 09:03:42.065580 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 09:03:42.065979 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 09:03:42.066006 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 09:03:42.066042 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 09:03:42.066072 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 09:03:42.066094 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 7 09:03:42.066114 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 7 09:03:42.066136 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 7 09:03:42.066164 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 09:03:42.066184 kernel: ACPI: bus type drm_connector registered
Jul 7 09:03:42.066216 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 7 09:03:42.066285 systemd-journald[1224]: Collecting audit messages is disabled.
Jul 7 09:03:42.066352 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 7 09:03:42.066388 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 09:03:42.066411 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 7 09:03:42.066432 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 09:03:42.066452 kernel: fuse: init (API version 7.41)
Jul 7 09:03:42.066473 systemd-journald[1224]: Journal started
Jul 7 09:03:42.066519 systemd-journald[1224]: Runtime Journal (/run/log/journal/8b1d18cfdb1041f19a4603844c703b36) is 4.7M, max 38.2M, 33.4M free.
Jul 7 09:03:42.079387 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 7 09:03:42.079459 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 09:03:41.598925 systemd[1]: Queued start job for default target multi-user.target.
Jul 7 09:03:41.623727 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 7 09:03:41.624613 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 7 09:03:42.086844 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 09:03:42.105693 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 7 09:03:42.105758 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 09:03:42.109995 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 7 09:03:42.111175 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 09:03:42.112090 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 09:03:42.114187 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 7 09:03:42.114481 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 7 09:03:42.116440 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 09:03:42.119244 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 7 09:03:42.120208 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 7 09:03:42.122248 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 7 09:03:42.152235 kernel: loop0: detected capacity change from 0 to 8
Jul 7 09:03:42.175624 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 7 09:03:42.174168 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 7 09:03:42.174956 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 09:03:42.180570 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 7 09:03:42.189181 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 7 09:03:42.196449 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 7 09:03:42.203164 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 7 09:03:42.208122 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 7 09:03:42.224257 kernel: loop1: detected capacity change from 0 to 146240
Jul 7 09:03:42.245498 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 09:03:42.248443 systemd-journald[1224]: Time spent on flushing to /var/log/journal/8b1d18cfdb1041f19a4603844c703b36 is 53.447ms for 1171 entries.
Jul 7 09:03:42.248443 systemd-journald[1224]: System Journal (/var/log/journal/8b1d18cfdb1041f19a4603844c703b36) is 8M, max 584.8M, 576.8M free.
Jul 7 09:03:42.316067 systemd-journald[1224]: Received client request to flush runtime journal.
Jul 7 09:03:42.316126 kernel: loop2: detected capacity change from 0 to 113872
Jul 7 09:03:42.272068 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 7 09:03:42.320544 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 7 09:03:42.356122 kernel: loop3: detected capacity change from 0 to 224512
Jul 7 09:03:42.358016 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 7 09:03:42.364443 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 09:03:42.411645 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 09:03:42.457999 kernel: loop4: detected capacity change from 0 to 8
Jul 7 09:03:42.462489 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Jul 7 09:03:42.462516 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Jul 7 09:03:42.467388 kernel: loop5: detected capacity change from 0 to 146240
Jul 7 09:03:42.480839 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 09:03:42.521138 kernel: loop6: detected capacity change from 0 to 113872
Jul 7 09:03:42.551892 kernel: loop7: detected capacity change from 0 to 224512
Jul 7 09:03:42.601938 (sd-merge)[1290]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jul 7 09:03:42.603608 (sd-merge)[1290]: Merged extensions into '/usr'.
Jul 7 09:03:42.613940 systemd[1]: Reload requested from client PID 1248 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 7 09:03:42.613981 systemd[1]: Reloading...
Jul 7 09:03:42.781673 zram_generator::config[1317]: No configuration found.
Jul 7 09:03:42.826957 ldconfig[1244]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 7 09:03:42.982031 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 09:03:43.102906 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 7 09:03:43.104192 systemd[1]: Reloading finished in 489 ms.
Jul 7 09:03:43.130616 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 7 09:03:43.131993 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 7 09:03:43.144516 systemd[1]: Starting ensure-sysext.service...
Jul 7 09:03:43.147161 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 09:03:43.176761 systemd[1]: Reload requested from client PID 1373 ('systemctl') (unit ensure-sysext.service)...
Jul 7 09:03:43.176789 systemd[1]: Reloading...
Jul 7 09:03:43.209555 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 7 09:03:43.210069 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 7 09:03:43.210543 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 7 09:03:43.211205 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 7 09:03:43.213177 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 7 09:03:43.213669 systemd-tmpfiles[1374]: ACLs are not supported, ignoring.
Jul 7 09:03:43.213891 systemd-tmpfiles[1374]: ACLs are not supported, ignoring.
Jul 7 09:03:43.221741 systemd-tmpfiles[1374]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 09:03:43.221847 systemd-tmpfiles[1374]: Skipping /boot
Jul 7 09:03:43.244151 systemd-tmpfiles[1374]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 09:03:43.244270 systemd-tmpfiles[1374]: Skipping /boot
Jul 7 09:03:43.282976 zram_generator::config[1401]: No configuration found.
Jul 7 09:03:43.430062 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 09:03:43.552641 systemd[1]: Reloading finished in 375 ms.
Jul 7 09:03:43.575916 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 7 09:03:43.596462 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 09:03:43.607337 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 7 09:03:43.612611 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 7 09:03:43.619551 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 7 09:03:43.625355 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 09:03:43.632090 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 09:03:43.636767 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 7 09:03:43.642970 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 09:03:43.643262 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 09:03:43.648320 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 09:03:43.653448 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 09:03:43.665971 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 09:03:43.667774 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 09:03:43.668042 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 09:03:43.668213 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 09:03:43.674487 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 09:03:43.674778 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 09:03:43.678111 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 09:03:43.678268 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 09:03:43.682404 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 7 09:03:43.683976 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 09:03:43.690649 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 09:03:43.694075 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 09:03:43.708406 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 09:03:43.710190 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 09:03:43.710370 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 7 09:03:43.710624 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 7 09:03:43.713317 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 09:03:43.714986 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 09:03:43.716389 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 09:03:43.716642 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 09:03:43.733488 systemd[1]: Finished ensure-sysext.service.
Jul 7 09:03:43.736607 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 7 09:03:43.742623 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 09:03:43.749307 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 7 09:03:43.750526 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 7 09:03:43.753025 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 7 09:03:43.763098 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 7 09:03:43.768693 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 7 09:03:43.770733 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 09:03:43.771301 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 09:03:43.773748 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 09:03:43.774139 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 09:03:43.779222 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 09:03:43.792709 augenrules[1499]: No rules
Jul 7 09:03:43.793999 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 09:03:43.796030 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 7 09:03:43.812617 systemd-udevd[1464]: Using default interface naming scheme 'v255'.
Jul 7 09:03:43.816582 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 7 09:03:43.832378 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 7 09:03:43.854763 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 09:03:43.860241 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 09:03:44.062406 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 7 09:03:44.064069 systemd[1]: Reached target time-set.target - System Time Set.
Jul 7 09:03:44.083451 systemd-networkd[1513]: lo: Link UP
Jul 7 09:03:44.084949 systemd-networkd[1513]: lo: Gained carrier
Jul 7 09:03:44.089254 systemd-networkd[1513]: Enumeration completed
Jul 7 09:03:44.089384 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 09:03:44.094035 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 7 09:03:44.097231 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 7 09:03:44.115238 systemd-resolved[1462]: Positive Trust Anchors:
Jul 7 09:03:44.115262 systemd-resolved[1462]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 09:03:44.115304 systemd-resolved[1462]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 09:03:44.125159 systemd-resolved[1462]: Using system hostname 'srv-djpnf.gb1.brightbox.com'.
Jul 7 09:03:44.129507 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 09:03:44.131091 systemd[1]: Reached target network.target - Network.
Jul 7 09:03:44.133020 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 09:03:44.133776 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 09:03:44.134598 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 7 09:03:44.135464 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 7 09:03:44.137027 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jul 7 09:03:44.137998 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 7 09:03:44.139164 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 7 09:03:44.140974 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 7 09:03:44.141808 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 7 09:03:44.141867 systemd[1]: Reached target paths.target - Path Units.
Jul 7 09:03:44.142622 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 09:03:44.144814 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 7 09:03:44.148849 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 7 09:03:44.157827 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 7 09:03:44.158899 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 7 09:03:44.160389 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 7 09:03:44.171091 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 7 09:03:44.172361 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 7 09:03:44.175223 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 7 09:03:44.176256 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 7 09:03:44.180621 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 7 09:03:44.183094 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 09:03:44.184090 systemd[1]: Reached target basic.target - Basic System.
Jul 7 09:03:44.185005 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 7 09:03:44.185065 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 7 09:03:44.187648 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 7 09:03:44.194199 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 7 09:03:44.198206 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 7 09:03:44.212249 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 7 09:03:44.216497 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 7 09:03:44.222984 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 7 09:03:44.221191 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 7 09:03:44.223019 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 7 09:03:44.227821 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jul 7 09:03:44.234816 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 7 09:03:44.241131 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 7 09:03:44.245214 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 7 09:03:44.247639 jq[1550]: false
Jul 7 09:03:44.257602 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 7 09:03:44.263646 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 7 09:03:44.267523 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 7 09:03:44.268194 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 7 09:03:44.270193 systemd[1]: Starting update-engine.service - Update Engine...
Jul 7 09:03:44.278829 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Refreshing passwd entry cache
Jul 7 09:03:44.279154 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 7 09:03:44.281986 oslogin_cache_refresh[1554]: Refreshing passwd entry cache
Jul 7 09:03:44.285331 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 7 09:03:44.286598 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 7 09:03:44.288150 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 7 09:03:44.290518 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 7 09:03:44.290848 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 7 09:03:44.294565 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Failure getting users, quitting
Jul 7 09:03:44.295866 oslogin_cache_refresh[1554]: Failure getting users, quitting
Jul 7 09:03:44.297044 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 7 09:03:44.297044 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Refreshing group entry cache
Jul 7 09:03:44.297044 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Failure getting groups, quitting
Jul 7 09:03:44.297044 google_oslogin_nss_cache[1554]: oslogin_cache_refresh[1554]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Jul 7 09:03:44.295899 oslogin_cache_refresh[1554]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Jul 7 09:03:44.295987 oslogin_cache_refresh[1554]: Refreshing group entry cache Jul 7 09:03:44.296749 oslogin_cache_refresh[1554]: Failure getting groups, quitting Jul 7 09:03:44.296764 oslogin_cache_refresh[1554]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 09:03:44.302890 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 7 09:03:44.305013 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 7 09:03:44.345440 extend-filesystems[1552]: Found /dev/vda6 Jul 7 09:03:44.355896 extend-filesystems[1552]: Found /dev/vda9 Jul 7 09:03:44.356488 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 09:03:44.371953 extend-filesystems[1552]: Checking size of /dev/vda9 Jul 7 09:03:44.356799 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 09:03:44.379092 update_engine[1561]: I20250707 09:03:44.373546 1561 main.cc:92] Flatcar Update Engine starting Jul 7 09:03:44.379484 jq[1563]: true Jul 7 09:03:44.377402 (ntainerd)[1583]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 09:03:44.396237 tar[1565]: linux-amd64/LICENSE Jul 7 09:03:44.396237 tar[1565]: linux-amd64/helm Jul 7 09:03:44.434704 dbus-daemon[1548]: [system] SELinux support is enabled Jul 7 09:03:44.434991 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 09:03:44.440547 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 09:03:44.440588 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jul 7 09:03:44.442357 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 09:03:44.442387 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 09:03:44.454098 jq[1593]: true Jul 7 09:03:44.454240 extend-filesystems[1552]: Resized partition /dev/vda9 Jul 7 09:03:44.477820 extend-filesystems[1598]: resize2fs 1.47.2 (1-Jan-2025) Jul 7 09:03:44.493658 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jul 7 09:03:44.495476 systemd[1]: Started update-engine.service - Update Engine. Jul 7 09:03:44.501003 update_engine[1561]: I20250707 09:03:44.496068 1561 update_check_scheduler.cc:74] Next update check in 4m8s Jul 7 09:03:44.510144 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 09:03:44.548486 systemd-networkd[1513]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 09:03:44.548498 systemd-networkd[1513]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 09:03:44.588279 dbus-daemon[1548]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1513 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 7 09:03:44.554649 systemd-networkd[1513]: eth0: Link UP Jul 7 09:03:44.556258 systemd-networkd[1513]: eth0: Gained carrier Jul 7 09:03:44.556330 systemd-networkd[1513]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 7 09:03:44.587880 systemd-networkd[1513]: eth0: DHCPv4 address 10.230.11.74/30, gateway 10.230.11.73 acquired from 10.230.11.73 Jul 7 09:03:44.607580 systemd-timesyncd[1490]: Network configuration changed, trying to establish connection. Jul 7 09:03:44.608089 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 7 09:03:44.648392 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 7 09:03:44.653112 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 09:03:44.660627 bash[1612]: Updated "/home/core/.ssh/authorized_keys" Jul 7 09:03:44.659038 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 09:03:44.673131 systemd[1]: Starting sshkeys.service... Jul 7 09:03:44.754030 systemd-logind[1560]: New seat seat0. Jul 7 09:03:44.755156 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 09:03:44.783228 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 09:03:44.798773 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 7 09:03:44.804642 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 7 09:03:44.880140 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jul 7 09:03:44.880008 locksmithd[1613]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 09:03:44.890939 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 7 09:03:44.899898 extend-filesystems[1598]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 7 09:03:44.899898 extend-filesystems[1598]: old_desc_blocks = 1, new_desc_blocks = 8 Jul 7 09:03:44.899898 extend-filesystems[1598]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. 
Jul 7 09:03:44.899234 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 09:03:44.911611 extend-filesystems[1552]: Resized filesystem in /dev/vda9 Jul 7 09:03:44.902515 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 09:03:44.982568 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 7 09:03:44.986301 dbus-daemon[1548]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 7 09:03:44.987556 dbus-daemon[1548]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1617 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 7 09:03:44.997531 systemd[1]: Starting polkit.service - Authorization Manager... Jul 7 09:03:45.072269 kernel: mousedev: PS/2 mouse device common for all mice Jul 7 09:03:45.148441 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jul 7 09:03:45.156694 containerd[1583]: time="2025-07-07T09:03:45Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 7 09:03:45.167947 containerd[1583]: time="2025-07-07T09:03:45.166316915Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 7 09:03:45.180861 polkitd[1637]: Started polkitd version 126 Jul 7 09:03:45.190211 kernel: ACPI: button: Power Button [PWRF] Jul 7 09:03:45.192020 polkitd[1637]: Loading rules from directory /etc/polkit-1/rules.d Jul 7 09:03:45.192457 polkitd[1637]: Loading rules from directory /run/polkit-1/rules.d Jul 7 09:03:45.192531 polkitd[1637]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 7 09:03:45.192886 polkitd[1637]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jul 7 
09:03:45.192960 polkitd[1637]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 7 09:03:45.193028 polkitd[1637]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 7 09:03:45.197795 polkitd[1637]: Finished loading, compiling and executing 2 rules Jul 7 09:03:45.199099 systemd[1]: Started polkit.service - Authorization Manager. Jul 7 09:03:45.203177 dbus-daemon[1548]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 7 09:03:45.204825 containerd[1583]: time="2025-07-07T09:03:45.204505715Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="20.939µs" Jul 7 09:03:45.205074 polkitd[1637]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 7 09:03:45.209322 containerd[1583]: time="2025-07-07T09:03:45.208115828Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 7 09:03:45.209322 containerd[1583]: time="2025-07-07T09:03:45.208159266Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 7 09:03:45.209322 containerd[1583]: time="2025-07-07T09:03:45.208461183Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 7 09:03:45.209322 containerd[1583]: time="2025-07-07T09:03:45.208488754Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 7 09:03:45.209322 containerd[1583]: time="2025-07-07T09:03:45.208536045Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 09:03:45.209322 containerd[1583]: time="2025-07-07T09:03:45.208681706Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 
09:03:45.209322 containerd[1583]: time="2025-07-07T09:03:45.208702082Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 09:03:45.211294 containerd[1583]: time="2025-07-07T09:03:45.211258239Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 09:03:45.211664 containerd[1583]: time="2025-07-07T09:03:45.211616454Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 09:03:45.213036 containerd[1583]: time="2025-07-07T09:03:45.212415034Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 09:03:45.213036 containerd[1583]: time="2025-07-07T09:03:45.212443978Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 7 09:03:45.214492 containerd[1583]: time="2025-07-07T09:03:45.214461525Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 7 09:03:45.215830 containerd[1583]: time="2025-07-07T09:03:45.215787143Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 09:03:45.216296 containerd[1583]: time="2025-07-07T09:03:45.216265378Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 09:03:45.217335 containerd[1583]: time="2025-07-07T09:03:45.216376186Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 7 09:03:45.217335 containerd[1583]: 
time="2025-07-07T09:03:45.216442058Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 7 09:03:45.217335 containerd[1583]: time="2025-07-07T09:03:45.216747528Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 7 09:03:45.217335 containerd[1583]: time="2025-07-07T09:03:45.216841015Z" level=info msg="metadata content store policy set" policy=shared Jul 7 09:03:45.224209 containerd[1583]: time="2025-07-07T09:03:45.224176464Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 7 09:03:45.224602 containerd[1583]: time="2025-07-07T09:03:45.224573134Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 7 09:03:45.224748 containerd[1583]: time="2025-07-07T09:03:45.224720132Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 7 09:03:45.225113 containerd[1583]: time="2025-07-07T09:03:45.225083887Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 7 09:03:45.225977 containerd[1583]: time="2025-07-07T09:03:45.225526136Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 7 09:03:45.225977 containerd[1583]: time="2025-07-07T09:03:45.225557572Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 7 09:03:45.225977 containerd[1583]: time="2025-07-07T09:03:45.225579066Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 7 09:03:45.225977 containerd[1583]: time="2025-07-07T09:03:45.225598204Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 7 09:03:45.225977 containerd[1583]: time="2025-07-07T09:03:45.225617713Z" level=info 
msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 7 09:03:45.225977 containerd[1583]: time="2025-07-07T09:03:45.225636532Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 7 09:03:45.225977 containerd[1583]: time="2025-07-07T09:03:45.225652874Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 7 09:03:45.225977 containerd[1583]: time="2025-07-07T09:03:45.225686175Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 7 09:03:45.225977 containerd[1583]: time="2025-07-07T09:03:45.225891311Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 7 09:03:45.228938 containerd[1583]: time="2025-07-07T09:03:45.227303018Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 7 09:03:45.228938 containerd[1583]: time="2025-07-07T09:03:45.227338058Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 7 09:03:45.228938 containerd[1583]: time="2025-07-07T09:03:45.227371005Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 7 09:03:45.228938 containerd[1583]: time="2025-07-07T09:03:45.227388463Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 7 09:03:45.228938 containerd[1583]: time="2025-07-07T09:03:45.227445836Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 7 09:03:45.228938 containerd[1583]: time="2025-07-07T09:03:45.227467348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 7 09:03:45.228938 containerd[1583]: time="2025-07-07T09:03:45.227499930Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases 
type=io.containerd.grpc.v1 Jul 7 09:03:45.228938 containerd[1583]: time="2025-07-07T09:03:45.227540292Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 7 09:03:45.228938 containerd[1583]: time="2025-07-07T09:03:45.227561191Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 7 09:03:45.228938 containerd[1583]: time="2025-07-07T09:03:45.227578775Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 7 09:03:45.228938 containerd[1583]: time="2025-07-07T09:03:45.227697650Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 7 09:03:45.228938 containerd[1583]: time="2025-07-07T09:03:45.227739329Z" level=info msg="Start snapshots syncer" Jul 7 09:03:45.228938 containerd[1583]: time="2025-07-07T09:03:45.227803744Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 7 09:03:45.230795 containerd[1583]: time="2025-07-07T09:03:45.230741458Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 7 09:03:45.232003 containerd[1583]: time="2025-07-07T09:03:45.231962791Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 7 09:03:45.232230 containerd[1583]: time="2025-07-07T09:03:45.232201026Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 7 09:03:45.234469 containerd[1583]: time="2025-07-07T09:03:45.233167803Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 7 09:03:45.234469 containerd[1583]: time="2025-07-07T09:03:45.233224347Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 7 09:03:45.234469 containerd[1583]: time="2025-07-07T09:03:45.233247867Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 7 09:03:45.234469 containerd[1583]: time="2025-07-07T09:03:45.233277789Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 7 09:03:45.234469 containerd[1583]: time="2025-07-07T09:03:45.233304183Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 7 09:03:45.234469 containerd[1583]: time="2025-07-07T09:03:45.233323296Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 7 09:03:45.234469 containerd[1583]: time="2025-07-07T09:03:45.233352750Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 7 09:03:45.234469 containerd[1583]: time="2025-07-07T09:03:45.233392648Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 7 09:03:45.234469 containerd[1583]: time="2025-07-07T09:03:45.233426645Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 7 09:03:45.234469 containerd[1583]: time="2025-07-07T09:03:45.233444999Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 7 09:03:45.234469 containerd[1583]: time="2025-07-07T09:03:45.233517130Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 09:03:45.234469 containerd[1583]: time="2025-07-07T09:03:45.233541868Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 09:03:45.234469 containerd[1583]: time="2025-07-07T09:03:45.233556018Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 09:03:45.235045 containerd[1583]: time="2025-07-07T09:03:45.233571775Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 09:03:45.235045 containerd[1583]: time="2025-07-07T09:03:45.233585244Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 7 09:03:45.235045 containerd[1583]: time="2025-07-07T09:03:45.233600812Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 7 09:03:45.235045 containerd[1583]: time="2025-07-07T09:03:45.233638430Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 7 09:03:45.235045 containerd[1583]: time="2025-07-07T09:03:45.233676851Z" level=info msg="runtime interface created" Jul 7 09:03:45.235045 containerd[1583]: time="2025-07-07T09:03:45.233689472Z" level=info msg="created NRI interface" Jul 7 09:03:45.235045 containerd[1583]: time="2025-07-07T09:03:45.233706364Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 7 09:03:45.235045 containerd[1583]: time="2025-07-07T09:03:45.233734291Z" level=info msg="Connect containerd service" Jul 7 09:03:45.235045 containerd[1583]: time="2025-07-07T09:03:45.233773910Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 09:03:45.239581 containerd[1583]: 
time="2025-07-07T09:03:45.238546740Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 09:03:45.259513 systemd-hostnamed[1617]: Hostname set to (static) Jul 7 09:03:45.330135 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 7 09:03:45.330554 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 7 09:03:45.419825 sshd_keygen[1586]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 09:03:45.465036 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 09:03:45.470139 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 09:03:45.508329 containerd[1583]: time="2025-07-07T09:03:45.508042653Z" level=info msg="Start subscribing containerd event" Jul 7 09:03:45.508329 containerd[1583]: time="2025-07-07T09:03:45.508119271Z" level=info msg="Start recovering state" Jul 7 09:03:45.508661 containerd[1583]: time="2025-07-07T09:03:45.508501996Z" level=info msg="Start event monitor" Jul 7 09:03:45.508661 containerd[1583]: time="2025-07-07T09:03:45.508529858Z" level=info msg="Start cni network conf syncer for default" Jul 7 09:03:45.508661 containerd[1583]: time="2025-07-07T09:03:45.508577677Z" level=info msg="Start streaming server" Jul 7 09:03:45.508661 containerd[1583]: time="2025-07-07T09:03:45.508613221Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 7 09:03:45.508661 containerd[1583]: time="2025-07-07T09:03:45.508634758Z" level=info msg="runtime interface starting up..." Jul 7 09:03:45.509220 containerd[1583]: time="2025-07-07T09:03:45.508974022Z" level=info msg="starting plugins..." 
Jul 7 09:03:45.509220 containerd[1583]: time="2025-07-07T09:03:45.509018023Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 7 09:03:45.509709 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 09:03:45.511046 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 09:03:45.516268 containerd[1583]: time="2025-07-07T09:03:45.516165921Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 09:03:45.517282 containerd[1583]: time="2025-07-07T09:03:45.517241579Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 09:03:45.517439 containerd[1583]: time="2025-07-07T09:03:45.517366051Z" level=info msg="containerd successfully booted in 0.362843s" Jul 7 09:03:45.518992 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 09:03:45.520762 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 09:03:45.572403 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 09:03:45.584379 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 09:03:45.596080 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 7 09:03:45.598340 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 09:03:45.648327 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 09:03:45.835458 systemd-networkd[1513]: eth0: Gained IPv6LL Jul 7 09:03:45.838152 systemd-timesyncd[1490]: Network configuration changed, trying to establish connection. Jul 7 09:03:45.844099 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 09:03:45.857303 systemd-logind[1560]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 7 09:03:45.934918 systemd-logind[1560]: Watching system buttons on /dev/input/event3 (Power Button) Jul 7 09:03:45.945166 systemd[1]: Reached target network-online.target - Network is Online. 
Jul 7 09:03:45.986464 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 09:03:46.082417 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 09:03:46.222122 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 09:03:46.297082 tar[1565]: linux-amd64/README.md Jul 7 09:03:46.306438 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 09:03:46.330472 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 09:03:47.222448 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 7 09:03:47.222554 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 7 09:03:47.273703 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 09:03:47.288406 (kubelet)[1718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 09:03:47.294223 systemd-timesyncd[1490]: Network configuration changed, trying to establish connection. Jul 7 09:03:47.295761 systemd-networkd[1513]: eth0: Ignoring DHCPv6 address 2a02:1348:179:82d2:24:19ff:fee6:b4a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:82d2:24:19ff:fee6:b4a/64 assigned by NDisc. Jul 7 09:03:47.295772 systemd-networkd[1513]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
Jul 7 09:03:47.894248 kubelet[1718]: E0707 09:03:47.894166 1718 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 09:03:47.897418 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 09:03:47.897844 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 09:03:47.898742 systemd[1]: kubelet.service: Consumed 1.106s CPU time, 263.6M memory peak. Jul 7 09:03:48.779756 systemd-timesyncd[1490]: Network configuration changed, trying to establish connection. Jul 7 09:03:49.235950 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 7 09:03:49.247940 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 7 09:03:49.765535 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 09:03:49.768272 systemd[1]: Started sshd@0-10.230.11.74:22-139.178.89.65:58092.service - OpenSSH per-connection server daemon (139.178.89.65:58092). Jul 7 09:03:50.658949 login[1686]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 7 09:03:50.680740 login[1685]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 7 09:03:50.691619 systemd-logind[1560]: New session 1 of user core. Jul 7 09:03:50.695607 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 09:03:50.697879 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 09:03:50.702964 systemd-logind[1560]: New session 2 of user core. 
Jul 7 09:03:50.714756 sshd[1729]: Accepted publickey for core from 139.178.89.65 port 58092 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:03:50.720083 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:03:50.728724 systemd-logind[1560]: New session 3 of user core. Jul 7 09:03:50.738951 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 09:03:50.743259 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 09:03:50.763875 (systemd)[1737]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 09:03:50.768300 systemd-logind[1560]: New session c1 of user core. Jul 7 09:03:50.978835 systemd[1737]: Queued start job for default target default.target. Jul 7 09:03:50.985957 systemd[1737]: Created slice app.slice - User Application Slice. Jul 7 09:03:50.986001 systemd[1737]: Reached target paths.target - Paths. Jul 7 09:03:50.986080 systemd[1737]: Reached target timers.target - Timers. Jul 7 09:03:50.988457 systemd[1737]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 09:03:51.020940 systemd[1737]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 09:03:51.021167 systemd[1737]: Reached target sockets.target - Sockets. Jul 7 09:03:51.021257 systemd[1737]: Reached target basic.target - Basic System. Jul 7 09:03:51.021332 systemd[1737]: Reached target default.target - Main User Target. Jul 7 09:03:51.021395 systemd[1737]: Startup finished in 242ms. Jul 7 09:03:51.021482 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 09:03:51.035669 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 09:03:51.038856 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 09:03:51.040945 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jul 7 09:03:51.684416 systemd[1]: Started sshd@1-10.230.11.74:22-139.178.89.65:58094.service - OpenSSH per-connection server daemon (139.178.89.65:58094). Jul 7 09:03:52.599382 sshd[1771]: Accepted publickey for core from 139.178.89.65 port 58094 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:03:52.601745 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:03:52.611587 systemd-logind[1560]: New session 4 of user core. Jul 7 09:03:52.620184 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 09:03:53.219965 sshd[1773]: Connection closed by 139.178.89.65 port 58094 Jul 7 09:03:53.221091 sshd-session[1771]: pam_unix(sshd:session): session closed for user core Jul 7 09:03:53.225979 systemd-logind[1560]: Session 4 logged out. Waiting for processes to exit. Jul 7 09:03:53.226818 systemd[1]: sshd@1-10.230.11.74:22-139.178.89.65:58094.service: Deactivated successfully. Jul 7 09:03:53.229375 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 09:03:53.232466 systemd-logind[1560]: Removed session 4. 
Jul 7 09:03:53.258070 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 7 09:03:53.265943 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jul 7 09:03:53.278092 coreos-metadata[1632]: Jul 07 09:03:53.277 WARN failed to locate config-drive, using the metadata service API instead
Jul 7 09:03:53.283374 coreos-metadata[1547]: Jul 07 09:03:53.283 WARN failed to locate config-drive, using the metadata service API instead
Jul 7 09:03:53.303704 coreos-metadata[1632]: Jul 07 09:03:53.303 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jul 7 09:03:53.308409 coreos-metadata[1547]: Jul 07 09:03:53.308 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jul 7 09:03:53.315169 coreos-metadata[1547]: Jul 07 09:03:53.315 INFO Fetch failed with 404: resource not found
Jul 7 09:03:53.315410 coreos-metadata[1547]: Jul 07 09:03:53.315 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jul 7 09:03:53.316337 coreos-metadata[1547]: Jul 07 09:03:53.316 INFO Fetch successful
Jul 7 09:03:53.316607 coreos-metadata[1547]: Jul 07 09:03:53.316 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jul 7 09:03:53.329272 coreos-metadata[1547]: Jul 07 09:03:53.329 INFO Fetch successful
Jul 7 09:03:53.329567 coreos-metadata[1547]: Jul 07 09:03:53.329 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jul 7 09:03:53.333074 coreos-metadata[1632]: Jul 07 09:03:53.333 INFO Fetch successful
Jul 7 09:03:53.333187 coreos-metadata[1632]: Jul 07 09:03:53.333 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jul 7 09:03:53.342414 coreos-metadata[1547]: Jul 07 09:03:53.342 INFO Fetch successful
Jul 7 09:03:53.342665 coreos-metadata[1547]: Jul 07 09:03:53.342 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jul 7 09:03:53.358472 coreos-metadata[1547]: Jul 07 09:03:53.358 INFO Fetch successful
Jul 7 09:03:53.358832 coreos-metadata[1547]: Jul 07 09:03:53.358 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jul 7 09:03:53.363249 coreos-metadata[1632]: Jul 07 09:03:53.363 INFO Fetch successful
Jul 7 09:03:53.366669 unknown[1632]: wrote ssh authorized keys file for user: core
Jul 7 09:03:53.376054 coreos-metadata[1547]: Jul 07 09:03:53.375 INFO Fetch successful
Jul 7 09:03:53.389607 systemd[1]: Started sshd@2-10.230.11.74:22-139.178.89.65:58100.service - OpenSSH per-connection server daemon (139.178.89.65:58100).
Jul 7 09:03:53.401782 update-ssh-keys[1783]: Updated "/home/core/.ssh/authorized_keys"
Jul 7 09:03:53.404812 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 7 09:03:53.409354 systemd[1]: Finished sshkeys.service.
Jul 7 09:03:53.431106 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 7 09:03:53.432187 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 7 09:03:53.432414 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 7 09:03:53.432637 systemd[1]: Startup finished in 3.624s (kernel) + 37.076s (initrd) + 12.753s (userspace) = 53.454s.
Jul 7 09:03:54.302471 sshd[1787]: Accepted publickey for core from 139.178.89.65 port 58100 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68
Jul 7 09:03:54.304413 sshd-session[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 09:03:54.311003 systemd-logind[1560]: New session 5 of user core.
Jul 7 09:03:54.318219 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 7 09:03:54.926820 sshd[1794]: Connection closed by 139.178.89.65 port 58100
Jul 7 09:03:54.926639 sshd-session[1787]: pam_unix(sshd:session): session closed for user core
Jul 7 09:03:54.930910 systemd[1]: sshd@2-10.230.11.74:22-139.178.89.65:58100.service: Deactivated successfully.
Jul 7 09:03:54.933159 systemd[1]: session-5.scope: Deactivated successfully.
Jul 7 09:03:54.935840 systemd-logind[1560]: Session 5 logged out. Waiting for processes to exit.
Jul 7 09:03:54.937388 systemd-logind[1560]: Removed session 5.
Jul 7 09:03:58.039871 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 7 09:03:58.042856 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 09:03:58.222400 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 09:03:58.237473 (kubelet)[1807]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 09:03:58.328159 kubelet[1807]: E0707 09:03:58.327780 1807 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 09:03:58.333405 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 09:03:58.333711 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 09:03:58.334446 systemd[1]: kubelet.service: Consumed 216ms CPU time, 110.4M memory peak.
Jul 7 09:04:05.092603 systemd[1]: Started sshd@3-10.230.11.74:22-139.178.89.65:57172.service - OpenSSH per-connection server daemon (139.178.89.65:57172).
Jul 7 09:04:06.006836 sshd[1815]: Accepted publickey for core from 139.178.89.65 port 57172 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68
Jul 7 09:04:06.008799 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 09:04:06.016237 systemd-logind[1560]: New session 6 of user core.
Jul 7 09:04:06.024179 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 7 09:04:06.624980 sshd[1817]: Connection closed by 139.178.89.65 port 57172
Jul 7 09:04:06.625817 sshd-session[1815]: pam_unix(sshd:session): session closed for user core
Jul 7 09:04:06.631208 systemd-logind[1560]: Session 6 logged out. Waiting for processes to exit.
Jul 7 09:04:06.632359 systemd[1]: sshd@3-10.230.11.74:22-139.178.89.65:57172.service: Deactivated successfully.
Jul 7 09:04:06.634559 systemd[1]: session-6.scope: Deactivated successfully.
Jul 7 09:04:06.636490 systemd-logind[1560]: Removed session 6.
Jul 7 09:04:06.783381 systemd[1]: Started sshd@4-10.230.11.74:22-139.178.89.65:57182.service - OpenSSH per-connection server daemon (139.178.89.65:57182).
Jul 7 09:04:07.695306 sshd[1823]: Accepted publickey for core from 139.178.89.65 port 57182 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68
Jul 7 09:04:07.697620 sshd-session[1823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 09:04:07.706755 systemd-logind[1560]: New session 7 of user core.
Jul 7 09:04:07.718235 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 7 09:04:08.308413 sshd[1825]: Connection closed by 139.178.89.65 port 57182
Jul 7 09:04:08.309345 sshd-session[1823]: pam_unix(sshd:session): session closed for user core
Jul 7 09:04:08.314536 systemd[1]: sshd@4-10.230.11.74:22-139.178.89.65:57182.service: Deactivated successfully.
Jul 7 09:04:08.317244 systemd[1]: session-7.scope: Deactivated successfully.
Jul 7 09:04:08.319518 systemd-logind[1560]: Session 7 logged out. Waiting for processes to exit.
Jul 7 09:04:08.321371 systemd-logind[1560]: Removed session 7.
Jul 7 09:04:08.465227 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 7 09:04:08.467599 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 09:04:08.470193 systemd[1]: Started sshd@5-10.230.11.74:22-139.178.89.65:57192.service - OpenSSH per-connection server daemon (139.178.89.65:57192).
Jul 7 09:04:08.637307 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 09:04:08.648383 (kubelet)[1841]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 09:04:08.757755 kubelet[1841]: E0707 09:04:08.757679 1841 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 09:04:08.760196 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 09:04:08.760474 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 09:04:08.761012 systemd[1]: kubelet.service: Consumed 200ms CPU time, 109M memory peak.
Jul 7 09:04:09.365504 sshd[1832]: Accepted publickey for core from 139.178.89.65 port 57192 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68
Jul 7 09:04:09.367115 sshd-session[1832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 09:04:09.375392 systemd-logind[1560]: New session 8 of user core.
Jul 7 09:04:09.382158 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 7 09:04:09.981023 sshd[1849]: Connection closed by 139.178.89.65 port 57192
Jul 7 09:04:09.980806 sshd-session[1832]: pam_unix(sshd:session): session closed for user core
Jul 7 09:04:09.985620 systemd-logind[1560]: Session 8 logged out. Waiting for processes to exit.
Jul 7 09:04:09.986083 systemd[1]: sshd@5-10.230.11.74:22-139.178.89.65:57192.service: Deactivated successfully.
Jul 7 09:04:09.988822 systemd[1]: session-8.scope: Deactivated successfully.
Jul 7 09:04:09.992071 systemd-logind[1560]: Removed session 8.
Jul 7 09:04:10.138626 systemd[1]: Started sshd@6-10.230.11.74:22-139.178.89.65:44076.service - OpenSSH per-connection server daemon (139.178.89.65:44076).
Jul 7 09:04:11.038369 sshd[1855]: Accepted publickey for core from 139.178.89.65 port 44076 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68
Jul 7 09:04:11.040535 sshd-session[1855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 09:04:11.049033 systemd-logind[1560]: New session 9 of user core.
Jul 7 09:04:11.056139 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 7 09:04:11.526114 sudo[1858]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 7 09:04:11.526605 sudo[1858]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 09:04:11.544995 sudo[1858]: pam_unix(sudo:session): session closed for user root
Jul 7 09:04:11.687628 sshd[1857]: Connection closed by 139.178.89.65 port 44076
Jul 7 09:04:11.688862 sshd-session[1855]: pam_unix(sshd:session): session closed for user core
Jul 7 09:04:11.695763 systemd[1]: sshd@6-10.230.11.74:22-139.178.89.65:44076.service: Deactivated successfully.
Jul 7 09:04:11.698529 systemd[1]: session-9.scope: Deactivated successfully.
Jul 7 09:04:11.699764 systemd-logind[1560]: Session 9 logged out. Waiting for processes to exit.
Jul 7 09:04:11.702077 systemd-logind[1560]: Removed session 9.
Jul 7 09:04:11.849869 systemd[1]: Started sshd@7-10.230.11.74:22-139.178.89.65:44082.service - OpenSSH per-connection server daemon (139.178.89.65:44082).
Jul 7 09:04:12.767878 sshd[1864]: Accepted publickey for core from 139.178.89.65 port 44082 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68
Jul 7 09:04:12.770220 sshd-session[1864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 09:04:12.778139 systemd-logind[1560]: New session 10 of user core.
Jul 7 09:04:12.787181 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 7 09:04:13.247496 sudo[1868]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 7 09:04:13.247964 sudo[1868]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 09:04:13.260084 sudo[1868]: pam_unix(sudo:session): session closed for user root
Jul 7 09:04:13.268965 sudo[1867]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 7 09:04:13.269436 sudo[1867]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 09:04:13.284878 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 7 09:04:13.352636 augenrules[1890]: No rules
Jul 7 09:04:13.354343 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 09:04:13.354741 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 7 09:04:13.356657 sudo[1867]: pam_unix(sudo:session): session closed for user root
Jul 7 09:04:13.500789 sshd[1866]: Connection closed by 139.178.89.65 port 44082
Jul 7 09:04:13.501908 sshd-session[1864]: pam_unix(sshd:session): session closed for user core
Jul 7 09:04:13.507616 systemd[1]: sshd@7-10.230.11.74:22-139.178.89.65:44082.service: Deactivated successfully.
Jul 7 09:04:13.509822 systemd[1]: session-10.scope: Deactivated successfully.
Jul 7 09:04:13.510987 systemd-logind[1560]: Session 10 logged out. Waiting for processes to exit.
Jul 7 09:04:13.512648 systemd-logind[1560]: Removed session 10.
Jul 7 09:04:13.660170 systemd[1]: Started sshd@8-10.230.11.74:22-139.178.89.65:44090.service - OpenSSH per-connection server daemon (139.178.89.65:44090).
Jul 7 09:04:14.573170 sshd[1899]: Accepted publickey for core from 139.178.89.65 port 44090 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68
Jul 7 09:04:14.574976 sshd-session[1899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 09:04:14.582978 systemd-logind[1560]: New session 11 of user core.
Jul 7 09:04:14.591220 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 7 09:04:15.049618 sudo[1902]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 7 09:04:15.050556 sudo[1902]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 09:04:15.610096 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 7 09:04:15.624483 (dockerd)[1920]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 7 09:04:15.972647 dockerd[1920]: time="2025-07-07T09:04:15.971535278Z" level=info msg="Starting up"
Jul 7 09:04:15.974957 dockerd[1920]: time="2025-07-07T09:04:15.974797102Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 7 09:04:16.010693 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1454598607-merged.mount: Deactivated successfully.
Jul 7 09:04:16.042591 dockerd[1920]: time="2025-07-07T09:04:16.042545673Z" level=info msg="Loading containers: start."
Jul 7 09:04:16.058953 kernel: Initializing XFRM netlink socket
Jul 7 09:04:16.380204 systemd-timesyncd[1490]: Network configuration changed, trying to establish connection.
Jul 7 09:04:16.437149 systemd-networkd[1513]: docker0: Link UP
Jul 7 09:04:16.441559 dockerd[1920]: time="2025-07-07T09:04:16.441475236Z" level=info msg="Loading containers: done."
Jul 7 09:04:16.464530 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2020539932-merged.mount: Deactivated successfully.
Jul 7 09:04:16.465115 dockerd[1920]: time="2025-07-07T09:04:16.465053967Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 7 09:04:16.465214 dockerd[1920]: time="2025-07-07T09:04:16.465170722Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 7 09:04:16.465377 dockerd[1920]: time="2025-07-07T09:04:16.465340195Z" level=info msg="Initializing buildkit"
Jul 7 09:04:16.492956 dockerd[1920]: time="2025-07-07T09:04:16.492767806Z" level=info msg="Completed buildkit initialization"
Jul 7 09:04:16.502291 dockerd[1920]: time="2025-07-07T09:04:16.502213783Z" level=info msg="Daemon has completed initialization"
Jul 7 09:04:16.502442 dockerd[1920]: time="2025-07-07T09:04:16.502313471Z" level=info msg="API listen on /run/docker.sock"
Jul 7 09:04:16.502801 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 7 09:04:17.274373 containerd[1583]: time="2025-07-07T09:04:17.274270735Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\""
Jul 7 09:04:17.321697 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jul 7 09:04:18.636938 systemd-timesyncd[1490]: Contacted time server [2a02:ac00:2:1::5]:123 (2.flatcar.pool.ntp.org).
Jul 7 09:04:18.636972 systemd-resolved[1462]: Clock change detected. Flushing caches.
Jul 7 09:04:18.637016 systemd-timesyncd[1490]: Initial clock synchronization to Mon 2025-07-07 09:04:18.636531 UTC.
Jul 7 09:04:19.297912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount698040857.mount: Deactivated successfully.
Jul 7 09:04:19.993201 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 7 09:04:19.996622 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 09:04:20.176498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 09:04:20.186795 (kubelet)[2189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 09:04:20.285642 kubelet[2189]: E0707 09:04:20.285483 2189 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 09:04:20.289347 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 09:04:20.289597 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 09:04:20.290031 systemd[1]: kubelet.service: Consumed 211ms CPU time, 110.3M memory peak.
Jul 7 09:04:21.106350 containerd[1583]: time="2025-07-07T09:04:21.106245848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 09:04:21.107863 containerd[1583]: time="2025-07-07T09:04:21.107813136Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682887"
Jul 7 09:04:21.108931 containerd[1583]: time="2025-07-07T09:04:21.108866507Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 09:04:21.112013 containerd[1583]: time="2025-07-07T09:04:21.111979627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 09:04:21.113873 containerd[1583]: time="2025-07-07T09:04:21.113406162Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 2.635653735s"
Jul 7 09:04:21.113873 containerd[1583]: time="2025-07-07T09:04:21.113465055Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\""
Jul 7 09:04:21.114817 containerd[1583]: time="2025-07-07T09:04:21.114769738Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\""
Jul 7 09:04:23.662803 containerd[1583]: time="2025-07-07T09:04:23.662715049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 09:04:23.664213 containerd[1583]: time="2025-07-07T09:04:23.664150632Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779597"
Jul 7 09:04:23.665367 containerd[1583]: time="2025-07-07T09:04:23.665270498Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 09:04:23.669199 containerd[1583]: time="2025-07-07T09:04:23.669105665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 09:04:23.670552 containerd[1583]: time="2025-07-07T09:04:23.670270116Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 2.555454147s"
Jul 7 09:04:23.670552 containerd[1583]: time="2025-07-07T09:04:23.670341170Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\""
Jul 7 09:04:23.671690 containerd[1583]: time="2025-07-07T09:04:23.671634484Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\""
Jul 7 09:04:26.089872 containerd[1583]: time="2025-07-07T09:04:26.089807640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 09:04:26.091066 containerd[1583]: time="2025-07-07T09:04:26.091032345Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169946"
Jul 7 09:04:26.091879 containerd[1583]: time="2025-07-07T09:04:26.091801318Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 09:04:26.094978 containerd[1583]: time="2025-07-07T09:04:26.094920312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 09:04:26.096485 containerd[1583]: time="2025-07-07T09:04:26.096265611Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 2.424588232s"
Jul 7 09:04:26.096485 containerd[1583]: time="2025-07-07T09:04:26.096327576Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\""
Jul 7 09:04:26.097303 containerd[1583]: time="2025-07-07T09:04:26.097260529Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\""
Jul 7 09:04:27.857931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1265851695.mount: Deactivated successfully.
Jul 7 09:04:28.576397 containerd[1583]: time="2025-07-07T09:04:28.576313795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 09:04:28.578345 containerd[1583]: time="2025-07-07T09:04:28.578316427Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917864"
Jul 7 09:04:28.579578 containerd[1583]: time="2025-07-07T09:04:28.579487289Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 09:04:28.581756 containerd[1583]: time="2025-07-07T09:04:28.581642451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 09:04:28.582616 containerd[1583]: time="2025-07-07T09:04:28.582443203Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 2.485016198s"
Jul 7 09:04:28.582616 containerd[1583]: time="2025-07-07T09:04:28.582484281Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\""
Jul 7 09:04:28.583321 containerd[1583]: time="2025-07-07T09:04:28.583269594Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 7 09:04:29.820167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3076078082.mount: Deactivated successfully.
Jul 7 09:04:30.493104 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 7 09:04:30.497571 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 09:04:30.719473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 09:04:30.729948 (kubelet)[2272]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 09:04:30.808746 kubelet[2272]: E0707 09:04:30.808128 2272 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 09:04:30.811568 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 09:04:30.811806 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 09:04:30.812439 systemd[1]: kubelet.service: Consumed 215ms CPU time, 110.4M memory peak.
Jul 7 09:04:31.193434 update_engine[1561]: I20250707 09:04:31.192148 1561 update_attempter.cc:509] Updating boot flags...
Jul 7 09:04:31.380221 containerd[1583]: time="2025-07-07T09:04:31.378887747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 09:04:31.380777 containerd[1583]: time="2025-07-07T09:04:31.380588360Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249"
Jul 7 09:04:31.381938 containerd[1583]: time="2025-07-07T09:04:31.381897907Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 09:04:31.390365 containerd[1583]: time="2025-07-07T09:04:31.389602440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 09:04:31.393317 containerd[1583]: time="2025-07-07T09:04:31.392597511Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.809154756s"
Jul 7 09:04:31.393412 containerd[1583]: time="2025-07-07T09:04:31.393330596Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Jul 7 09:04:31.395500 containerd[1583]: time="2025-07-07T09:04:31.395465840Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 7 09:04:32.658538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2431019980.mount: Deactivated successfully.
Jul 7 09:04:32.664435 containerd[1583]: time="2025-07-07T09:04:32.664367483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 09:04:32.665694 containerd[1583]: time="2025-07-07T09:04:32.665642568Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Jul 7 09:04:32.666791 containerd[1583]: time="2025-07-07T09:04:32.666731471Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 09:04:32.669331 containerd[1583]: time="2025-07-07T09:04:32.669249615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 7 09:04:32.670456 containerd[1583]: time="2025-07-07T09:04:32.670180214Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.274670837s"
Jul 7 09:04:32.670456 containerd[1583]: time="2025-07-07T09:04:32.670225804Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jul 7 09:04:32.670970 containerd[1583]: time="2025-07-07T09:04:32.670937803Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jul 7 09:04:33.944996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4086176424.mount: Deactivated successfully.
Jul 7 09:04:38.558756 containerd[1583]: time="2025-07-07T09:04:38.558683836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 09:04:38.560340 containerd[1583]: time="2025-07-07T09:04:38.560303526Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368"
Jul 7 09:04:38.560823 containerd[1583]: time="2025-07-07T09:04:38.560790793Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 09:04:38.564229 containerd[1583]: time="2025-07-07T09:04:38.564197553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 09:04:38.565807 containerd[1583]: time="2025-07-07T09:04:38.565756603Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.894776362s"
Jul 7 09:04:38.565911 containerd[1583]: time="2025-07-07T09:04:38.565811514Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Jul 7 09:04:40.993890 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jul 7 09:04:40.999492 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 09:04:41.258488 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 09:04:41.268130 (kubelet)[2383]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 09:04:41.353982 kubelet[2383]: E0707 09:04:41.353908 2383 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 09:04:41.357059 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 09:04:41.357556 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 09:04:41.358403 systemd[1]: kubelet.service: Consumed 196ms CPU time, 110.4M memory peak. Jul 7 09:04:42.211274 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 09:04:42.211747 systemd[1]: kubelet.service: Consumed 196ms CPU time, 110.4M memory peak. Jul 7 09:04:42.214686 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 09:04:42.250437 systemd[1]: Reload requested from client PID 2397 ('systemctl') (unit session-11.scope)... Jul 7 09:04:42.250653 systemd[1]: Reloading... Jul 7 09:04:42.463375 zram_generator::config[2438]: No configuration found. Jul 7 09:04:42.552477 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 09:04:42.728277 systemd[1]: Reloading finished in 476 ms. Jul 7 09:04:42.795082 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 09:04:42.795220 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 09:04:42.795673 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 09:04:42.795752 systemd[1]: kubelet.service: Consumed 127ms CPU time, 98.1M memory peak. Jul 7 09:04:42.797920 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 09:04:43.098105 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 09:04:43.117144 (kubelet)[2509]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 09:04:43.172605 kubelet[2509]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 09:04:43.172605 kubelet[2509]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 09:04:43.172605 kubelet[2509]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 7 09:04:43.174370 kubelet[2509]: I0707 09:04:43.173516 2509 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 09:04:44.041055 kubelet[2509]: I0707 09:04:44.040974 2509 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 09:04:44.041055 kubelet[2509]: I0707 09:04:44.041016 2509 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 09:04:44.041448 kubelet[2509]: I0707 09:04:44.041406 2509 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 09:04:44.104336 kubelet[2509]: E0707 09:04:44.103791 2509 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.11.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.11.74:6443: connect: connection refused" logger="UnhandledError" Jul 7 09:04:44.104336 kubelet[2509]: I0707 09:04:44.104074 2509 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 09:04:44.115670 kubelet[2509]: I0707 09:04:44.115609 2509 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 09:04:44.126431 kubelet[2509]: I0707 09:04:44.126327 2509 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 09:04:44.131885 kubelet[2509]: I0707 09:04:44.131789 2509 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 09:04:44.132183 kubelet[2509]: I0707 09:04:44.131861 2509 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-djpnf.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 09:04:44.133934 kubelet[2509]: I0707 09:04:44.133909 2509 topology_manager.go:138] "Creating topology manager 
with none policy" Jul 7 09:04:44.133934 kubelet[2509]: I0707 09:04:44.133937 2509 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 09:04:44.135240 kubelet[2509]: I0707 09:04:44.135180 2509 state_mem.go:36] "Initialized new in-memory state store" Jul 7 09:04:44.140656 kubelet[2509]: I0707 09:04:44.140484 2509 kubelet.go:446] "Attempting to sync node with API server" Jul 7 09:04:44.140656 kubelet[2509]: I0707 09:04:44.140545 2509 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 09:04:44.142459 kubelet[2509]: I0707 09:04:44.142049 2509 kubelet.go:352] "Adding apiserver pod source" Jul 7 09:04:44.142459 kubelet[2509]: I0707 09:04:44.142092 2509 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 09:04:44.148320 kubelet[2509]: W0707 09:04:44.148034 2509 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.11.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-djpnf.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.11.74:6443: connect: connection refused Jul 7 09:04:44.148320 kubelet[2509]: E0707 09:04:44.148121 2509 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.11.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-djpnf.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.11.74:6443: connect: connection refused" logger="UnhandledError" Jul 7 09:04:44.149648 kubelet[2509]: I0707 09:04:44.149474 2509 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 09:04:44.153001 kubelet[2509]: I0707 09:04:44.152976 2509 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 09:04:44.153935 kubelet[2509]: W0707 09:04:44.153824 2509 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 09:04:44.156327 kubelet[2509]: I0707 09:04:44.156027 2509 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 09:04:44.156327 kubelet[2509]: I0707 09:04:44.156086 2509 server.go:1287] "Started kubelet" Jul 7 09:04:44.157486 kubelet[2509]: W0707 09:04:44.157349 2509 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.11.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.11.74:6443: connect: connection refused Jul 7 09:04:44.157486 kubelet[2509]: E0707 09:04:44.157410 2509 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.11.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.11.74:6443: connect: connection refused" logger="UnhandledError" Jul 7 09:04:44.157658 kubelet[2509]: I0707 09:04:44.157603 2509 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 09:04:44.162801 kubelet[2509]: I0707 09:04:44.162160 2509 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 09:04:44.162801 kubelet[2509]: I0707 09:04:44.162688 2509 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 09:04:44.167392 kubelet[2509]: E0707 09:04:44.164052 2509 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.11.74:6443/api/v1/namespaces/default/events\": dial tcp 10.230.11.74:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-djpnf.gb1.brightbox.com.184fecbec48eeb79 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-djpnf.gb1.brightbox.com,UID:srv-djpnf.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-djpnf.gb1.brightbox.com,},FirstTimestamp:2025-07-07 09:04:44.156054393 +0000 UTC m=+1.033946204,LastTimestamp:2025-07-07 09:04:44.156054393 +0000 UTC m=+1.033946204,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-djpnf.gb1.brightbox.com,}" Jul 7 09:04:44.170629 kubelet[2509]: I0707 09:04:44.168741 2509 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 09:04:44.170629 kubelet[2509]: I0707 09:04:44.169885 2509 server.go:479] "Adding debug handlers to kubelet server" Jul 7 09:04:44.176077 kubelet[2509]: I0707 09:04:44.176054 2509 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 09:04:44.177184 kubelet[2509]: I0707 09:04:44.177157 2509 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 09:04:44.186237 kubelet[2509]: I0707 09:04:44.186209 2509 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 09:04:44.186468 kubelet[2509]: I0707 09:04:44.186448 2509 reconciler.go:26] "Reconciler: start to sync state" Jul 7 09:04:44.186574 kubelet[2509]: E0707 09:04:44.177233 2509 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-djpnf.gb1.brightbox.com\" not found" Jul 7 09:04:44.187538 kubelet[2509]: E0707 09:04:44.187488 2509 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.11.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-djpnf.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.11.74:6443: connect: connection refused" interval="200ms" Jul 7 09:04:44.188716 kubelet[2509]: 
I0707 09:04:44.188690 2509 factory.go:221] Registration of the systemd container factory successfully Jul 7 09:04:44.189006 kubelet[2509]: I0707 09:04:44.188979 2509 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 09:04:44.193305 kubelet[2509]: I0707 09:04:44.193243 2509 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 09:04:44.195815 kubelet[2509]: I0707 09:04:44.195491 2509 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 09:04:44.195901 kubelet[2509]: I0707 09:04:44.195836 2509 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 09:04:44.195901 kubelet[2509]: I0707 09:04:44.195880 2509 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 7 09:04:44.195901 kubelet[2509]: I0707 09:04:44.195895 2509 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 09:04:44.196022 kubelet[2509]: E0707 09:04:44.195970 2509 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 09:04:44.196567 kubelet[2509]: W0707 09:04:44.196523 2509 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.11.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.11.74:6443: connect: connection refused Jul 7 09:04:44.197141 kubelet[2509]: E0707 09:04:44.197108 2509 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.11.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.11.74:6443: connect: connection refused" logger="UnhandledError" Jul 7 
09:04:44.203529 kubelet[2509]: E0707 09:04:44.203258 2509 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 09:04:44.203616 kubelet[2509]: I0707 09:04:44.203582 2509 factory.go:221] Registration of the containerd container factory successfully Jul 7 09:04:44.204589 kubelet[2509]: W0707 09:04:44.204544 2509 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.11.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.11.74:6443: connect: connection refused Jul 7 09:04:44.204913 kubelet[2509]: E0707 09:04:44.204833 2509 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.11.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.11.74:6443: connect: connection refused" logger="UnhandledError" Jul 7 09:04:44.242632 kubelet[2509]: I0707 09:04:44.242596 2509 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 09:04:44.247772 kubelet[2509]: I0707 09:04:44.242775 2509 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 09:04:44.247772 kubelet[2509]: I0707 09:04:44.242812 2509 state_mem.go:36] "Initialized new in-memory state store" Jul 7 09:04:44.287255 kubelet[2509]: E0707 09:04:44.287193 2509 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-djpnf.gb1.brightbox.com\" not found" Jul 7 09:04:44.295570 kubelet[2509]: I0707 09:04:44.295433 2509 policy_none.go:49] "None policy: Start" Jul 7 09:04:44.295853 kubelet[2509]: I0707 09:04:44.295699 2509 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 09:04:44.295853 kubelet[2509]: I0707 09:04:44.295777 2509 state_mem.go:35] "Initializing new in-memory state store" Jul 7 09:04:44.296374 kubelet[2509]: 
E0707 09:04:44.296340 2509 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 09:04:44.305001 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 09:04:44.330954 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 09:04:44.336265 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 09:04:44.349310 kubelet[2509]: I0707 09:04:44.348735 2509 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 09:04:44.349310 kubelet[2509]: I0707 09:04:44.349047 2509 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 09:04:44.349310 kubelet[2509]: I0707 09:04:44.349074 2509 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 09:04:44.349586 kubelet[2509]: I0707 09:04:44.349565 2509 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 09:04:44.351238 kubelet[2509]: E0707 09:04:44.351208 2509 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 7 09:04:44.351510 kubelet[2509]: E0707 09:04:44.351488 2509 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-djpnf.gb1.brightbox.com\" not found" Jul 7 09:04:44.389885 kubelet[2509]: E0707 09:04:44.389846 2509 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.11.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-djpnf.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.11.74:6443: connect: connection refused" interval="400ms" Jul 7 09:04:44.451663 kubelet[2509]: I0707 09:04:44.451614 2509 kubelet_node_status.go:75] "Attempting to register node" node="srv-djpnf.gb1.brightbox.com" Jul 7 09:04:44.452580 kubelet[2509]: E0707 09:04:44.452525 2509 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.11.74:6443/api/v1/nodes\": dial tcp 10.230.11.74:6443: connect: connection refused" node="srv-djpnf.gb1.brightbox.com" Jul 7 09:04:44.511566 systemd[1]: Created slice kubepods-burstable-pod897169e64e52fd2b4e9fdb5e079787f6.slice - libcontainer container kubepods-burstable-pod897169e64e52fd2b4e9fdb5e079787f6.slice. Jul 7 09:04:44.521932 kubelet[2509]: E0707 09:04:44.521131 2509 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-djpnf.gb1.brightbox.com\" not found" node="srv-djpnf.gb1.brightbox.com" Jul 7 09:04:44.525584 systemd[1]: Created slice kubepods-burstable-podad3719f0404ec11ea2ec88075eb7ec76.slice - libcontainer container kubepods-burstable-podad3719f0404ec11ea2ec88075eb7ec76.slice. 
Jul 7 09:04:44.545162 kubelet[2509]: E0707 09:04:44.545128 2509 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-djpnf.gb1.brightbox.com\" not found" node="srv-djpnf.gb1.brightbox.com" Jul 7 09:04:44.549486 systemd[1]: Created slice kubepods-burstable-podafe685dcab6c323a7093b2f14a71cd6d.slice - libcontainer container kubepods-burstable-podafe685dcab6c323a7093b2f14a71cd6d.slice. Jul 7 09:04:44.552947 kubelet[2509]: E0707 09:04:44.552909 2509 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-djpnf.gb1.brightbox.com\" not found" node="srv-djpnf.gb1.brightbox.com" Jul 7 09:04:44.588393 kubelet[2509]: I0707 09:04:44.588342 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/897169e64e52fd2b4e9fdb5e079787f6-ca-certs\") pod \"kube-apiserver-srv-djpnf.gb1.brightbox.com\" (UID: \"897169e64e52fd2b4e9fdb5e079787f6\") " pod="kube-system/kube-apiserver-srv-djpnf.gb1.brightbox.com" Jul 7 09:04:44.588393 kubelet[2509]: I0707 09:04:44.588393 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/897169e64e52fd2b4e9fdb5e079787f6-k8s-certs\") pod \"kube-apiserver-srv-djpnf.gb1.brightbox.com\" (UID: \"897169e64e52fd2b4e9fdb5e079787f6\") " pod="kube-system/kube-apiserver-srv-djpnf.gb1.brightbox.com" Jul 7 09:04:44.588578 kubelet[2509]: I0707 09:04:44.588425 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ad3719f0404ec11ea2ec88075eb7ec76-ca-certs\") pod \"kube-controller-manager-srv-djpnf.gb1.brightbox.com\" (UID: \"ad3719f0404ec11ea2ec88075eb7ec76\") " pod="kube-system/kube-controller-manager-srv-djpnf.gb1.brightbox.com" Jul 7 09:04:44.588578 kubelet[2509]: 
I0707 09:04:44.588456 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/afe685dcab6c323a7093b2f14a71cd6d-kubeconfig\") pod \"kube-scheduler-srv-djpnf.gb1.brightbox.com\" (UID: \"afe685dcab6c323a7093b2f14a71cd6d\") " pod="kube-system/kube-scheduler-srv-djpnf.gb1.brightbox.com" Jul 7 09:04:44.588578 kubelet[2509]: I0707 09:04:44.588483 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ad3719f0404ec11ea2ec88075eb7ec76-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-djpnf.gb1.brightbox.com\" (UID: \"ad3719f0404ec11ea2ec88075eb7ec76\") " pod="kube-system/kube-controller-manager-srv-djpnf.gb1.brightbox.com" Jul 7 09:04:44.588578 kubelet[2509]: I0707 09:04:44.588511 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/897169e64e52fd2b4e9fdb5e079787f6-usr-share-ca-certificates\") pod \"kube-apiserver-srv-djpnf.gb1.brightbox.com\" (UID: \"897169e64e52fd2b4e9fdb5e079787f6\") " pod="kube-system/kube-apiserver-srv-djpnf.gb1.brightbox.com" Jul 7 09:04:44.588578 kubelet[2509]: I0707 09:04:44.588535 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ad3719f0404ec11ea2ec88075eb7ec76-flexvolume-dir\") pod \"kube-controller-manager-srv-djpnf.gb1.brightbox.com\" (UID: \"ad3719f0404ec11ea2ec88075eb7ec76\") " pod="kube-system/kube-controller-manager-srv-djpnf.gb1.brightbox.com" Jul 7 09:04:44.588802 kubelet[2509]: I0707 09:04:44.588563 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ad3719f0404ec11ea2ec88075eb7ec76-k8s-certs\") 
pod \"kube-controller-manager-srv-djpnf.gb1.brightbox.com\" (UID: \"ad3719f0404ec11ea2ec88075eb7ec76\") " pod="kube-system/kube-controller-manager-srv-djpnf.gb1.brightbox.com" Jul 7 09:04:44.588802 kubelet[2509]: I0707 09:04:44.588589 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ad3719f0404ec11ea2ec88075eb7ec76-kubeconfig\") pod \"kube-controller-manager-srv-djpnf.gb1.brightbox.com\" (UID: \"ad3719f0404ec11ea2ec88075eb7ec76\") " pod="kube-system/kube-controller-manager-srv-djpnf.gb1.brightbox.com" Jul 7 09:04:44.656671 kubelet[2509]: I0707 09:04:44.656628 2509 kubelet_node_status.go:75] "Attempting to register node" node="srv-djpnf.gb1.brightbox.com" Jul 7 09:04:44.657184 kubelet[2509]: E0707 09:04:44.657152 2509 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.11.74:6443/api/v1/nodes\": dial tcp 10.230.11.74:6443: connect: connection refused" node="srv-djpnf.gb1.brightbox.com" Jul 7 09:04:44.791624 kubelet[2509]: E0707 09:04:44.791559 2509 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.11.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-djpnf.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.11.74:6443: connect: connection refused" interval="800ms" Jul 7 09:04:44.825181 containerd[1583]: time="2025-07-07T09:04:44.825030778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-djpnf.gb1.brightbox.com,Uid:897169e64e52fd2b4e9fdb5e079787f6,Namespace:kube-system,Attempt:0,}" Jul 7 09:04:44.855708 containerd[1583]: time="2025-07-07T09:04:44.855436794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-djpnf.gb1.brightbox.com,Uid:ad3719f0404ec11ea2ec88075eb7ec76,Namespace:kube-system,Attempt:0,}" Jul 7 09:04:44.859883 containerd[1583]: time="2025-07-07T09:04:44.859757657Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-djpnf.gb1.brightbox.com,Uid:afe685dcab6c323a7093b2f14a71cd6d,Namespace:kube-system,Attempt:0,}" Jul 7 09:04:44.992968 containerd[1583]: time="2025-07-07T09:04:44.992909031Z" level=info msg="connecting to shim c99d58a9db2a5274650173bc608b90a3505d173a6e16a91fb3bc4b156f43822a" address="unix:///run/containerd/s/482b5ede1ef59c3d68921e82f90b3eb51073371ca96f1638b4eb13c9998077c6" namespace=k8s.io protocol=ttrpc version=3 Jul 7 09:04:44.995815 containerd[1583]: time="2025-07-07T09:04:44.995713197Z" level=info msg="connecting to shim 0d4e8be5898480baf7e762ee25583d239fe615293ca67cec48fb34f573bae333" address="unix:///run/containerd/s/39ccff115fb1aa1ecb44be4994377006526cff5b46e09c3f488a5f3989ed6894" namespace=k8s.io protocol=ttrpc version=3 Jul 7 09:04:44.999048 containerd[1583]: time="2025-07-07T09:04:44.999009286Z" level=info msg="connecting to shim 28f9572576eac7b7b1daecc07e34de9e2877ad3b47826791f0d1ff589d03d532" address="unix:///run/containerd/s/4a50ef1ff97d01033adbdf59f5cc68ae0f93006d26e8e1d3ccccb56e86cdb1ca" namespace=k8s.io protocol=ttrpc version=3 Jul 7 09:04:45.060573 kubelet[2509]: I0707 09:04:45.060531 2509 kubelet_node_status.go:75] "Attempting to register node" node="srv-djpnf.gb1.brightbox.com" Jul 7 09:04:45.061133 kubelet[2509]: E0707 09:04:45.061090 2509 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.11.74:6443/api/v1/nodes\": dial tcp 10.230.11.74:6443: connect: connection refused" node="srv-djpnf.gb1.brightbox.com" Jul 7 09:04:45.130844 kubelet[2509]: W0707 09:04:45.130248 2509 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.11.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.11.74:6443: connect: connection refused Jul 7 09:04:45.130844 kubelet[2509]: E0707 09:04:45.130355 2509 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.11.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.11.74:6443: connect: connection refused" logger="UnhandledError" Jul 7 09:04:45.165644 systemd[1]: Started cri-containerd-0d4e8be5898480baf7e762ee25583d239fe615293ca67cec48fb34f573bae333.scope - libcontainer container 0d4e8be5898480baf7e762ee25583d239fe615293ca67cec48fb34f573bae333. Jul 7 09:04:45.168388 systemd[1]: Started cri-containerd-28f9572576eac7b7b1daecc07e34de9e2877ad3b47826791f0d1ff589d03d532.scope - libcontainer container 28f9572576eac7b7b1daecc07e34de9e2877ad3b47826791f0d1ff589d03d532. Jul 7 09:04:45.171693 systemd[1]: Started cri-containerd-c99d58a9db2a5274650173bc608b90a3505d173a6e16a91fb3bc4b156f43822a.scope - libcontainer container c99d58a9db2a5274650173bc608b90a3505d173a6e16a91fb3bc4b156f43822a. Jul 7 09:04:45.270976 kubelet[2509]: W0707 09:04:45.269123 2509 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.11.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.11.74:6443: connect: connection refused Jul 7 09:04:45.270976 kubelet[2509]: E0707 09:04:45.270831 2509 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.11.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.11.74:6443: connect: connection refused" logger="UnhandledError" Jul 7 09:04:45.309223 containerd[1583]: time="2025-07-07T09:04:45.309144598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-djpnf.gb1.brightbox.com,Uid:ad3719f0404ec11ea2ec88075eb7ec76,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"28f9572576eac7b7b1daecc07e34de9e2877ad3b47826791f0d1ff589d03d532\"" Jul 7 09:04:45.311344 containerd[1583]: time="2025-07-07T09:04:45.311307847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-djpnf.gb1.brightbox.com,Uid:afe685dcab6c323a7093b2f14a71cd6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d4e8be5898480baf7e762ee25583d239fe615293ca67cec48fb34f573bae333\"" Jul 7 09:04:45.317833 containerd[1583]: time="2025-07-07T09:04:45.317473641Z" level=info msg="CreateContainer within sandbox \"28f9572576eac7b7b1daecc07e34de9e2877ad3b47826791f0d1ff589d03d532\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 09:04:45.317948 containerd[1583]: time="2025-07-07T09:04:45.317919459Z" level=info msg="CreateContainer within sandbox \"0d4e8be5898480baf7e762ee25583d239fe615293ca67cec48fb34f573bae333\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 09:04:45.326594 containerd[1583]: time="2025-07-07T09:04:45.326542990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-djpnf.gb1.brightbox.com,Uid:897169e64e52fd2b4e9fdb5e079787f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"c99d58a9db2a5274650173bc608b90a3505d173a6e16a91fb3bc4b156f43822a\"" Jul 7 09:04:45.330607 containerd[1583]: time="2025-07-07T09:04:45.330569163Z" level=info msg="CreateContainer within sandbox \"c99d58a9db2a5274650173bc608b90a3505d173a6e16a91fb3bc4b156f43822a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 09:04:45.338035 containerd[1583]: time="2025-07-07T09:04:45.338002815Z" level=info msg="Container d75535b22edc756e92fbbc0c0f21525da7f5e36166b1a4e383758281048b8147: CDI devices from CRI Config.CDIDevices: []" Jul 7 09:04:45.339808 containerd[1583]: time="2025-07-07T09:04:45.339774007Z" level=info msg="Container d0beeb585e905c47e083e415e72e66396d45babec3409631bf8764b4901709e0: CDI devices from CRI Config.CDIDevices: []" Jul 7 09:04:45.341926 
containerd[1583]: time="2025-07-07T09:04:45.341882387Z" level=info msg="Container 96368032930934adfc393e1f5b6359844c69a123bd1884daee21229d499f978b: CDI devices from CRI Config.CDIDevices: []" Jul 7 09:04:45.346447 containerd[1583]: time="2025-07-07T09:04:45.346398101Z" level=info msg="CreateContainer within sandbox \"0d4e8be5898480baf7e762ee25583d239fe615293ca67cec48fb34f573bae333\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d75535b22edc756e92fbbc0c0f21525da7f5e36166b1a4e383758281048b8147\"" Jul 7 09:04:45.347259 containerd[1583]: time="2025-07-07T09:04:45.347221754Z" level=info msg="StartContainer for \"d75535b22edc756e92fbbc0c0f21525da7f5e36166b1a4e383758281048b8147\"" Jul 7 09:04:45.349317 containerd[1583]: time="2025-07-07T09:04:45.349231174Z" level=info msg="connecting to shim d75535b22edc756e92fbbc0c0f21525da7f5e36166b1a4e383758281048b8147" address="unix:///run/containerd/s/39ccff115fb1aa1ecb44be4994377006526cff5b46e09c3f488a5f3989ed6894" protocol=ttrpc version=3 Jul 7 09:04:45.349976 containerd[1583]: time="2025-07-07T09:04:45.349904613Z" level=info msg="CreateContainer within sandbox \"28f9572576eac7b7b1daecc07e34de9e2877ad3b47826791f0d1ff589d03d532\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d0beeb585e905c47e083e415e72e66396d45babec3409631bf8764b4901709e0\"" Jul 7 09:04:45.351007 containerd[1583]: time="2025-07-07T09:04:45.350956516Z" level=info msg="StartContainer for \"d0beeb585e905c47e083e415e72e66396d45babec3409631bf8764b4901709e0\"" Jul 7 09:04:45.353098 containerd[1583]: time="2025-07-07T09:04:45.353043274Z" level=info msg="connecting to shim d0beeb585e905c47e083e415e72e66396d45babec3409631bf8764b4901709e0" address="unix:///run/containerd/s/4a50ef1ff97d01033adbdf59f5cc68ae0f93006d26e8e1d3ccccb56e86cdb1ca" protocol=ttrpc version=3 Jul 7 09:04:45.354857 containerd[1583]: time="2025-07-07T09:04:45.354819416Z" level=info msg="CreateContainer within sandbox 
\"c99d58a9db2a5274650173bc608b90a3505d173a6e16a91fb3bc4b156f43822a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"96368032930934adfc393e1f5b6359844c69a123bd1884daee21229d499f978b\"" Jul 7 09:04:45.356369 containerd[1583]: time="2025-07-07T09:04:45.355296673Z" level=info msg="StartContainer for \"96368032930934adfc393e1f5b6359844c69a123bd1884daee21229d499f978b\"" Jul 7 09:04:45.359676 containerd[1583]: time="2025-07-07T09:04:45.359638939Z" level=info msg="connecting to shim 96368032930934adfc393e1f5b6359844c69a123bd1884daee21229d499f978b" address="unix:///run/containerd/s/482b5ede1ef59c3d68921e82f90b3eb51073371ca96f1638b4eb13c9998077c6" protocol=ttrpc version=3 Jul 7 09:04:45.384483 systemd[1]: Started cri-containerd-d0beeb585e905c47e083e415e72e66396d45babec3409631bf8764b4901709e0.scope - libcontainer container d0beeb585e905c47e083e415e72e66396d45babec3409631bf8764b4901709e0. Jul 7 09:04:45.387650 kubelet[2509]: W0707 09:04:45.387591 2509 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.11.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-djpnf.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.11.74:6443: connect: connection refused Jul 7 09:04:45.387774 kubelet[2509]: E0707 09:04:45.387667 2509 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.11.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-djpnf.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.11.74:6443: connect: connection refused" logger="UnhandledError" Jul 7 09:04:45.396671 systemd[1]: Started cri-containerd-96368032930934adfc393e1f5b6359844c69a123bd1884daee21229d499f978b.scope - libcontainer container 96368032930934adfc393e1f5b6359844c69a123bd1884daee21229d499f978b. 
Jul 7 09:04:45.409496 systemd[1]: Started cri-containerd-d75535b22edc756e92fbbc0c0f21525da7f5e36166b1a4e383758281048b8147.scope - libcontainer container d75535b22edc756e92fbbc0c0f21525da7f5e36166b1a4e383758281048b8147. Jul 7 09:04:45.530324 containerd[1583]: time="2025-07-07T09:04:45.529985802Z" level=info msg="StartContainer for \"d0beeb585e905c47e083e415e72e66396d45babec3409631bf8764b4901709e0\" returns successfully" Jul 7 09:04:45.532682 containerd[1583]: time="2025-07-07T09:04:45.532647226Z" level=info msg="StartContainer for \"96368032930934adfc393e1f5b6359844c69a123bd1884daee21229d499f978b\" returns successfully" Jul 7 09:04:45.551687 containerd[1583]: time="2025-07-07T09:04:45.551637948Z" level=info msg="StartContainer for \"d75535b22edc756e92fbbc0c0f21525da7f5e36166b1a4e383758281048b8147\" returns successfully" Jul 7 09:04:45.595681 kubelet[2509]: E0707 09:04:45.595540 2509 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.11.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-djpnf.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.11.74:6443: connect: connection refused" interval="1.6s" Jul 7 09:04:45.727215 kubelet[2509]: W0707 09:04:45.726505 2509 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.11.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.11.74:6443: connect: connection refused Jul 7 09:04:45.727215 kubelet[2509]: E0707 09:04:45.727085 2509 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.11.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.11.74:6443: connect: connection refused" logger="UnhandledError" Jul 7 09:04:45.869736 kubelet[2509]: I0707 09:04:45.869336 2509 kubelet_node_status.go:75] "Attempting to register node" 
node="srv-djpnf.gb1.brightbox.com" Jul 7 09:04:45.870782 kubelet[2509]: E0707 09:04:45.870565 2509 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.11.74:6443/api/v1/nodes\": dial tcp 10.230.11.74:6443: connect: connection refused" node="srv-djpnf.gb1.brightbox.com" Jul 7 09:04:46.222230 kubelet[2509]: E0707 09:04:46.222013 2509 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-djpnf.gb1.brightbox.com\" not found" node="srv-djpnf.gb1.brightbox.com" Jul 7 09:04:46.228683 kubelet[2509]: E0707 09:04:46.228635 2509 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-djpnf.gb1.brightbox.com\" not found" node="srv-djpnf.gb1.brightbox.com" Jul 7 09:04:46.231975 kubelet[2509]: E0707 09:04:46.231786 2509 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-djpnf.gb1.brightbox.com\" not found" node="srv-djpnf.gb1.brightbox.com" Jul 7 09:04:47.236477 kubelet[2509]: E0707 09:04:47.236386 2509 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-djpnf.gb1.brightbox.com\" not found" node="srv-djpnf.gb1.brightbox.com" Jul 7 09:04:47.239152 kubelet[2509]: E0707 09:04:47.237395 2509 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-djpnf.gb1.brightbox.com\" not found" node="srv-djpnf.gb1.brightbox.com" Jul 7 09:04:47.239152 kubelet[2509]: E0707 09:04:47.237851 2509 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-djpnf.gb1.brightbox.com\" not found" node="srv-djpnf.gb1.brightbox.com" Jul 7 09:04:47.473629 kubelet[2509]: I0707 09:04:47.473175 2509 kubelet_node_status.go:75] "Attempting to register node" node="srv-djpnf.gb1.brightbox.com" Jul 7 09:04:48.223601 
kubelet[2509]: E0707 09:04:48.223546 2509 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-djpnf.gb1.brightbox.com\" not found" node="srv-djpnf.gb1.brightbox.com" Jul 7 09:04:48.241270 kubelet[2509]: E0707 09:04:48.241090 2509 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-djpnf.gb1.brightbox.com.184fecbec48eeb79 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-djpnf.gb1.brightbox.com,UID:srv-djpnf.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-djpnf.gb1.brightbox.com,},FirstTimestamp:2025-07-07 09:04:44.156054393 +0000 UTC m=+1.033946204,LastTimestamp:2025-07-07 09:04:44.156054393 +0000 UTC m=+1.033946204,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-djpnf.gb1.brightbox.com,}" Jul 7 09:04:48.264088 kubelet[2509]: I0707 09:04:48.263910 2509 kubelet_node_status.go:78] "Successfully registered node" node="srv-djpnf.gb1.brightbox.com" Jul 7 09:04:48.264088 kubelet[2509]: E0707 09:04:48.263956 2509 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-djpnf.gb1.brightbox.com\": node \"srv-djpnf.gb1.brightbox.com\" not found" Jul 7 09:04:48.278869 kubelet[2509]: I0707 09:04:48.278450 2509 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-djpnf.gb1.brightbox.com" Jul 7 09:04:48.321873 kubelet[2509]: E0707 09:04:48.321832 2509 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-djpnf.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-djpnf.gb1.brightbox.com" Jul 7 
09:04:48.322650 kubelet[2509]: I0707 09:04:48.322615 2509 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-djpnf.gb1.brightbox.com" Jul 7 09:04:48.328123 kubelet[2509]: E0707 09:04:48.328085 2509 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-djpnf.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-djpnf.gb1.brightbox.com" Jul 7 09:04:48.328123 kubelet[2509]: I0707 09:04:48.328118 2509 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-djpnf.gb1.brightbox.com" Jul 7 09:04:48.330674 kubelet[2509]: E0707 09:04:48.330628 2509 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-djpnf.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-djpnf.gb1.brightbox.com" Jul 7 09:04:49.161084 kubelet[2509]: I0707 09:04:49.160793 2509 apiserver.go:52] "Watching apiserver" Jul 7 09:04:49.186745 kubelet[2509]: I0707 09:04:49.186717 2509 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 09:04:49.259331 kubelet[2509]: I0707 09:04:49.259253 2509 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-djpnf.gb1.brightbox.com" Jul 7 09:04:49.268734 kubelet[2509]: W0707 09:04:49.268626 2509 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 09:04:50.065626 systemd[1]: Reload requested from client PID 2780 ('systemctl') (unit session-11.scope)... Jul 7 09:04:50.065667 systemd[1]: Reloading... Jul 7 09:04:50.218375 zram_generator::config[2828]: No configuration found. 
Jul 7 09:04:50.375831 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 09:04:50.577885 systemd[1]: Reloading finished in 511 ms. Jul 7 09:04:50.620664 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 09:04:50.635886 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 09:04:50.636334 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 09:04:50.636426 systemd[1]: kubelet.service: Consumed 1.549s CPU time, 127.6M memory peak. Jul 7 09:04:50.640092 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 09:04:50.936107 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 09:04:50.948850 (kubelet)[2889]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 09:04:51.016478 kubelet[2889]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 09:04:51.016478 kubelet[2889]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 09:04:51.016478 kubelet[2889]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 7 09:04:51.017055 kubelet[2889]: I0707 09:04:51.016536 2889 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 09:04:51.024820 kubelet[2889]: I0707 09:04:51.024781 2889 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 09:04:51.024820 kubelet[2889]: I0707 09:04:51.024812 2889 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 09:04:51.025143 kubelet[2889]: I0707 09:04:51.025111 2889 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 09:04:51.026831 kubelet[2889]: I0707 09:04:51.026799 2889 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 7 09:04:51.035469 kubelet[2889]: I0707 09:04:51.035397 2889 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 09:04:51.047178 kubelet[2889]: I0707 09:04:51.047123 2889 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 09:04:51.058369 kubelet[2889]: I0707 09:04:51.058337 2889 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 09:04:51.058772 kubelet[2889]: I0707 09:04:51.058716 2889 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 09:04:51.058989 kubelet[2889]: I0707 09:04:51.058790 2889 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-djpnf.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 09:04:51.059158 kubelet[2889]: I0707 09:04:51.059008 2889 topology_manager.go:138] "Creating topology manager 
with none policy" Jul 7 09:04:51.059158 kubelet[2889]: I0707 09:04:51.059025 2889 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 09:04:51.059158 kubelet[2889]: I0707 09:04:51.059084 2889 state_mem.go:36] "Initialized new in-memory state store" Jul 7 09:04:51.059553 kubelet[2889]: I0707 09:04:51.059330 2889 kubelet.go:446] "Attempting to sync node with API server" Jul 7 09:04:51.059553 kubelet[2889]: I0707 09:04:51.059358 2889 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 09:04:51.059553 kubelet[2889]: I0707 09:04:51.059519 2889 kubelet.go:352] "Adding apiserver pod source" Jul 7 09:04:51.059553 kubelet[2889]: I0707 09:04:51.059539 2889 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 09:04:51.068512 kubelet[2889]: I0707 09:04:51.068416 2889 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 09:04:51.068962 kubelet[2889]: I0707 09:04:51.068938 2889 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 09:04:51.078339 kubelet[2889]: I0707 09:04:51.073616 2889 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 09:04:51.078339 kubelet[2889]: I0707 09:04:51.073659 2889 server.go:1287] "Started kubelet" Jul 7 09:04:51.080011 kubelet[2889]: I0707 09:04:51.079984 2889 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 09:04:51.087332 sudo[2903]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 7 09:04:51.088716 sudo[2903]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 7 09:04:51.095551 kubelet[2889]: I0707 09:04:51.095507 2889 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 09:04:51.101314 kubelet[2889]: I0707 09:04:51.099787 2889 server.go:479] "Adding debug handlers to kubelet server" Jul 7 09:04:51.101379 kubelet[2889]: I0707 
09:04:51.101311 2889 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 09:04:51.101615 kubelet[2889]: I0707 09:04:51.101591 2889 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 09:04:51.101859 kubelet[2889]: I0707 09:04:51.101835 2889 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 09:04:51.111902 kubelet[2889]: I0707 09:04:51.111370 2889 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 09:04:51.111902 kubelet[2889]: E0707 09:04:51.111659 2889 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-djpnf.gb1.brightbox.com\" not found" Jul 7 09:04:51.119363 kubelet[2889]: I0707 09:04:51.115825 2889 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 09:04:51.119363 kubelet[2889]: I0707 09:04:51.116022 2889 reconciler.go:26] "Reconciler: start to sync state" Jul 7 09:04:51.121277 kubelet[2889]: I0707 09:04:51.120220 2889 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 09:04:51.122857 kubelet[2889]: I0707 09:04:51.122829 2889 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 09:04:51.123019 kubelet[2889]: I0707 09:04:51.122879 2889 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 09:04:51.123019 kubelet[2889]: I0707 09:04:51.122914 2889 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 09:04:51.123019 kubelet[2889]: I0707 09:04:51.122927 2889 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 09:04:51.123019 kubelet[2889]: E0707 09:04:51.123002 2889 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 09:04:51.126075 kubelet[2889]: I0707 09:04:51.126034 2889 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 09:04:51.133196 kubelet[2889]: I0707 09:04:51.132819 2889 factory.go:221] Registration of the containerd container factory successfully Jul 7 09:04:51.133497 kubelet[2889]: I0707 09:04:51.133377 2889 factory.go:221] Registration of the systemd container factory successfully Jul 7 09:04:51.182442 kubelet[2889]: E0707 09:04:51.182400 2889 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 09:04:51.223334 kubelet[2889]: E0707 09:04:51.223069 2889 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 09:04:51.277711 kubelet[2889]: I0707 09:04:51.277610 2889 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 09:04:51.278360 kubelet[2889]: I0707 09:04:51.277977 2889 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 09:04:51.278360 kubelet[2889]: I0707 09:04:51.278004 2889 state_mem.go:36] "Initialized new in-memory state store" Jul 7 09:04:51.278858 kubelet[2889]: I0707 09:04:51.278747 2889 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 09:04:51.279383 kubelet[2889]: I0707 09:04:51.278771 2889 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 09:04:51.279383 kubelet[2889]: I0707 09:04:51.278982 2889 policy_none.go:49] "None policy: Start" Jul 7 09:04:51.279383 
kubelet[2889]: I0707 09:04:51.278996 2889 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 09:04:51.279383 kubelet[2889]: I0707 09:04:51.279012 2889 state_mem.go:35] "Initializing new in-memory state store" Jul 7 09:04:51.279383 kubelet[2889]: I0707 09:04:51.279175 2889 state_mem.go:75] "Updated machine memory state" Jul 7 09:04:51.289375 kubelet[2889]: I0707 09:04:51.289100 2889 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 09:04:51.290620 kubelet[2889]: I0707 09:04:51.290164 2889 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 09:04:51.290620 kubelet[2889]: I0707 09:04:51.290195 2889 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 09:04:51.290746 kubelet[2889]: I0707 09:04:51.290686 2889 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 09:04:51.301126 kubelet[2889]: E0707 09:04:51.301001 2889 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 7 09:04:51.412493 kubelet[2889]: I0707 09:04:51.412440 2889 kubelet_node_status.go:75] "Attempting to register node" node="srv-djpnf.gb1.brightbox.com" Jul 7 09:04:51.425147 kubelet[2889]: I0707 09:04:51.425118 2889 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-djpnf.gb1.brightbox.com" Jul 7 09:04:51.430307 kubelet[2889]: I0707 09:04:51.429097 2889 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-djpnf.gb1.brightbox.com" Jul 7 09:04:51.430408 kubelet[2889]: I0707 09:04:51.430333 2889 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-djpnf.gb1.brightbox.com" Jul 7 09:04:51.435965 kubelet[2889]: I0707 09:04:51.435438 2889 kubelet_node_status.go:124] "Node was previously registered" node="srv-djpnf.gb1.brightbox.com" Jul 7 09:04:51.435965 kubelet[2889]: I0707 09:04:51.435690 2889 kubelet_node_status.go:78] "Successfully registered node" node="srv-djpnf.gb1.brightbox.com" Jul 7 09:04:51.439090 kubelet[2889]: W0707 09:04:51.437747 2889 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 09:04:51.439090 kubelet[2889]: E0707 09:04:51.437801 2889 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-djpnf.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-djpnf.gb1.brightbox.com" Jul 7 09:04:51.439090 kubelet[2889]: W0707 09:04:51.437871 2889 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 09:04:51.442112 kubelet[2889]: W0707 09:04:51.442074 2889 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 
09:04:51.519451 kubelet[2889]: I0707 09:04:51.519289 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ad3719f0404ec11ea2ec88075eb7ec76-flexvolume-dir\") pod \"kube-controller-manager-srv-djpnf.gb1.brightbox.com\" (UID: \"ad3719f0404ec11ea2ec88075eb7ec76\") " pod="kube-system/kube-controller-manager-srv-djpnf.gb1.brightbox.com" Jul 7 09:04:51.519733 kubelet[2889]: I0707 09:04:51.519621 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ad3719f0404ec11ea2ec88075eb7ec76-k8s-certs\") pod \"kube-controller-manager-srv-djpnf.gb1.brightbox.com\" (UID: \"ad3719f0404ec11ea2ec88075eb7ec76\") " pod="kube-system/kube-controller-manager-srv-djpnf.gb1.brightbox.com" Jul 7 09:04:51.519733 kubelet[2889]: I0707 09:04:51.519701 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ad3719f0404ec11ea2ec88075eb7ec76-kubeconfig\") pod \"kube-controller-manager-srv-djpnf.gb1.brightbox.com\" (UID: \"ad3719f0404ec11ea2ec88075eb7ec76\") " pod="kube-system/kube-controller-manager-srv-djpnf.gb1.brightbox.com" Jul 7 09:04:51.519877 kubelet[2889]: I0707 09:04:51.519737 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ad3719f0404ec11ea2ec88075eb7ec76-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-djpnf.gb1.brightbox.com\" (UID: \"ad3719f0404ec11ea2ec88075eb7ec76\") " pod="kube-system/kube-controller-manager-srv-djpnf.gb1.brightbox.com" Jul 7 09:04:51.519877 kubelet[2889]: I0707 09:04:51.519812 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/897169e64e52fd2b4e9fdb5e079787f6-ca-certs\") pod \"kube-apiserver-srv-djpnf.gb1.brightbox.com\" (UID: \"897169e64e52fd2b4e9fdb5e079787f6\") " pod="kube-system/kube-apiserver-srv-djpnf.gb1.brightbox.com" Jul 7 09:04:51.519963 kubelet[2889]: I0707 09:04:51.519908 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/897169e64e52fd2b4e9fdb5e079787f6-usr-share-ca-certificates\") pod \"kube-apiserver-srv-djpnf.gb1.brightbox.com\" (UID: \"897169e64e52fd2b4e9fdb5e079787f6\") " pod="kube-system/kube-apiserver-srv-djpnf.gb1.brightbox.com" Jul 7 09:04:51.520022 kubelet[2889]: I0707 09:04:51.519991 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ad3719f0404ec11ea2ec88075eb7ec76-ca-certs\") pod \"kube-controller-manager-srv-djpnf.gb1.brightbox.com\" (UID: \"ad3719f0404ec11ea2ec88075eb7ec76\") " pod="kube-system/kube-controller-manager-srv-djpnf.gb1.brightbox.com" Jul 7 09:04:51.520145 kubelet[2889]: I0707 09:04:51.520120 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/afe685dcab6c323a7093b2f14a71cd6d-kubeconfig\") pod \"kube-scheduler-srv-djpnf.gb1.brightbox.com\" (UID: \"afe685dcab6c323a7093b2f14a71cd6d\") " pod="kube-system/kube-scheduler-srv-djpnf.gb1.brightbox.com" Jul 7 09:04:51.520213 kubelet[2889]: I0707 09:04:51.520156 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/897169e64e52fd2b4e9fdb5e079787f6-k8s-certs\") pod \"kube-apiserver-srv-djpnf.gb1.brightbox.com\" (UID: \"897169e64e52fd2b4e9fdb5e079787f6\") " pod="kube-system/kube-apiserver-srv-djpnf.gb1.brightbox.com" Jul 7 09:04:51.875409 sudo[2903]: 
pam_unix(sudo:session): session closed for user root Jul 7 09:04:52.065198 kubelet[2889]: I0707 09:04:52.064871 2889 apiserver.go:52] "Watching apiserver" Jul 7 09:04:52.116314 kubelet[2889]: I0707 09:04:52.116220 2889 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 09:04:52.217075 kubelet[2889]: I0707 09:04:52.216734 2889 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-djpnf.gb1.brightbox.com" Jul 7 09:04:52.246311 kubelet[2889]: W0707 09:04:52.246239 2889 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 09:04:52.246750 kubelet[2889]: E0707 09:04:52.246560 2889 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-djpnf.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-djpnf.gb1.brightbox.com" Jul 7 09:04:52.307314 kubelet[2889]: I0707 09:04:52.307117 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-djpnf.gb1.brightbox.com" podStartSLOduration=3.307063874 podStartE2EDuration="3.307063874s" podCreationTimestamp="2025-07-07 09:04:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 09:04:52.27389547 +0000 UTC m=+1.317786501" watchObservedRunningTime="2025-07-07 09:04:52.307063874 +0000 UTC m=+1.350954906" Jul 7 09:04:52.323201 kubelet[2889]: I0707 09:04:52.322943 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-djpnf.gb1.brightbox.com" podStartSLOduration=1.322916673 podStartE2EDuration="1.322916673s" podCreationTimestamp="2025-07-07 09:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 
09:04:52.307693594 +0000 UTC m=+1.351584636" watchObservedRunningTime="2025-07-07 09:04:52.322916673 +0000 UTC m=+1.366807702" Jul 7 09:04:52.337491 kubelet[2889]: I0707 09:04:52.337042 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-djpnf.gb1.brightbox.com" podStartSLOduration=1.337018867 podStartE2EDuration="1.337018867s" podCreationTimestamp="2025-07-07 09:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 09:04:52.323940812 +0000 UTC m=+1.367831838" watchObservedRunningTime="2025-07-07 09:04:52.337018867 +0000 UTC m=+1.380909889" Jul 7 09:04:54.227635 sudo[1902]: pam_unix(sudo:session): session closed for user root Jul 7 09:04:54.371496 sshd[1901]: Connection closed by 139.178.89.65 port 44090 Jul 7 09:04:54.373044 sshd-session[1899]: pam_unix(sshd:session): session closed for user core Jul 7 09:04:54.378065 systemd[1]: sshd@8-10.230.11.74:22-139.178.89.65:44090.service: Deactivated successfully. Jul 7 09:04:54.382040 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 09:04:54.382877 systemd[1]: session-11.scope: Consumed 6.842s CPU time, 209.6M memory peak. Jul 7 09:04:54.387881 systemd-logind[1560]: Session 11 logged out. Waiting for processes to exit. Jul 7 09:04:54.389908 systemd-logind[1560]: Removed session 11. Jul 7 09:04:56.282220 kubelet[2889]: I0707 09:04:56.282160 2889 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 09:04:56.283239 containerd[1583]: time="2025-07-07T09:04:56.283101505Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 7 09:04:56.283769 kubelet[2889]: I0707 09:04:56.283645 2889 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 09:04:57.175980 systemd[1]: Created slice kubepods-besteffort-pod66947fb9_3995_4743_b7c0_0d46a9b4ac1b.slice - libcontainer container kubepods-besteffort-pod66947fb9_3995_4743_b7c0_0d46a9b4ac1b.slice. Jul 7 09:04:57.216342 systemd[1]: Created slice kubepods-burstable-podbeb9fb49_7e83_435b_9f1f_2c3683ebe059.slice - libcontainer container kubepods-burstable-podbeb9fb49_7e83_435b_9f1f_2c3683ebe059.slice. Jul 7 09:04:57.260331 kubelet[2889]: I0707 09:04:57.259780 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-bpf-maps\") pod \"cilium-2zcgk\" (UID: \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\") " pod="kube-system/cilium-2zcgk" Jul 7 09:04:57.260331 kubelet[2889]: I0707 09:04:57.259841 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-cilium-cgroup\") pod \"cilium-2zcgk\" (UID: \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\") " pod="kube-system/cilium-2zcgk" Jul 7 09:04:57.260331 kubelet[2889]: I0707 09:04:57.259872 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66947fb9-3995-4743-b7c0-0d46a9b4ac1b-xtables-lock\") pod \"kube-proxy-crmv8\" (UID: \"66947fb9-3995-4743-b7c0-0d46a9b4ac1b\") " pod="kube-system/kube-proxy-crmv8" Jul 7 09:04:57.260331 kubelet[2889]: I0707 09:04:57.259899 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkbs7\" (UniqueName: \"kubernetes.io/projected/66947fb9-3995-4743-b7c0-0d46a9b4ac1b-kube-api-access-kkbs7\") pod \"kube-proxy-crmv8\" (UID: 
\"66947fb9-3995-4743-b7c0-0d46a9b4ac1b\") " pod="kube-system/kube-proxy-crmv8" Jul 7 09:04:57.260331 kubelet[2889]: I0707 09:04:57.259929 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/beb9fb49-7e83-435b-9f1f-2c3683ebe059-clustermesh-secrets\") pod \"cilium-2zcgk\" (UID: \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\") " pod="kube-system/cilium-2zcgk" Jul 7 09:04:57.260718 kubelet[2889]: I0707 09:04:57.259963 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h9rv\" (UniqueName: \"kubernetes.io/projected/beb9fb49-7e83-435b-9f1f-2c3683ebe059-kube-api-access-9h9rv\") pod \"cilium-2zcgk\" (UID: \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\") " pod="kube-system/cilium-2zcgk" Jul 7 09:04:57.260718 kubelet[2889]: I0707 09:04:57.259988 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-etc-cni-netd\") pod \"cilium-2zcgk\" (UID: \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\") " pod="kube-system/cilium-2zcgk" Jul 7 09:04:57.260718 kubelet[2889]: I0707 09:04:57.260017 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/beb9fb49-7e83-435b-9f1f-2c3683ebe059-cilium-config-path\") pod \"cilium-2zcgk\" (UID: \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\") " pod="kube-system/cilium-2zcgk" Jul 7 09:04:57.260718 kubelet[2889]: I0707 09:04:57.260041 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-host-proc-sys-kernel\") pod \"cilium-2zcgk\" (UID: \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\") " pod="kube-system/cilium-2zcgk" Jul 7 
09:04:57.260718 kubelet[2889]: I0707 09:04:57.260064 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66947fb9-3995-4743-b7c0-0d46a9b4ac1b-lib-modules\") pod \"kube-proxy-crmv8\" (UID: \"66947fb9-3995-4743-b7c0-0d46a9b4ac1b\") " pod="kube-system/kube-proxy-crmv8" Jul 7 09:04:57.260924 kubelet[2889]: I0707 09:04:57.260093 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-cilium-run\") pod \"cilium-2zcgk\" (UID: \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\") " pod="kube-system/cilium-2zcgk" Jul 7 09:04:57.260924 kubelet[2889]: I0707 09:04:57.260119 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-cni-path\") pod \"cilium-2zcgk\" (UID: \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\") " pod="kube-system/cilium-2zcgk" Jul 7 09:04:57.260924 kubelet[2889]: I0707 09:04:57.260142 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-lib-modules\") pod \"cilium-2zcgk\" (UID: \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\") " pod="kube-system/cilium-2zcgk" Jul 7 09:04:57.260924 kubelet[2889]: I0707 09:04:57.260177 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-xtables-lock\") pod \"cilium-2zcgk\" (UID: \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\") " pod="kube-system/cilium-2zcgk" Jul 7 09:04:57.260924 kubelet[2889]: I0707 09:04:57.260201 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/beb9fb49-7e83-435b-9f1f-2c3683ebe059-hubble-tls\") pod \"cilium-2zcgk\" (UID: \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\") " pod="kube-system/cilium-2zcgk" Jul 7 09:04:57.260924 kubelet[2889]: I0707 09:04:57.260227 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/66947fb9-3995-4743-b7c0-0d46a9b4ac1b-kube-proxy\") pod \"kube-proxy-crmv8\" (UID: \"66947fb9-3995-4743-b7c0-0d46a9b4ac1b\") " pod="kube-system/kube-proxy-crmv8" Jul 7 09:04:57.261843 kubelet[2889]: I0707 09:04:57.260261 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-hostproc\") pod \"cilium-2zcgk\" (UID: \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\") " pod="kube-system/cilium-2zcgk" Jul 7 09:04:57.261843 kubelet[2889]: I0707 09:04:57.261832 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-host-proc-sys-net\") pod \"cilium-2zcgk\" (UID: \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\") " pod="kube-system/cilium-2zcgk" Jul 7 09:04:57.473148 kubelet[2889]: I0707 09:04:57.472459 2889 status_manager.go:890] "Failed to get status for pod" podUID="e88b9eb8-5e24-4b56-bc1e-840fa55f589a" pod="kube-system/cilium-operator-6c4d7847fc-zr6bp" err="pods \"cilium-operator-6c4d7847fc-zr6bp\" is forbidden: User \"system:node:srv-djpnf.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-djpnf.gb1.brightbox.com' and this object" Jul 7 09:04:57.475585 systemd[1]: Created slice kubepods-besteffort-pode88b9eb8_5e24_4b56_bc1e_840fa55f589a.slice - libcontainer container 
kubepods-besteffort-pode88b9eb8_5e24_4b56_bc1e_840fa55f589a.slice. Jul 7 09:04:57.494883 containerd[1583]: time="2025-07-07T09:04:57.494490976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-crmv8,Uid:66947fb9-3995-4743-b7c0-0d46a9b4ac1b,Namespace:kube-system,Attempt:0,}" Jul 7 09:04:57.526246 containerd[1583]: time="2025-07-07T09:04:57.526192242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2zcgk,Uid:beb9fb49-7e83-435b-9f1f-2c3683ebe059,Namespace:kube-system,Attempt:0,}" Jul 7 09:04:57.561104 containerd[1583]: time="2025-07-07T09:04:57.560957219Z" level=info msg="connecting to shim 0ca8db25789d2f2576cbc5b2cad0d348d5765955ba971305e82be2a24d278b8e" address="unix:///run/containerd/s/69ac4eab018ca5220000a43464c4a6b9f77ccb39a303ad99d4d52dae0c9d9c6e" namespace=k8s.io protocol=ttrpc version=3 Jul 7 09:04:57.564992 kubelet[2889]: I0707 09:04:57.564886 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e88b9eb8-5e24-4b56-bc1e-840fa55f589a-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-zr6bp\" (UID: \"e88b9eb8-5e24-4b56-bc1e-840fa55f589a\") " pod="kube-system/cilium-operator-6c4d7847fc-zr6bp" Jul 7 09:04:57.565558 kubelet[2889]: I0707 09:04:57.565388 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp24x\" (UniqueName: \"kubernetes.io/projected/e88b9eb8-5e24-4b56-bc1e-840fa55f589a-kube-api-access-tp24x\") pod \"cilium-operator-6c4d7847fc-zr6bp\" (UID: \"e88b9eb8-5e24-4b56-bc1e-840fa55f589a\") " pod="kube-system/cilium-operator-6c4d7847fc-zr6bp" Jul 7 09:04:57.581336 containerd[1583]: time="2025-07-07T09:04:57.580862384Z" level=info msg="connecting to shim 598d209bd09cb05b1f0c7e71f007b5a4db2f21490b373e5fdf115dcede44d744" address="unix:///run/containerd/s/7a3670dd215341ef455caacd3bbd6118dc320028b4432bae771c21ee84f9a0a8" namespace=k8s.io 
protocol=ttrpc version=3 Jul 7 09:04:57.613552 systemd[1]: Started cri-containerd-0ca8db25789d2f2576cbc5b2cad0d348d5765955ba971305e82be2a24d278b8e.scope - libcontainer container 0ca8db25789d2f2576cbc5b2cad0d348d5765955ba971305e82be2a24d278b8e. Jul 7 09:04:57.630543 systemd[1]: Started cri-containerd-598d209bd09cb05b1f0c7e71f007b5a4db2f21490b373e5fdf115dcede44d744.scope - libcontainer container 598d209bd09cb05b1f0c7e71f007b5a4db2f21490b373e5fdf115dcede44d744. Jul 7 09:04:57.711554 containerd[1583]: time="2025-07-07T09:04:57.711505982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-crmv8,Uid:66947fb9-3995-4743-b7c0-0d46a9b4ac1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ca8db25789d2f2576cbc5b2cad0d348d5765955ba971305e82be2a24d278b8e\"" Jul 7 09:04:57.714443 containerd[1583]: time="2025-07-07T09:04:57.714402706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2zcgk,Uid:beb9fb49-7e83-435b-9f1f-2c3683ebe059,Namespace:kube-system,Attempt:0,} returns sandbox id \"598d209bd09cb05b1f0c7e71f007b5a4db2f21490b373e5fdf115dcede44d744\"" Jul 7 09:04:57.717745 containerd[1583]: time="2025-07-07T09:04:57.717620970Z" level=info msg="CreateContainer within sandbox \"0ca8db25789d2f2576cbc5b2cad0d348d5765955ba971305e82be2a24d278b8e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 09:04:57.719176 containerd[1583]: time="2025-07-07T09:04:57.719110399Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 7 09:04:57.734536 containerd[1583]: time="2025-07-07T09:04:57.734396900Z" level=info msg="Container 93ace663f249564804895e81024b6f2272adab34a453376037646e267fdb5783: CDI devices from CRI Config.CDIDevices: []" Jul 7 09:04:57.742730 containerd[1583]: time="2025-07-07T09:04:57.742643325Z" level=info msg="CreateContainer within sandbox \"0ca8db25789d2f2576cbc5b2cad0d348d5765955ba971305e82be2a24d278b8e\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"93ace663f249564804895e81024b6f2272adab34a453376037646e267fdb5783\"" Jul 7 09:04:57.743485 containerd[1583]: time="2025-07-07T09:04:57.743454098Z" level=info msg="StartContainer for \"93ace663f249564804895e81024b6f2272adab34a453376037646e267fdb5783\"" Jul 7 09:04:57.746325 containerd[1583]: time="2025-07-07T09:04:57.746265968Z" level=info msg="connecting to shim 93ace663f249564804895e81024b6f2272adab34a453376037646e267fdb5783" address="unix:///run/containerd/s/69ac4eab018ca5220000a43464c4a6b9f77ccb39a303ad99d4d52dae0c9d9c6e" protocol=ttrpc version=3 Jul 7 09:04:57.775500 systemd[1]: Started cri-containerd-93ace663f249564804895e81024b6f2272adab34a453376037646e267fdb5783.scope - libcontainer container 93ace663f249564804895e81024b6f2272adab34a453376037646e267fdb5783. Jul 7 09:04:57.782574 containerd[1583]: time="2025-07-07T09:04:57.782533000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zr6bp,Uid:e88b9eb8-5e24-4b56-bc1e-840fa55f589a,Namespace:kube-system,Attempt:0,}" Jul 7 09:04:57.806226 containerd[1583]: time="2025-07-07T09:04:57.805381814Z" level=info msg="connecting to shim fd7c3722e86cbf0ab938b08f4e68cb946a50f113ff6fb823d9d5d1e88f2f7618" address="unix:///run/containerd/s/d54bd349d6a223eb795a93bc152860ef801c451112605f03fc1d4c271aeb7c1a" namespace=k8s.io protocol=ttrpc version=3 Jul 7 09:04:57.844499 systemd[1]: Started cri-containerd-fd7c3722e86cbf0ab938b08f4e68cb946a50f113ff6fb823d9d5d1e88f2f7618.scope - libcontainer container fd7c3722e86cbf0ab938b08f4e68cb946a50f113ff6fb823d9d5d1e88f2f7618. 
Jul 7 09:04:57.872875 containerd[1583]: time="2025-07-07T09:04:57.872801233Z" level=info msg="StartContainer for \"93ace663f249564804895e81024b6f2272adab34a453376037646e267fdb5783\" returns successfully" Jul 7 09:04:57.934576 containerd[1583]: time="2025-07-07T09:04:57.934527103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zr6bp,Uid:e88b9eb8-5e24-4b56-bc1e-840fa55f589a,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd7c3722e86cbf0ab938b08f4e68cb946a50f113ff6fb823d9d5d1e88f2f7618\"" Jul 7 09:04:58.258960 kubelet[2889]: I0707 09:04:58.258082 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-crmv8" podStartSLOduration=1.258060175 podStartE2EDuration="1.258060175s" podCreationTimestamp="2025-07-07 09:04:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 09:04:58.257259633 +0000 UTC m=+7.301150662" watchObservedRunningTime="2025-07-07 09:04:58.258060175 +0000 UTC m=+7.301951199" Jul 7 09:05:04.698327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1916300986.mount: Deactivated successfully. 
Jul 7 09:05:07.815132 containerd[1583]: time="2025-07-07T09:05:07.815071140Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 09:05:07.817241 containerd[1583]: time="2025-07-07T09:05:07.817186303Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 7 09:05:07.818530 containerd[1583]: time="2025-07-07T09:05:07.818473087Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 09:05:07.823548 containerd[1583]: time="2025-07-07T09:05:07.823504491Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.104325815s" Jul 7 09:05:07.823658 containerd[1583]: time="2025-07-07T09:05:07.823561202Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 7 09:05:07.827990 containerd[1583]: time="2025-07-07T09:05:07.826905260Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 7 09:05:07.828956 containerd[1583]: time="2025-07-07T09:05:07.828890826Z" level=info msg="CreateContainer within sandbox \"598d209bd09cb05b1f0c7e71f007b5a4db2f21490b373e5fdf115dcede44d744\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 7 09:05:07.850220 containerd[1583]: time="2025-07-07T09:05:07.850177322Z" level=info msg="Container 289104299ce3a03a585fe71909d163e4b999d1ed9290db2bab4779aa16242786: CDI devices from CRI Config.CDIDevices: []" Jul 7 09:05:07.857584 containerd[1583]: time="2025-07-07T09:05:07.857545269Z" level=info msg="CreateContainer within sandbox \"598d209bd09cb05b1f0c7e71f007b5a4db2f21490b373e5fdf115dcede44d744\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"289104299ce3a03a585fe71909d163e4b999d1ed9290db2bab4779aa16242786\"" Jul 7 09:05:07.858379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4260226127.mount: Deactivated successfully. Jul 7 09:05:07.860137 containerd[1583]: time="2025-07-07T09:05:07.860054519Z" level=info msg="StartContainer for \"289104299ce3a03a585fe71909d163e4b999d1ed9290db2bab4779aa16242786\"" Jul 7 09:05:07.862021 containerd[1583]: time="2025-07-07T09:05:07.861943188Z" level=info msg="connecting to shim 289104299ce3a03a585fe71909d163e4b999d1ed9290db2bab4779aa16242786" address="unix:///run/containerd/s/7a3670dd215341ef455caacd3bbd6118dc320028b4432bae771c21ee84f9a0a8" protocol=ttrpc version=3 Jul 7 09:05:07.894544 systemd[1]: Started cri-containerd-289104299ce3a03a585fe71909d163e4b999d1ed9290db2bab4779aa16242786.scope - libcontainer container 289104299ce3a03a585fe71909d163e4b999d1ed9290db2bab4779aa16242786. Jul 7 09:05:07.939995 containerd[1583]: time="2025-07-07T09:05:07.939893225Z" level=info msg="StartContainer for \"289104299ce3a03a585fe71909d163e4b999d1ed9290db2bab4779aa16242786\" returns successfully" Jul 7 09:05:07.954473 systemd[1]: cri-containerd-289104299ce3a03a585fe71909d163e4b999d1ed9290db2bab4779aa16242786.scope: Deactivated successfully. 
Jul 7 09:05:07.997549 containerd[1583]: time="2025-07-07T09:05:07.997490202Z" level=info msg="received exit event container_id:\"289104299ce3a03a585fe71909d163e4b999d1ed9290db2bab4779aa16242786\" id:\"289104299ce3a03a585fe71909d163e4b999d1ed9290db2bab4779aa16242786\" pid:3302 exited_at:{seconds:1751879107 nanos:959094088}" Jul 7 09:05:07.998772 containerd[1583]: time="2025-07-07T09:05:07.998734265Z" level=info msg="TaskExit event in podsandbox handler container_id:\"289104299ce3a03a585fe71909d163e4b999d1ed9290db2bab4779aa16242786\" id:\"289104299ce3a03a585fe71909d163e4b999d1ed9290db2bab4779aa16242786\" pid:3302 exited_at:{seconds:1751879107 nanos:959094088}" Jul 7 09:05:08.028371 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-289104299ce3a03a585fe71909d163e4b999d1ed9290db2bab4779aa16242786-rootfs.mount: Deactivated successfully. Jul 7 09:05:08.391328 containerd[1583]: time="2025-07-07T09:05:08.390750543Z" level=info msg="CreateContainer within sandbox \"598d209bd09cb05b1f0c7e71f007b5a4db2f21490b373e5fdf115dcede44d744\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 7 09:05:08.414321 containerd[1583]: time="2025-07-07T09:05:08.414240267Z" level=info msg="Container 0de7a8120064993db82005b22640d1aa612f2bf630a6709a8d1e716b1ce7295d: CDI devices from CRI Config.CDIDevices: []" Jul 7 09:05:08.422033 containerd[1583]: time="2025-07-07T09:05:08.421910394Z" level=info msg="CreateContainer within sandbox \"598d209bd09cb05b1f0c7e71f007b5a4db2f21490b373e5fdf115dcede44d744\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0de7a8120064993db82005b22640d1aa612f2bf630a6709a8d1e716b1ce7295d\"" Jul 7 09:05:08.424170 containerd[1583]: time="2025-07-07T09:05:08.422912037Z" level=info msg="StartContainer for \"0de7a8120064993db82005b22640d1aa612f2bf630a6709a8d1e716b1ce7295d\"" Jul 7 09:05:08.426315 containerd[1583]: time="2025-07-07T09:05:08.426119700Z" level=info msg="connecting to shim 
0de7a8120064993db82005b22640d1aa612f2bf630a6709a8d1e716b1ce7295d" address="unix:///run/containerd/s/7a3670dd215341ef455caacd3bbd6118dc320028b4432bae771c21ee84f9a0a8" protocol=ttrpc version=3 Jul 7 09:05:08.451523 systemd[1]: Started cri-containerd-0de7a8120064993db82005b22640d1aa612f2bf630a6709a8d1e716b1ce7295d.scope - libcontainer container 0de7a8120064993db82005b22640d1aa612f2bf630a6709a8d1e716b1ce7295d. Jul 7 09:05:08.498852 containerd[1583]: time="2025-07-07T09:05:08.498801254Z" level=info msg="StartContainer for \"0de7a8120064993db82005b22640d1aa612f2bf630a6709a8d1e716b1ce7295d\" returns successfully" Jul 7 09:05:08.517728 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 09:05:08.518553 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 09:05:08.519006 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 7 09:05:08.523001 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 09:05:08.526965 systemd[1]: cri-containerd-0de7a8120064993db82005b22640d1aa612f2bf630a6709a8d1e716b1ce7295d.scope: Deactivated successfully. Jul 7 09:05:08.531266 containerd[1583]: time="2025-07-07T09:05:08.531079912Z" level=info msg="received exit event container_id:\"0de7a8120064993db82005b22640d1aa612f2bf630a6709a8d1e716b1ce7295d\" id:\"0de7a8120064993db82005b22640d1aa612f2bf630a6709a8d1e716b1ce7295d\" pid:3345 exited_at:{seconds:1751879108 nanos:530169499}" Jul 7 09:05:08.531956 containerd[1583]: time="2025-07-07T09:05:08.531185816Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0de7a8120064993db82005b22640d1aa612f2bf630a6709a8d1e716b1ce7295d\" id:\"0de7a8120064993db82005b22640d1aa612f2bf630a6709a8d1e716b1ce7295d\" pid:3345 exited_at:{seconds:1751879108 nanos:530169499}" Jul 7 09:05:08.568551 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 7 09:05:09.404494 containerd[1583]: time="2025-07-07T09:05:09.402284321Z" level=info msg="CreateContainer within sandbox \"598d209bd09cb05b1f0c7e71f007b5a4db2f21490b373e5fdf115dcede44d744\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 7 09:05:09.482317 containerd[1583]: time="2025-07-07T09:05:09.480015719Z" level=info msg="Container 8e1c3527c7d4c449dfcca82d65f542275acbfa7d40769693ef3613db27446f71: CDI devices from CRI Config.CDIDevices: []" Jul 7 09:05:09.503338 containerd[1583]: time="2025-07-07T09:05:09.503256141Z" level=info msg="CreateContainer within sandbox \"598d209bd09cb05b1f0c7e71f007b5a4db2f21490b373e5fdf115dcede44d744\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8e1c3527c7d4c449dfcca82d65f542275acbfa7d40769693ef3613db27446f71\"" Jul 7 09:05:09.504488 containerd[1583]: time="2025-07-07T09:05:09.504451187Z" level=info msg="StartContainer for \"8e1c3527c7d4c449dfcca82d65f542275acbfa7d40769693ef3613db27446f71\"" Jul 7 09:05:09.509314 containerd[1583]: time="2025-07-07T09:05:09.507854776Z" level=info msg="connecting to shim 8e1c3527c7d4c449dfcca82d65f542275acbfa7d40769693ef3613db27446f71" address="unix:///run/containerd/s/7a3670dd215341ef455caacd3bbd6118dc320028b4432bae771c21ee84f9a0a8" protocol=ttrpc version=3 Jul 7 09:05:09.547560 systemd[1]: Started cri-containerd-8e1c3527c7d4c449dfcca82d65f542275acbfa7d40769693ef3613db27446f71.scope - libcontainer container 8e1c3527c7d4c449dfcca82d65f542275acbfa7d40769693ef3613db27446f71. Jul 7 09:05:09.618498 systemd[1]: cri-containerd-8e1c3527c7d4c449dfcca82d65f542275acbfa7d40769693ef3613db27446f71.scope: Deactivated successfully. 
Jul 7 09:05:09.624078 containerd[1583]: time="2025-07-07T09:05:09.623957321Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8e1c3527c7d4c449dfcca82d65f542275acbfa7d40769693ef3613db27446f71\" id:\"8e1c3527c7d4c449dfcca82d65f542275acbfa7d40769693ef3613db27446f71\" pid:3394 exited_at:{seconds:1751879109 nanos:622772033}" Jul 7 09:05:09.624181 containerd[1583]: time="2025-07-07T09:05:09.624144295Z" level=info msg="received exit event container_id:\"8e1c3527c7d4c449dfcca82d65f542275acbfa7d40769693ef3613db27446f71\" id:\"8e1c3527c7d4c449dfcca82d65f542275acbfa7d40769693ef3613db27446f71\" pid:3394 exited_at:{seconds:1751879109 nanos:622772033}" Jul 7 09:05:09.636199 containerd[1583]: time="2025-07-07T09:05:09.636149181Z" level=info msg="StartContainer for \"8e1c3527c7d4c449dfcca82d65f542275acbfa7d40769693ef3613db27446f71\" returns successfully" Jul 7 09:05:09.656943 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e1c3527c7d4c449dfcca82d65f542275acbfa7d40769693ef3613db27446f71-rootfs.mount: Deactivated successfully. Jul 7 09:05:10.073273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1265695482.mount: Deactivated successfully. Jul 7 09:05:10.416116 containerd[1583]: time="2025-07-07T09:05:10.415956796Z" level=info msg="CreateContainer within sandbox \"598d209bd09cb05b1f0c7e71f007b5a4db2f21490b373e5fdf115dcede44d744\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 7 09:05:10.489175 containerd[1583]: time="2025-07-07T09:05:10.489126159Z" level=info msg="Container d84395d9023b404e9b90b89a7da8abd89003ea907efe1709bc9c78fa5e032aee: CDI devices from CRI Config.CDIDevices: []" Jul 7 09:05:10.491544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3396079908.mount: Deactivated successfully. 
Jul 7 09:05:10.504119 containerd[1583]: time="2025-07-07T09:05:10.504062227Z" level=info msg="CreateContainer within sandbox \"598d209bd09cb05b1f0c7e71f007b5a4db2f21490b373e5fdf115dcede44d744\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d84395d9023b404e9b90b89a7da8abd89003ea907efe1709bc9c78fa5e032aee\"" Jul 7 09:05:10.506086 containerd[1583]: time="2025-07-07T09:05:10.506023333Z" level=info msg="StartContainer for \"d84395d9023b404e9b90b89a7da8abd89003ea907efe1709bc9c78fa5e032aee\"" Jul 7 09:05:10.508706 containerd[1583]: time="2025-07-07T09:05:10.507742023Z" level=info msg="connecting to shim d84395d9023b404e9b90b89a7da8abd89003ea907efe1709bc9c78fa5e032aee" address="unix:///run/containerd/s/7a3670dd215341ef455caacd3bbd6118dc320028b4432bae771c21ee84f9a0a8" protocol=ttrpc version=3 Jul 7 09:05:10.560521 systemd[1]: Started cri-containerd-d84395d9023b404e9b90b89a7da8abd89003ea907efe1709bc9c78fa5e032aee.scope - libcontainer container d84395d9023b404e9b90b89a7da8abd89003ea907efe1709bc9c78fa5e032aee. Jul 7 09:05:10.624405 systemd[1]: cri-containerd-d84395d9023b404e9b90b89a7da8abd89003ea907efe1709bc9c78fa5e032aee.scope: Deactivated successfully. 
Jul 7 09:05:10.627944 containerd[1583]: time="2025-07-07T09:05:10.627786515Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d84395d9023b404e9b90b89a7da8abd89003ea907efe1709bc9c78fa5e032aee\" id:\"d84395d9023b404e9b90b89a7da8abd89003ea907efe1709bc9c78fa5e032aee\" pid:3447 exited_at:{seconds:1751879110 nanos:624792661}" Jul 7 09:05:10.630084 containerd[1583]: time="2025-07-07T09:05:10.630030403Z" level=info msg="received exit event container_id:\"d84395d9023b404e9b90b89a7da8abd89003ea907efe1709bc9c78fa5e032aee\" id:\"d84395d9023b404e9b90b89a7da8abd89003ea907efe1709bc9c78fa5e032aee\" pid:3447 exited_at:{seconds:1751879110 nanos:624792661}" Jul 7 09:05:10.659886 containerd[1583]: time="2025-07-07T09:05:10.659834191Z" level=info msg="StartContainer for \"d84395d9023b404e9b90b89a7da8abd89003ea907efe1709bc9c78fa5e032aee\" returns successfully" Jul 7 09:05:10.999250 containerd[1583]: time="2025-07-07T09:05:10.999185471Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 09:05:11.001165 containerd[1583]: time="2025-07-07T09:05:11.001119880Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 7 09:05:11.002794 containerd[1583]: time="2025-07-07T09:05:11.002721648Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 09:05:11.005657 containerd[1583]: time="2025-07-07T09:05:11.005110071Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", 
repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.178154307s" Jul 7 09:05:11.005657 containerd[1583]: time="2025-07-07T09:05:11.005155491Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 7 09:05:11.009542 containerd[1583]: time="2025-07-07T09:05:11.009484161Z" level=info msg="CreateContainer within sandbox \"fd7c3722e86cbf0ab938b08f4e68cb946a50f113ff6fb823d9d5d1e88f2f7618\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 7 09:05:11.027153 containerd[1583]: time="2025-07-07T09:05:11.026439546Z" level=info msg="Container 856c7d8fa63703686327dc6c86d2251df99e31842b4a13528a126378ebfb90e4: CDI devices from CRI Config.CDIDevices: []" Jul 7 09:05:11.041648 containerd[1583]: time="2025-07-07T09:05:11.041611630Z" level=info msg="CreateContainer within sandbox \"fd7c3722e86cbf0ab938b08f4e68cb946a50f113ff6fb823d9d5d1e88f2f7618\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"856c7d8fa63703686327dc6c86d2251df99e31842b4a13528a126378ebfb90e4\"" Jul 7 09:05:11.042815 containerd[1583]: time="2025-07-07T09:05:11.042787073Z" level=info msg="StartContainer for \"856c7d8fa63703686327dc6c86d2251df99e31842b4a13528a126378ebfb90e4\"" Jul 7 09:05:11.044228 containerd[1583]: time="2025-07-07T09:05:11.044195783Z" level=info msg="connecting to shim 856c7d8fa63703686327dc6c86d2251df99e31842b4a13528a126378ebfb90e4" address="unix:///run/containerd/s/d54bd349d6a223eb795a93bc152860ef801c451112605f03fc1d4c271aeb7c1a" protocol=ttrpc version=3 Jul 7 09:05:11.062325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d84395d9023b404e9b90b89a7da8abd89003ea907efe1709bc9c78fa5e032aee-rootfs.mount: Deactivated successfully. 
Jul 7 09:05:11.076567 systemd[1]: Started cri-containerd-856c7d8fa63703686327dc6c86d2251df99e31842b4a13528a126378ebfb90e4.scope - libcontainer container 856c7d8fa63703686327dc6c86d2251df99e31842b4a13528a126378ebfb90e4. Jul 7 09:05:11.139723 containerd[1583]: time="2025-07-07T09:05:11.139671284Z" level=info msg="StartContainer for \"856c7d8fa63703686327dc6c86d2251df99e31842b4a13528a126378ebfb90e4\" returns successfully" Jul 7 09:05:11.432386 containerd[1583]: time="2025-07-07T09:05:11.432251343Z" level=info msg="CreateContainer within sandbox \"598d209bd09cb05b1f0c7e71f007b5a4db2f21490b373e5fdf115dcede44d744\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 7 09:05:11.459443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2998353857.mount: Deactivated successfully. Jul 7 09:05:11.460062 containerd[1583]: time="2025-07-07T09:05:11.460013992Z" level=info msg="Container 014b7780cd63c04c51bb209421d2d6f27e39ac36f5a06ea727b8492e9f512819: CDI devices from CRI Config.CDIDevices: []" Jul 7 09:05:11.504764 containerd[1583]: time="2025-07-07T09:05:11.504715835Z" level=info msg="CreateContainer within sandbox \"598d209bd09cb05b1f0c7e71f007b5a4db2f21490b373e5fdf115dcede44d744\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"014b7780cd63c04c51bb209421d2d6f27e39ac36f5a06ea727b8492e9f512819\"" Jul 7 09:05:11.511324 containerd[1583]: time="2025-07-07T09:05:11.508009705Z" level=info msg="StartContainer for \"014b7780cd63c04c51bb209421d2d6f27e39ac36f5a06ea727b8492e9f512819\"" Jul 7 09:05:11.511324 containerd[1583]: time="2025-07-07T09:05:11.509216423Z" level=info msg="connecting to shim 014b7780cd63c04c51bb209421d2d6f27e39ac36f5a06ea727b8492e9f512819" address="unix:///run/containerd/s/7a3670dd215341ef455caacd3bbd6118dc320028b4432bae771c21ee84f9a0a8" protocol=ttrpc version=3 Jul 7 09:05:11.560586 systemd[1]: Started cri-containerd-014b7780cd63c04c51bb209421d2d6f27e39ac36f5a06ea727b8492e9f512819.scope - libcontainer container 
014b7780cd63c04c51bb209421d2d6f27e39ac36f5a06ea727b8492e9f512819. Jul 7 09:05:11.648326 kubelet[2889]: I0707 09:05:11.647602 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-zr6bp" podStartSLOduration=1.5766701090000002 podStartE2EDuration="14.647259647s" podCreationTimestamp="2025-07-07 09:04:57 +0000 UTC" firstStartedPulling="2025-07-07 09:04:57.936103544 +0000 UTC m=+6.979994561" lastFinishedPulling="2025-07-07 09:05:11.006693077 +0000 UTC m=+20.050584099" observedRunningTime="2025-07-07 09:05:11.578881908 +0000 UTC m=+20.622772969" watchObservedRunningTime="2025-07-07 09:05:11.647259647 +0000 UTC m=+20.691150671" Jul 7 09:05:11.676200 containerd[1583]: time="2025-07-07T09:05:11.676146063Z" level=info msg="StartContainer for \"014b7780cd63c04c51bb209421d2d6f27e39ac36f5a06ea727b8492e9f512819\" returns successfully" Jul 7 09:05:12.096304 containerd[1583]: time="2025-07-07T09:05:12.096236566Z" level=info msg="TaskExit event in podsandbox handler container_id:\"014b7780cd63c04c51bb209421d2d6f27e39ac36f5a06ea727b8492e9f512819\" id:\"33c0c4ea847e7244f80c3fdabc14b42b9a957f390ad87d6788d1f1f396adf7c3\" pid:3552 exited_at:{seconds:1751879112 nanos:95895563}" Jul 7 09:05:12.137312 kubelet[2889]: I0707 09:05:12.137166 2889 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 7 09:05:12.299325 systemd[1]: Created slice kubepods-burstable-pod5602e730_4050_47f2_acd9_54278388a80a.slice - libcontainer container kubepods-burstable-pod5602e730_4050_47f2_acd9_54278388a80a.slice. Jul 7 09:05:12.315258 systemd[1]: Created slice kubepods-burstable-podf7a422b1_ac73_4bd4_b956_f9f279fef0fc.slice - libcontainer container kubepods-burstable-podf7a422b1_ac73_4bd4_b956_f9f279fef0fc.slice. 
Jul 7 09:05:12.389467 kubelet[2889]: I0707 09:05:12.388968 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5602e730-4050-47f2-acd9-54278388a80a-config-volume\") pod \"coredns-668d6bf9bc-k9zrt\" (UID: \"5602e730-4050-47f2-acd9-54278388a80a\") " pod="kube-system/coredns-668d6bf9bc-k9zrt" Jul 7 09:05:12.389467 kubelet[2889]: I0707 09:05:12.389028 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7a422b1-ac73-4bd4-b956-f9f279fef0fc-config-volume\") pod \"coredns-668d6bf9bc-cqwf5\" (UID: \"f7a422b1-ac73-4bd4-b956-f9f279fef0fc\") " pod="kube-system/coredns-668d6bf9bc-cqwf5" Jul 7 09:05:12.389467 kubelet[2889]: I0707 09:05:12.389064 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r75bc\" (UniqueName: \"kubernetes.io/projected/f7a422b1-ac73-4bd4-b956-f9f279fef0fc-kube-api-access-r75bc\") pod \"coredns-668d6bf9bc-cqwf5\" (UID: \"f7a422b1-ac73-4bd4-b956-f9f279fef0fc\") " pod="kube-system/coredns-668d6bf9bc-cqwf5" Jul 7 09:05:12.389467 kubelet[2889]: I0707 09:05:12.389097 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd2vl\" (UniqueName: \"kubernetes.io/projected/5602e730-4050-47f2-acd9-54278388a80a-kube-api-access-kd2vl\") pod \"coredns-668d6bf9bc-k9zrt\" (UID: \"5602e730-4050-47f2-acd9-54278388a80a\") " pod="kube-system/coredns-668d6bf9bc-k9zrt" Jul 7 09:05:12.392040 kubelet[2889]: W0707 09:05:12.391486 2889 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:srv-djpnf.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-djpnf.gb1.brightbox.com' and this 
object Jul 7 09:05:12.392040 kubelet[2889]: E0707 09:05:12.391567 2889 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:srv-djpnf.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-djpnf.gb1.brightbox.com' and this object" logger="UnhandledError" Jul 7 09:05:12.392040 kubelet[2889]: I0707 09:05:12.391632 2889 status_manager.go:890] "Failed to get status for pod" podUID="5602e730-4050-47f2-acd9-54278388a80a" pod="kube-system/coredns-668d6bf9bc-k9zrt" err="pods \"coredns-668d6bf9bc-k9zrt\" is forbidden: User \"system:node:srv-djpnf.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-djpnf.gb1.brightbox.com' and this object" Jul 7 09:05:13.509523 containerd[1583]: time="2025-07-07T09:05:13.509384628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k9zrt,Uid:5602e730-4050-47f2-acd9-54278388a80a,Namespace:kube-system,Attempt:0,}" Jul 7 09:05:13.522932 containerd[1583]: time="2025-07-07T09:05:13.522851917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cqwf5,Uid:f7a422b1-ac73-4bd4-b956-f9f279fef0fc,Namespace:kube-system,Attempt:0,}" Jul 7 09:05:14.836581 systemd-networkd[1513]: cilium_host: Link UP Jul 7 09:05:14.836869 systemd-networkd[1513]: cilium_net: Link UP Jul 7 09:05:14.838391 systemd-networkd[1513]: cilium_net: Gained carrier Jul 7 09:05:14.838763 systemd-networkd[1513]: cilium_host: Gained carrier Jul 7 09:05:14.997417 systemd-networkd[1513]: cilium_net: Gained IPv6LL Jul 7 09:05:15.002172 systemd-networkd[1513]: cilium_vxlan: Link UP Jul 7 09:05:15.002186 systemd-networkd[1513]: cilium_vxlan: Gained carrier Jul 7 09:05:15.647334 kernel: NET: Registered PF_ALG protocol 
family Jul 7 09:05:15.677718 systemd-networkd[1513]: cilium_host: Gained IPv6LL Jul 7 09:05:16.638486 systemd-networkd[1513]: cilium_vxlan: Gained IPv6LL Jul 7 09:05:16.719531 systemd-networkd[1513]: lxc_health: Link UP Jul 7 09:05:16.726187 systemd-networkd[1513]: lxc_health: Gained carrier Jul 7 09:05:17.123926 kernel: eth0: renamed from tmpf9811 Jul 7 09:05:17.123082 systemd-networkd[1513]: lxca3ce742de577: Link UP Jul 7 09:05:17.123729 systemd-networkd[1513]: lxc7cd91de9a0b4: Link UP Jul 7 09:05:17.126261 systemd-networkd[1513]: lxca3ce742de577: Gained carrier Jul 7 09:05:17.132430 kernel: eth0: renamed from tmp013e4 Jul 7 09:05:17.135493 systemd-networkd[1513]: lxc7cd91de9a0b4: Gained carrier Jul 7 09:05:17.567312 kubelet[2889]: I0707 09:05:17.567165 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2zcgk" podStartSLOduration=10.460093338 podStartE2EDuration="20.567099964s" podCreationTimestamp="2025-07-07 09:04:57 +0000 UTC" firstStartedPulling="2025-07-07 09:04:57.718640103 +0000 UTC m=+6.762531126" lastFinishedPulling="2025-07-07 09:05:07.825646721 +0000 UTC m=+16.869537752" observedRunningTime="2025-07-07 09:05:12.519533936 +0000 UTC m=+21.563424977" watchObservedRunningTime="2025-07-07 09:05:17.567099964 +0000 UTC m=+26.610990986" Jul 7 09:05:17.981574 systemd-networkd[1513]: lxc_health: Gained IPv6LL Jul 7 09:05:18.667318 kubelet[2889]: I0707 09:05:18.666911 2889 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 09:05:19.006691 systemd-networkd[1513]: lxca3ce742de577: Gained IPv6LL Jul 7 09:05:19.069691 systemd-networkd[1513]: lxc7cd91de9a0b4: Gained IPv6LL Jul 7 09:05:22.899044 containerd[1583]: time="2025-07-07T09:05:22.898917656Z" level=info msg="connecting to shim f9811007f2661e243d8ad929cc2b3112bea73542c19ae4425bef4459fb59e632" address="unix:///run/containerd/s/6ddcaa743d08d3c43c074f1b8f9b5622bc732dd8fd8f496f9441f8e5efe8aca3" namespace=k8s.io protocol=ttrpc version=3 Jul 7 
09:05:22.917929 containerd[1583]: time="2025-07-07T09:05:22.917832921Z" level=info msg="connecting to shim 013e4571fb003b708f8102bd928bbeef4660cddd51d8b5e5e213e9f0ab4c729b" address="unix:///run/containerd/s/d305ad4d72aff2cdc27f840bd23707f7f9b904f6e20b1eb3afb7e71daed1fa29" namespace=k8s.io protocol=ttrpc version=3 Jul 7 09:05:22.971736 systemd[1]: Started cri-containerd-f9811007f2661e243d8ad929cc2b3112bea73542c19ae4425bef4459fb59e632.scope - libcontainer container f9811007f2661e243d8ad929cc2b3112bea73542c19ae4425bef4459fb59e632. Jul 7 09:05:22.986191 systemd[1]: Started cri-containerd-013e4571fb003b708f8102bd928bbeef4660cddd51d8b5e5e213e9f0ab4c729b.scope - libcontainer container 013e4571fb003b708f8102bd928bbeef4660cddd51d8b5e5e213e9f0ab4c729b. Jul 7 09:05:23.113488 containerd[1583]: time="2025-07-07T09:05:23.113427868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cqwf5,Uid:f7a422b1-ac73-4bd4-b956-f9f279fef0fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"013e4571fb003b708f8102bd928bbeef4660cddd51d8b5e5e213e9f0ab4c729b\"" Jul 7 09:05:23.120185 containerd[1583]: time="2025-07-07T09:05:23.120154718Z" level=info msg="CreateContainer within sandbox \"013e4571fb003b708f8102bd928bbeef4660cddd51d8b5e5e213e9f0ab4c729b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 09:05:23.139544 containerd[1583]: time="2025-07-07T09:05:23.139369465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k9zrt,Uid:5602e730-4050-47f2-acd9-54278388a80a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9811007f2661e243d8ad929cc2b3112bea73542c19ae4425bef4459fb59e632\"" Jul 7 09:05:23.149122 containerd[1583]: time="2025-07-07T09:05:23.148413682Z" level=info msg="CreateContainer within sandbox \"f9811007f2661e243d8ad929cc2b3112bea73542c19ae4425bef4459fb59e632\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 09:05:23.149594 containerd[1583]: time="2025-07-07T09:05:23.149552925Z" level=info 
msg="Container 56e2ae7d572f04a731b7da3934ac1922cd1bde85eb12294d1ebdefba6e30d95b: CDI devices from CRI Config.CDIDevices: []" Jul 7 09:05:23.159432 containerd[1583]: time="2025-07-07T09:05:23.159398350Z" level=info msg="CreateContainer within sandbox \"013e4571fb003b708f8102bd928bbeef4660cddd51d8b5e5e213e9f0ab4c729b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"56e2ae7d572f04a731b7da3934ac1922cd1bde85eb12294d1ebdefba6e30d95b\"" Jul 7 09:05:23.160895 containerd[1583]: time="2025-07-07T09:05:23.160862786Z" level=info msg="StartContainer for \"56e2ae7d572f04a731b7da3934ac1922cd1bde85eb12294d1ebdefba6e30d95b\"" Jul 7 09:05:23.161410 containerd[1583]: time="2025-07-07T09:05:23.161370564Z" level=info msg="Container 4367ec43dbd6b3d3e861adb2a5f7e71eafb4efdfa149dd77d5ec0bf2f8d2fe6f: CDI devices from CRI Config.CDIDevices: []" Jul 7 09:05:23.163056 containerd[1583]: time="2025-07-07T09:05:23.163010715Z" level=info msg="connecting to shim 56e2ae7d572f04a731b7da3934ac1922cd1bde85eb12294d1ebdefba6e30d95b" address="unix:///run/containerd/s/d305ad4d72aff2cdc27f840bd23707f7f9b904f6e20b1eb3afb7e71daed1fa29" protocol=ttrpc version=3 Jul 7 09:05:23.169790 containerd[1583]: time="2025-07-07T09:05:23.169757807Z" level=info msg="CreateContainer within sandbox \"f9811007f2661e243d8ad929cc2b3112bea73542c19ae4425bef4459fb59e632\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4367ec43dbd6b3d3e861adb2a5f7e71eafb4efdfa149dd77d5ec0bf2f8d2fe6f\"" Jul 7 09:05:23.171712 containerd[1583]: time="2025-07-07T09:05:23.171500883Z" level=info msg="StartContainer for \"4367ec43dbd6b3d3e861adb2a5f7e71eafb4efdfa149dd77d5ec0bf2f8d2fe6f\"" Jul 7 09:05:23.174822 containerd[1583]: time="2025-07-07T09:05:23.174779194Z" level=info msg="connecting to shim 4367ec43dbd6b3d3e861adb2a5f7e71eafb4efdfa149dd77d5ec0bf2f8d2fe6f" address="unix:///run/containerd/s/6ddcaa743d08d3c43c074f1b8f9b5622bc732dd8fd8f496f9441f8e5efe8aca3" protocol=ttrpc version=3 Jul 7 
09:05:23.195551 systemd[1]: Started cri-containerd-56e2ae7d572f04a731b7da3934ac1922cd1bde85eb12294d1ebdefba6e30d95b.scope - libcontainer container 56e2ae7d572f04a731b7da3934ac1922cd1bde85eb12294d1ebdefba6e30d95b. Jul 7 09:05:23.213529 systemd[1]: Started cri-containerd-4367ec43dbd6b3d3e861adb2a5f7e71eafb4efdfa149dd77d5ec0bf2f8d2fe6f.scope - libcontainer container 4367ec43dbd6b3d3e861adb2a5f7e71eafb4efdfa149dd77d5ec0bf2f8d2fe6f. Jul 7 09:05:23.267758 containerd[1583]: time="2025-07-07T09:05:23.267673931Z" level=info msg="StartContainer for \"56e2ae7d572f04a731b7da3934ac1922cd1bde85eb12294d1ebdefba6e30d95b\" returns successfully" Jul 7 09:05:23.288586 containerd[1583]: time="2025-07-07T09:05:23.288533210Z" level=info msg="StartContainer for \"4367ec43dbd6b3d3e861adb2a5f7e71eafb4efdfa149dd77d5ec0bf2f8d2fe6f\" returns successfully" Jul 7 09:05:23.585730 kubelet[2889]: I0707 09:05:23.585656 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-k9zrt" podStartSLOduration=26.585601862 podStartE2EDuration="26.585601862s" podCreationTimestamp="2025-07-07 09:04:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 09:05:23.584462157 +0000 UTC m=+32.628353191" watchObservedRunningTime="2025-07-07 09:05:23.585601862 +0000 UTC m=+32.629492885" Jul 7 09:05:23.588024 kubelet[2889]: I0707 09:05:23.586664 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cqwf5" podStartSLOduration=26.58665332 podStartE2EDuration="26.58665332s" podCreationTimestamp="2025-07-07 09:04:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 09:05:23.550719018 +0000 UTC m=+32.594610062" watchObservedRunningTime="2025-07-07 09:05:23.58665332 +0000 UTC m=+32.630544359" Jul 7 09:05:23.881621 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1928917191.mount: Deactivated successfully. Jul 7 09:06:10.930655 systemd[1]: Started sshd@9-10.230.11.74:22-139.178.89.65:59122.service - OpenSSH per-connection server daemon (139.178.89.65:59122). Jul 7 09:06:11.878125 sshd[4215]: Accepted publickey for core from 139.178.89.65 port 59122 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:06:11.880818 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:06:11.894945 systemd-logind[1560]: New session 12 of user core. Jul 7 09:06:11.902686 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 09:06:13.090315 sshd[4218]: Connection closed by 139.178.89.65 port 59122 Jul 7 09:06:13.089262 sshd-session[4215]: pam_unix(sshd:session): session closed for user core Jul 7 09:06:13.095622 systemd-logind[1560]: Session 12 logged out. Waiting for processes to exit. Jul 7 09:06:13.096686 systemd[1]: sshd@9-10.230.11.74:22-139.178.89.65:59122.service: Deactivated successfully. Jul 7 09:06:13.098856 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 09:06:13.101900 systemd-logind[1560]: Removed session 12. Jul 7 09:06:18.248726 systemd[1]: Started sshd@10-10.230.11.74:22-139.178.89.65:59134.service - OpenSSH per-connection server daemon (139.178.89.65:59134). Jul 7 09:06:19.204529 sshd[4231]: Accepted publickey for core from 139.178.89.65 port 59134 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:06:19.205689 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:06:19.213870 systemd-logind[1560]: New session 13 of user core. Jul 7 09:06:19.219549 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jul 7 09:06:20.000320 sshd[4233]: Connection closed by 139.178.89.65 port 59134 Jul 7 09:06:19.999563 sshd-session[4231]: pam_unix(sshd:session): session closed for user core Jul 7 09:06:20.007912 systemd[1]: sshd@10-10.230.11.74:22-139.178.89.65:59134.service: Deactivated successfully. Jul 7 09:06:20.012940 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 09:06:20.016589 systemd-logind[1560]: Session 13 logged out. Waiting for processes to exit. Jul 7 09:06:20.019083 systemd-logind[1560]: Removed session 13. Jul 7 09:06:25.158818 systemd[1]: Started sshd@11-10.230.11.74:22-139.178.89.65:50806.service - OpenSSH per-connection server daemon (139.178.89.65:50806). Jul 7 09:06:26.065229 sshd[4245]: Accepted publickey for core from 139.178.89.65 port 50806 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:06:26.067138 sshd-session[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:06:26.075553 systemd-logind[1560]: New session 14 of user core. Jul 7 09:06:26.082500 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 09:06:26.776379 sshd[4247]: Connection closed by 139.178.89.65 port 50806 Jul 7 09:06:26.777214 sshd-session[4245]: pam_unix(sshd:session): session closed for user core Jul 7 09:06:26.782277 systemd[1]: sshd@11-10.230.11.74:22-139.178.89.65:50806.service: Deactivated successfully. Jul 7 09:06:26.784920 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 09:06:26.786597 systemd-logind[1560]: Session 14 logged out. Waiting for processes to exit. Jul 7 09:06:26.788597 systemd-logind[1560]: Removed session 14. Jul 7 09:06:31.938810 systemd[1]: Started sshd@12-10.230.11.74:22-139.178.89.65:42968.service - OpenSSH per-connection server daemon (139.178.89.65:42968). 
Jul 7 09:06:32.859940 sshd[4261]: Accepted publickey for core from 139.178.89.65 port 42968 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:06:32.862111 sshd-session[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:06:32.869991 systemd-logind[1560]: New session 15 of user core. Jul 7 09:06:32.884680 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 09:06:33.562976 sshd[4263]: Connection closed by 139.178.89.65 port 42968 Jul 7 09:06:33.562730 sshd-session[4261]: pam_unix(sshd:session): session closed for user core Jul 7 09:06:33.568953 systemd[1]: sshd@12-10.230.11.74:22-139.178.89.65:42968.service: Deactivated successfully. Jul 7 09:06:33.573121 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 09:06:33.574974 systemd-logind[1560]: Session 15 logged out. Waiting for processes to exit. Jul 7 09:06:33.578172 systemd-logind[1560]: Removed session 15. Jul 7 09:06:33.721244 systemd[1]: Started sshd@13-10.230.11.74:22-139.178.89.65:42982.service - OpenSSH per-connection server daemon (139.178.89.65:42982). Jul 7 09:06:34.623391 sshd[4276]: Accepted publickey for core from 139.178.89.65 port 42982 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:06:34.626088 sshd-session[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:06:34.633952 systemd-logind[1560]: New session 16 of user core. Jul 7 09:06:34.644538 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 09:06:35.384110 sshd[4278]: Connection closed by 139.178.89.65 port 42982 Jul 7 09:06:35.383968 sshd-session[4276]: pam_unix(sshd:session): session closed for user core Jul 7 09:06:35.388784 systemd-logind[1560]: Session 16 logged out. Waiting for processes to exit. Jul 7 09:06:35.389847 systemd[1]: sshd@13-10.230.11.74:22-139.178.89.65:42982.service: Deactivated successfully. 
Jul 7 09:06:35.392917 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 09:06:35.395902 systemd-logind[1560]: Removed session 16. Jul 7 09:06:35.541384 systemd[1]: Started sshd@14-10.230.11.74:22-139.178.89.65:42988.service - OpenSSH per-connection server daemon (139.178.89.65:42988). Jul 7 09:06:36.453249 sshd[4288]: Accepted publickey for core from 139.178.89.65 port 42988 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:06:36.455307 sshd-session[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:06:36.463633 systemd-logind[1560]: New session 17 of user core. Jul 7 09:06:36.469511 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 09:06:37.160549 sshd[4290]: Connection closed by 139.178.89.65 port 42988 Jul 7 09:06:37.161499 sshd-session[4288]: pam_unix(sshd:session): session closed for user core Jul 7 09:06:37.167303 systemd[1]: sshd@14-10.230.11.74:22-139.178.89.65:42988.service: Deactivated successfully. Jul 7 09:06:37.170034 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 09:06:37.171923 systemd-logind[1560]: Session 17 logged out. Waiting for processes to exit. Jul 7 09:06:37.174094 systemd-logind[1560]: Removed session 17. Jul 7 09:06:42.317596 systemd[1]: Started sshd@15-10.230.11.74:22-139.178.89.65:54682.service - OpenSSH per-connection server daemon (139.178.89.65:54682). Jul 7 09:06:43.257201 sshd[4302]: Accepted publickey for core from 139.178.89.65 port 54682 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:06:43.259216 sshd-session[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:06:43.267123 systemd-logind[1560]: New session 18 of user core. Jul 7 09:06:43.276514 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jul 7 09:06:43.966855 sshd[4304]: Connection closed by 139.178.89.65 port 54682 Jul 7 09:06:43.966719 sshd-session[4302]: pam_unix(sshd:session): session closed for user core Jul 7 09:06:43.972173 systemd-logind[1560]: Session 18 logged out. Waiting for processes to exit. Jul 7 09:06:43.972550 systemd[1]: sshd@15-10.230.11.74:22-139.178.89.65:54682.service: Deactivated successfully. Jul 7 09:06:43.975123 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 09:06:43.978003 systemd-logind[1560]: Removed session 18. Jul 7 09:06:49.130530 systemd[1]: Started sshd@16-10.230.11.74:22-139.178.89.65:54692.service - OpenSSH per-connection server daemon (139.178.89.65:54692). Jul 7 09:06:50.046777 sshd[4317]: Accepted publickey for core from 139.178.89.65 port 54692 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:06:50.049941 sshd-session[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:06:50.058357 systemd-logind[1560]: New session 19 of user core. Jul 7 09:06:50.066591 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 09:06:50.749459 sshd[4319]: Connection closed by 139.178.89.65 port 54692 Jul 7 09:06:50.749284 sshd-session[4317]: pam_unix(sshd:session): session closed for user core Jul 7 09:06:50.754726 systemd[1]: sshd@16-10.230.11.74:22-139.178.89.65:54692.service: Deactivated successfully. Jul 7 09:06:50.757232 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 09:06:50.760418 systemd-logind[1560]: Session 19 logged out. Waiting for processes to exit. Jul 7 09:06:50.761910 systemd-logind[1560]: Removed session 19. Jul 7 09:06:50.905755 systemd[1]: Started sshd@17-10.230.11.74:22-139.178.89.65:41740.service - OpenSSH per-connection server daemon (139.178.89.65:41740). 
Jul 7 09:06:51.822040 sshd[4331]: Accepted publickey for core from 139.178.89.65 port 41740 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:06:51.823937 sshd-session[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:06:51.835355 systemd-logind[1560]: New session 20 of user core. Jul 7 09:06:51.842547 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 7 09:06:52.856702 sshd[4335]: Connection closed by 139.178.89.65 port 41740 Jul 7 09:06:52.857630 sshd-session[4331]: pam_unix(sshd:session): session closed for user core Jul 7 09:06:52.873829 systemd[1]: sshd@17-10.230.11.74:22-139.178.89.65:41740.service: Deactivated successfully. Jul 7 09:06:52.877608 systemd[1]: session-20.scope: Deactivated successfully. Jul 7 09:06:52.881430 systemd-logind[1560]: Session 20 logged out. Waiting for processes to exit. Jul 7 09:06:52.883740 systemd-logind[1560]: Removed session 20. Jul 7 09:06:53.012749 systemd[1]: Started sshd@18-10.230.11.74:22-139.178.89.65:41748.service - OpenSSH per-connection server daemon (139.178.89.65:41748). Jul 7 09:06:53.957864 sshd[4346]: Accepted publickey for core from 139.178.89.65 port 41748 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:06:53.959895 sshd-session[4346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:06:53.969404 systemd-logind[1560]: New session 21 of user core. Jul 7 09:06:53.975536 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 7 09:06:55.803418 sshd[4348]: Connection closed by 139.178.89.65 port 41748 Jul 7 09:06:55.804776 sshd-session[4346]: pam_unix(sshd:session): session closed for user core Jul 7 09:06:55.810347 systemd[1]: sshd@18-10.230.11.74:22-139.178.89.65:41748.service: Deactivated successfully. Jul 7 09:06:55.815872 systemd[1]: session-21.scope: Deactivated successfully. Jul 7 09:06:55.818164 systemd-logind[1560]: Session 21 logged out. 
Waiting for processes to exit. Jul 7 09:06:55.821688 systemd-logind[1560]: Removed session 21. Jul 7 09:06:55.960079 systemd[1]: Started sshd@19-10.230.11.74:22-139.178.89.65:41756.service - OpenSSH per-connection server daemon (139.178.89.65:41756). Jul 7 09:06:56.876220 sshd[4365]: Accepted publickey for core from 139.178.89.65 port 41756 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:06:56.878111 sshd-session[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:06:56.885176 systemd-logind[1560]: New session 22 of user core. Jul 7 09:06:56.896596 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 7 09:06:57.807344 sshd[4367]: Connection closed by 139.178.89.65 port 41756 Jul 7 09:06:57.808128 sshd-session[4365]: pam_unix(sshd:session): session closed for user core Jul 7 09:06:57.814902 systemd[1]: sshd@19-10.230.11.74:22-139.178.89.65:41756.service: Deactivated successfully. Jul 7 09:06:57.818138 systemd[1]: session-22.scope: Deactivated successfully. Jul 7 09:06:57.819801 systemd-logind[1560]: Session 22 logged out. Waiting for processes to exit. Jul 7 09:06:57.822025 systemd-logind[1560]: Removed session 22. Jul 7 09:06:57.964477 systemd[1]: Started sshd@20-10.230.11.74:22-139.178.89.65:41770.service - OpenSSH per-connection server daemon (139.178.89.65:41770). Jul 7 09:06:58.871684 sshd[4377]: Accepted publickey for core from 139.178.89.65 port 41770 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:06:58.873802 sshd-session[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:06:58.880590 systemd-logind[1560]: New session 23 of user core. Jul 7 09:06:58.889559 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 7 09:06:59.575036 sshd[4381]: Connection closed by 139.178.89.65 port 41770 Jul 7 09:06:59.575903 sshd-session[4377]: pam_unix(sshd:session): session closed for user core Jul 7 09:06:59.581074 systemd[1]: sshd@20-10.230.11.74:22-139.178.89.65:41770.service: Deactivated successfully. Jul 7 09:06:59.583435 systemd[1]: session-23.scope: Deactivated successfully. Jul 7 09:06:59.584971 systemd-logind[1560]: Session 23 logged out. Waiting for processes to exit. Jul 7 09:06:59.587376 systemd-logind[1560]: Removed session 23. Jul 7 09:07:04.734642 systemd[1]: Started sshd@21-10.230.11.74:22-139.178.89.65:52854.service - OpenSSH per-connection server daemon (139.178.89.65:52854). Jul 7 09:07:05.637827 sshd[4394]: Accepted publickey for core from 139.178.89.65 port 52854 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:07:05.640030 sshd-session[4394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:07:05.648341 systemd-logind[1560]: New session 24 of user core. Jul 7 09:07:05.660633 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 7 09:07:06.342424 sshd[4396]: Connection closed by 139.178.89.65 port 52854 Jul 7 09:07:06.343405 sshd-session[4394]: pam_unix(sshd:session): session closed for user core Jul 7 09:07:06.348881 systemd[1]: sshd@21-10.230.11.74:22-139.178.89.65:52854.service: Deactivated successfully. Jul 7 09:07:06.352133 systemd[1]: session-24.scope: Deactivated successfully. Jul 7 09:07:06.353773 systemd-logind[1560]: Session 24 logged out. Waiting for processes to exit. Jul 7 09:07:06.355986 systemd-logind[1560]: Removed session 24. Jul 7 09:07:11.501617 systemd[1]: Started sshd@22-10.230.11.74:22-139.178.89.65:47752.service - OpenSSH per-connection server daemon (139.178.89.65:47752). 
Jul 7 09:07:12.401565 sshd[4408]: Accepted publickey for core from 139.178.89.65 port 47752 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:07:12.403505 sshd-session[4408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:07:12.410344 systemd-logind[1560]: New session 25 of user core. Jul 7 09:07:12.416550 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 7 09:07:13.102343 sshd[4410]: Connection closed by 139.178.89.65 port 47752 Jul 7 09:07:13.103073 sshd-session[4408]: pam_unix(sshd:session): session closed for user core Jul 7 09:07:13.110554 systemd[1]: sshd@22-10.230.11.74:22-139.178.89.65:47752.service: Deactivated successfully. Jul 7 09:07:13.114000 systemd[1]: session-25.scope: Deactivated successfully. Jul 7 09:07:13.116080 systemd-logind[1560]: Session 25 logged out. Waiting for processes to exit. Jul 7 09:07:13.118241 systemd-logind[1560]: Removed session 25. Jul 7 09:07:18.259868 systemd[1]: Started sshd@23-10.230.11.74:22-139.178.89.65:47768.service - OpenSSH per-connection server daemon (139.178.89.65:47768). Jul 7 09:07:19.167484 sshd[4422]: Accepted publickey for core from 139.178.89.65 port 47768 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:07:19.169429 sshd-session[4422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:07:19.177898 systemd-logind[1560]: New session 26 of user core. Jul 7 09:07:19.186588 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 7 09:07:19.869798 sshd[4424]: Connection closed by 139.178.89.65 port 47768 Jul 7 09:07:19.871563 sshd-session[4422]: pam_unix(sshd:session): session closed for user core Jul 7 09:07:19.884447 systemd[1]: sshd@23-10.230.11.74:22-139.178.89.65:47768.service: Deactivated successfully. Jul 7 09:07:19.887238 systemd[1]: session-26.scope: Deactivated successfully. Jul 7 09:07:19.889076 systemd-logind[1560]: Session 26 logged out. 
Waiting for processes to exit. Jul 7 09:07:19.890949 systemd-logind[1560]: Removed session 26. Jul 7 09:07:20.027914 systemd[1]: Started sshd@24-10.230.11.74:22-139.178.89.65:51124.service - OpenSSH per-connection server daemon (139.178.89.65:51124). Jul 7 09:07:20.947253 sshd[4435]: Accepted publickey for core from 139.178.89.65 port 51124 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:07:20.949647 sshd-session[4435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:07:20.960682 systemd-logind[1560]: New session 27 of user core. Jul 7 09:07:20.967538 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 7 09:07:23.299099 containerd[1583]: time="2025-07-07T09:07:23.297956446Z" level=info msg="StopContainer for \"856c7d8fa63703686327dc6c86d2251df99e31842b4a13528a126378ebfb90e4\" with timeout 30 (s)" Jul 7 09:07:23.300583 containerd[1583]: time="2025-07-07T09:07:23.300385063Z" level=info msg="Stop container \"856c7d8fa63703686327dc6c86d2251df99e31842b4a13528a126378ebfb90e4\" with signal terminated" Jul 7 09:07:23.327614 systemd[1]: cri-containerd-856c7d8fa63703686327dc6c86d2251df99e31842b4a13528a126378ebfb90e4.scope: Deactivated successfully. 
Jul 7 09:07:23.332403 containerd[1583]: time="2025-07-07T09:07:23.331954563Z" level=info msg="received exit event container_id:\"856c7d8fa63703686327dc6c86d2251df99e31842b4a13528a126378ebfb90e4\" id:\"856c7d8fa63703686327dc6c86d2251df99e31842b4a13528a126378ebfb90e4\" pid:3489 exited_at:{seconds:1751879243 nanos:331141237}" Jul 7 09:07:23.332726 containerd[1583]: time="2025-07-07T09:07:23.332540139Z" level=info msg="TaskExit event in podsandbox handler container_id:\"856c7d8fa63703686327dc6c86d2251df99e31842b4a13528a126378ebfb90e4\" id:\"856c7d8fa63703686327dc6c86d2251df99e31842b4a13528a126378ebfb90e4\" pid:3489 exited_at:{seconds:1751879243 nanos:331141237}" Jul 7 09:07:23.353900 containerd[1583]: time="2025-07-07T09:07:23.353851149Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 09:07:23.362175 containerd[1583]: time="2025-07-07T09:07:23.362080376Z" level=info msg="TaskExit event in podsandbox handler container_id:\"014b7780cd63c04c51bb209421d2d6f27e39ac36f5a06ea727b8492e9f512819\" id:\"26b708b3f2463d37a7e6eb74cb84ac302f8c1f69e1b2f9ebd4b5e992d9d074ae\" pid:4462 exited_at:{seconds:1751879243 nanos:361216922}" Jul 7 09:07:23.366879 containerd[1583]: time="2025-07-07T09:07:23.366720127Z" level=info msg="StopContainer for \"014b7780cd63c04c51bb209421d2d6f27e39ac36f5a06ea727b8492e9f512819\" with timeout 2 (s)" Jul 7 09:07:23.369769 containerd[1583]: time="2025-07-07T09:07:23.369578956Z" level=info msg="Stop container \"014b7780cd63c04c51bb209421d2d6f27e39ac36f5a06ea727b8492e9f512819\" with signal terminated" Jul 7 09:07:23.372056 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-856c7d8fa63703686327dc6c86d2251df99e31842b4a13528a126378ebfb90e4-rootfs.mount: Deactivated successfully. 
Jul 7 09:07:23.383786 systemd-networkd[1513]: lxc_health: Link DOWN Jul 7 09:07:23.383799 systemd-networkd[1513]: lxc_health: Lost carrier Jul 7 09:07:23.403730 containerd[1583]: time="2025-07-07T09:07:23.401856580Z" level=info msg="StopContainer for \"856c7d8fa63703686327dc6c86d2251df99e31842b4a13528a126378ebfb90e4\" returns successfully" Jul 7 09:07:23.403669 systemd[1]: cri-containerd-014b7780cd63c04c51bb209421d2d6f27e39ac36f5a06ea727b8492e9f512819.scope: Deactivated successfully. Jul 7 09:07:23.404511 systemd[1]: cri-containerd-014b7780cd63c04c51bb209421d2d6f27e39ac36f5a06ea727b8492e9f512819.scope: Consumed 10.300s CPU time, 215.7M memory peak, 98.6M read from disk, 13.3M written to disk. Jul 7 09:07:23.406446 containerd[1583]: time="2025-07-07T09:07:23.405691407Z" level=info msg="StopPodSandbox for \"fd7c3722e86cbf0ab938b08f4e68cb946a50f113ff6fb823d9d5d1e88f2f7618\"" Jul 7 09:07:23.406446 containerd[1583]: time="2025-07-07T09:07:23.405758845Z" level=info msg="Container to stop \"856c7d8fa63703686327dc6c86d2251df99e31842b4a13528a126378ebfb90e4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 09:07:23.408076 containerd[1583]: time="2025-07-07T09:07:23.408044748Z" level=info msg="received exit event container_id:\"014b7780cd63c04c51bb209421d2d6f27e39ac36f5a06ea727b8492e9f512819\" id:\"014b7780cd63c04c51bb209421d2d6f27e39ac36f5a06ea727b8492e9f512819\" pid:3522 exited_at:{seconds:1751879243 nanos:407849720}" Jul 7 09:07:23.410296 containerd[1583]: time="2025-07-07T09:07:23.410009842Z" level=info msg="TaskExit event in podsandbox handler container_id:\"014b7780cd63c04c51bb209421d2d6f27e39ac36f5a06ea727b8492e9f512819\" id:\"014b7780cd63c04c51bb209421d2d6f27e39ac36f5a06ea727b8492e9f512819\" pid:3522 exited_at:{seconds:1751879243 nanos:407849720}" Jul 7 09:07:23.426757 systemd[1]: cri-containerd-fd7c3722e86cbf0ab938b08f4e68cb946a50f113ff6fb823d9d5d1e88f2f7618.scope: Deactivated successfully. 
Jul 7 09:07:23.431879 containerd[1583]: time="2025-07-07T09:07:23.431818356Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd7c3722e86cbf0ab938b08f4e68cb946a50f113ff6fb823d9d5d1e88f2f7618\" id:\"fd7c3722e86cbf0ab938b08f4e68cb946a50f113ff6fb823d9d5d1e88f2f7618\" pid:3109 exit_status:137 exited_at:{seconds:1751879243 nanos:431050369}" Jul 7 09:07:23.452789 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-014b7780cd63c04c51bb209421d2d6f27e39ac36f5a06ea727b8492e9f512819-rootfs.mount: Deactivated successfully. Jul 7 09:07:23.464797 containerd[1583]: time="2025-07-07T09:07:23.464390080Z" level=info msg="StopContainer for \"014b7780cd63c04c51bb209421d2d6f27e39ac36f5a06ea727b8492e9f512819\" returns successfully" Jul 7 09:07:23.466783 containerd[1583]: time="2025-07-07T09:07:23.466754799Z" level=info msg="StopPodSandbox for \"598d209bd09cb05b1f0c7e71f007b5a4db2f21490b373e5fdf115dcede44d744\"" Jul 7 09:07:23.467221 containerd[1583]: time="2025-07-07T09:07:23.467133658Z" level=info msg="Container to stop \"d84395d9023b404e9b90b89a7da8abd89003ea907efe1709bc9c78fa5e032aee\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 09:07:23.467415 containerd[1583]: time="2025-07-07T09:07:23.467388481Z" level=info msg="Container to stop \"014b7780cd63c04c51bb209421d2d6f27e39ac36f5a06ea727b8492e9f512819\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 09:07:23.467529 containerd[1583]: time="2025-07-07T09:07:23.467504485Z" level=info msg="Container to stop \"289104299ce3a03a585fe71909d163e4b999d1ed9290db2bab4779aa16242786\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 09:07:23.467657 containerd[1583]: time="2025-07-07T09:07:23.467632710Z" level=info msg="Container to stop \"0de7a8120064993db82005b22640d1aa612f2bf630a6709a8d1e716b1ce7295d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 09:07:23.467756 containerd[1583]: 
time="2025-07-07T09:07:23.467732963Z" level=info msg="Container to stop \"8e1c3527c7d4c449dfcca82d65f542275acbfa7d40769693ef3613db27446f71\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 09:07:23.477078 systemd[1]: cri-containerd-598d209bd09cb05b1f0c7e71f007b5a4db2f21490b373e5fdf115dcede44d744.scope: Deactivated successfully. Jul 7 09:07:23.492933 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd7c3722e86cbf0ab938b08f4e68cb946a50f113ff6fb823d9d5d1e88f2f7618-rootfs.mount: Deactivated successfully. Jul 7 09:07:23.498546 containerd[1583]: time="2025-07-07T09:07:23.498485820Z" level=info msg="shim disconnected" id=fd7c3722e86cbf0ab938b08f4e68cb946a50f113ff6fb823d9d5d1e88f2f7618 namespace=k8s.io Jul 7 09:07:23.498546 containerd[1583]: time="2025-07-07T09:07:23.498538946Z" level=warning msg="cleaning up after shim disconnected" id=fd7c3722e86cbf0ab938b08f4e68cb946a50f113ff6fb823d9d5d1e88f2f7618 namespace=k8s.io Jul 7 09:07:23.503742 containerd[1583]: time="2025-07-07T09:07:23.498575097Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 09:07:23.527002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-598d209bd09cb05b1f0c7e71f007b5a4db2f21490b373e5fdf115dcede44d744-rootfs.mount: Deactivated successfully. 
Jul 7 09:07:23.532252 containerd[1583]: time="2025-07-07T09:07:23.532210666Z" level=info msg="shim disconnected" id=598d209bd09cb05b1f0c7e71f007b5a4db2f21490b373e5fdf115dcede44d744 namespace=k8s.io Jul 7 09:07:23.532451 containerd[1583]: time="2025-07-07T09:07:23.532254020Z" level=warning msg="cleaning up after shim disconnected" id=598d209bd09cb05b1f0c7e71f007b5a4db2f21490b373e5fdf115dcede44d744 namespace=k8s.io Jul 7 09:07:23.532451 containerd[1583]: time="2025-07-07T09:07:23.532269038Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 09:07:23.535227 containerd[1583]: time="2025-07-07T09:07:23.535180425Z" level=error msg="Failed to handle event container_id:\"fd7c3722e86cbf0ab938b08f4e68cb946a50f113ff6fb823d9d5d1e88f2f7618\" id:\"fd7c3722e86cbf0ab938b08f4e68cb946a50f113ff6fb823d9d5d1e88f2f7618\" pid:3109 exit_status:137 exited_at:{seconds:1751879243 nanos:431050369} for fd7c3722e86cbf0ab938b08f4e68cb946a50f113ff6fb823d9d5d1e88f2f7618" error="failed to handle container TaskExit event: failed to stop sandbox: failed to delete task: ttrpc: closed" Jul 7 09:07:23.535535 containerd[1583]: time="2025-07-07T09:07:23.535499117Z" level=info msg="TaskExit event in podsandbox handler container_id:\"598d209bd09cb05b1f0c7e71f007b5a4db2f21490b373e5fdf115dcede44d744\" id:\"598d209bd09cb05b1f0c7e71f007b5a4db2f21490b373e5fdf115dcede44d744\" pid:3035 exit_status:137 exited_at:{seconds:1751879243 nanos:479990751}" Jul 7 09:07:23.537559 containerd[1583]: time="2025-07-07T09:07:23.537044865Z" level=info msg="received exit event sandbox_id:\"fd7c3722e86cbf0ab938b08f4e68cb946a50f113ff6fb823d9d5d1e88f2f7618\" exit_status:137 exited_at:{seconds:1751879243 nanos:431050369}" Jul 7 09:07:23.539269 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-598d209bd09cb05b1f0c7e71f007b5a4db2f21490b373e5fdf115dcede44d744-shm.mount: Deactivated successfully. 
Jul 7 09:07:23.541361 containerd[1583]: time="2025-07-07T09:07:23.541159150Z" level=info msg="received exit event sandbox_id:\"598d209bd09cb05b1f0c7e71f007b5a4db2f21490b373e5fdf115dcede44d744\" exit_status:137 exited_at:{seconds:1751879243 nanos:479990751}" Jul 7 09:07:23.550728 containerd[1583]: time="2025-07-07T09:07:23.550627537Z" level=info msg="TearDown network for sandbox \"fd7c3722e86cbf0ab938b08f4e68cb946a50f113ff6fb823d9d5d1e88f2f7618\" successfully" Jul 7 09:07:23.550864 containerd[1583]: time="2025-07-07T09:07:23.550838247Z" level=info msg="StopPodSandbox for \"fd7c3722e86cbf0ab938b08f4e68cb946a50f113ff6fb823d9d5d1e88f2f7618\" returns successfully" Jul 7 09:07:23.556074 containerd[1583]: time="2025-07-07T09:07:23.556034945Z" level=info msg="TearDown network for sandbox \"598d209bd09cb05b1f0c7e71f007b5a4db2f21490b373e5fdf115dcede44d744\" successfully" Jul 7 09:07:23.556074 containerd[1583]: time="2025-07-07T09:07:23.556065292Z" level=info msg="StopPodSandbox for \"598d209bd09cb05b1f0c7e71f007b5a4db2f21490b373e5fdf115dcede44d744\" returns successfully" Jul 7 09:07:23.684315 kubelet[2889]: I0707 09:07:23.683457 2889 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-bpf-maps\") pod \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\" (UID: \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\") " Jul 7 09:07:23.684315 kubelet[2889]: I0707 09:07:23.683527 2889 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-etc-cni-netd\") pod \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\" (UID: \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\") " Jul 7 09:07:23.684315 kubelet[2889]: I0707 09:07:23.683583 2889 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/beb9fb49-7e83-435b-9f1f-2c3683ebe059-cilium-config-path\") pod \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\" (UID: \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\") " Jul 7 09:07:23.684315 kubelet[2889]: I0707 09:07:23.683620 2889 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9h9rv\" (UniqueName: \"kubernetes.io/projected/beb9fb49-7e83-435b-9f1f-2c3683ebe059-kube-api-access-9h9rv\") pod \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\" (UID: \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\") " Jul 7 09:07:23.684315 kubelet[2889]: I0707 09:07:23.683649 2889 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/beb9fb49-7e83-435b-9f1f-2c3683ebe059-hubble-tls\") pod \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\" (UID: \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\") " Jul 7 09:07:23.684315 kubelet[2889]: I0707 09:07:23.683643 2889 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "beb9fb49-7e83-435b-9f1f-2c3683ebe059" (UID: "beb9fb49-7e83-435b-9f1f-2c3683ebe059"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 09:07:23.685167 kubelet[2889]: I0707 09:07:23.683702 2889 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "beb9fb49-7e83-435b-9f1f-2c3683ebe059" (UID: "beb9fb49-7e83-435b-9f1f-2c3683ebe059"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 09:07:23.685167 kubelet[2889]: I0707 09:07:23.683672 2889 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-xtables-lock\") pod \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\" (UID: \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\") " Jul 7 09:07:23.685167 kubelet[2889]: I0707 09:07:23.683739 2889 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "beb9fb49-7e83-435b-9f1f-2c3683ebe059" (UID: "beb9fb49-7e83-435b-9f1f-2c3683ebe059"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 09:07:23.685167 kubelet[2889]: I0707 09:07:23.683755 2889 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e88b9eb8-5e24-4b56-bc1e-840fa55f589a-cilium-config-path\") pod \"e88b9eb8-5e24-4b56-bc1e-840fa55f589a\" (UID: \"e88b9eb8-5e24-4b56-bc1e-840fa55f589a\") " Jul 7 09:07:23.685167 kubelet[2889]: I0707 09:07:23.683783 2889 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-host-proc-sys-kernel\") pod \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\" (UID: \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\") " Jul 7 09:07:23.686827 kubelet[2889]: I0707 09:07:23.683809 2889 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-cilium-cgroup\") pod \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\" (UID: \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\") " Jul 7 09:07:23.686827 kubelet[2889]: I0707 09:07:23.683834 2889 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-cilium-run\") pod \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\" (UID: \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\") " Jul 7 09:07:23.686827 kubelet[2889]: I0707 09:07:23.683861 2889 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-cni-path\") pod \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\" (UID: \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\") " Jul 7 09:07:23.686827 kubelet[2889]: I0707 09:07:23.683887 2889 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-lib-modules\") pod \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\" (UID: \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\") " Jul 7 09:07:23.686827 kubelet[2889]: I0707 09:07:23.683909 2889 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-hostproc\") pod \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\" (UID: \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\") " Jul 7 09:07:23.686827 kubelet[2889]: I0707 09:07:23.683935 2889 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp24x\" (UniqueName: \"kubernetes.io/projected/e88b9eb8-5e24-4b56-bc1e-840fa55f589a-kube-api-access-tp24x\") pod \"e88b9eb8-5e24-4b56-bc1e-840fa55f589a\" (UID: \"e88b9eb8-5e24-4b56-bc1e-840fa55f589a\") " Jul 7 09:07:23.687980 kubelet[2889]: I0707 09:07:23.683965 2889 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/beb9fb49-7e83-435b-9f1f-2c3683ebe059-clustermesh-secrets\") pod \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\" (UID: \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\") " Jul 7 
09:07:23.687980 kubelet[2889]: I0707 09:07:23.683990 2889 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-host-proc-sys-net\") pod \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\" (UID: \"beb9fb49-7e83-435b-9f1f-2c3683ebe059\") " Jul 7 09:07:23.687980 kubelet[2889]: I0707 09:07:23.684071 2889 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-xtables-lock\") on node \"srv-djpnf.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:07:23.687980 kubelet[2889]: I0707 09:07:23.684112 2889 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-bpf-maps\") on node \"srv-djpnf.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:07:23.687980 kubelet[2889]: I0707 09:07:23.684124 2889 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-etc-cni-netd\") on node \"srv-djpnf.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:07:23.687980 kubelet[2889]: I0707 09:07:23.684157 2889 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "beb9fb49-7e83-435b-9f1f-2c3683ebe059" (UID: "beb9fb49-7e83-435b-9f1f-2c3683ebe059"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 09:07:23.688245 kubelet[2889]: I0707 09:07:23.686538 2889 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "beb9fb49-7e83-435b-9f1f-2c3683ebe059" (UID: "beb9fb49-7e83-435b-9f1f-2c3683ebe059"). 
InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 09:07:23.688245 kubelet[2889]: I0707 09:07:23.686594 2889 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "beb9fb49-7e83-435b-9f1f-2c3683ebe059" (UID: "beb9fb49-7e83-435b-9f1f-2c3683ebe059"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 09:07:23.688245 kubelet[2889]: I0707 09:07:23.686624 2889 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "beb9fb49-7e83-435b-9f1f-2c3683ebe059" (UID: "beb9fb49-7e83-435b-9f1f-2c3683ebe059"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 09:07:23.690745 kubelet[2889]: I0707 09:07:23.688490 2889 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-cni-path" (OuterVolumeSpecName: "cni-path") pod "beb9fb49-7e83-435b-9f1f-2c3683ebe059" (UID: "beb9fb49-7e83-435b-9f1f-2c3683ebe059"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 09:07:23.690745 kubelet[2889]: I0707 09:07:23.688542 2889 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "beb9fb49-7e83-435b-9f1f-2c3683ebe059" (UID: "beb9fb49-7e83-435b-9f1f-2c3683ebe059"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 09:07:23.690745 kubelet[2889]: I0707 09:07:23.688601 2889 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-hostproc" (OuterVolumeSpecName: "hostproc") pod "beb9fb49-7e83-435b-9f1f-2c3683ebe059" (UID: "beb9fb49-7e83-435b-9f1f-2c3683ebe059"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 7 09:07:23.701581 kubelet[2889]: I0707 09:07:23.701484 2889 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e88b9eb8-5e24-4b56-bc1e-840fa55f589a-kube-api-access-tp24x" (OuterVolumeSpecName: "kube-api-access-tp24x") pod "e88b9eb8-5e24-4b56-bc1e-840fa55f589a" (UID: "e88b9eb8-5e24-4b56-bc1e-840fa55f589a"). InnerVolumeSpecName "kube-api-access-tp24x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 09:07:23.702311 kubelet[2889]: I0707 09:07:23.702264 2889 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beb9fb49-7e83-435b-9f1f-2c3683ebe059-kube-api-access-9h9rv" (OuterVolumeSpecName: "kube-api-access-9h9rv") pod "beb9fb49-7e83-435b-9f1f-2c3683ebe059" (UID: "beb9fb49-7e83-435b-9f1f-2c3683ebe059"). InnerVolumeSpecName "kube-api-access-9h9rv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 09:07:23.702574 kubelet[2889]: I0707 09:07:23.702523 2889 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beb9fb49-7e83-435b-9f1f-2c3683ebe059-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "beb9fb49-7e83-435b-9f1f-2c3683ebe059" (UID: "beb9fb49-7e83-435b-9f1f-2c3683ebe059"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 09:07:23.702917 kubelet[2889]: I0707 09:07:23.702790 2889 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/beb9fb49-7e83-435b-9f1f-2c3683ebe059-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "beb9fb49-7e83-435b-9f1f-2c3683ebe059" (UID: "beb9fb49-7e83-435b-9f1f-2c3683ebe059"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 09:07:23.703164 kubelet[2889]: I0707 09:07:23.703128 2889 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/beb9fb49-7e83-435b-9f1f-2c3683ebe059-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "beb9fb49-7e83-435b-9f1f-2c3683ebe059" (UID: "beb9fb49-7e83-435b-9f1f-2c3683ebe059"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 09:07:23.706649 kubelet[2889]: I0707 09:07:23.701213 2889 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e88b9eb8-5e24-4b56-bc1e-840fa55f589a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e88b9eb8-5e24-4b56-bc1e-840fa55f589a" (UID: "e88b9eb8-5e24-4b56-bc1e-840fa55f589a"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 09:07:23.785036 kubelet[2889]: I0707 09:07:23.784736 2889 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/beb9fb49-7e83-435b-9f1f-2c3683ebe059-cilium-config-path\") on node \"srv-djpnf.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:07:23.785036 kubelet[2889]: I0707 09:07:23.784781 2889 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9h9rv\" (UniqueName: \"kubernetes.io/projected/beb9fb49-7e83-435b-9f1f-2c3683ebe059-kube-api-access-9h9rv\") on node \"srv-djpnf.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:07:23.785036 kubelet[2889]: I0707 09:07:23.784802 2889 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/beb9fb49-7e83-435b-9f1f-2c3683ebe059-hubble-tls\") on node \"srv-djpnf.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:07:23.785036 kubelet[2889]: I0707 09:07:23.784820 2889 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e88b9eb8-5e24-4b56-bc1e-840fa55f589a-cilium-config-path\") on node \"srv-djpnf.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:07:23.785036 kubelet[2889]: I0707 09:07:23.784843 2889 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-host-proc-sys-kernel\") on node \"srv-djpnf.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:07:23.785036 kubelet[2889]: I0707 09:07:23.784868 2889 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-cilium-cgroup\") on node \"srv-djpnf.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:07:23.785036 kubelet[2889]: I0707 09:07:23.784883 2889 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-cilium-run\") on node \"srv-djpnf.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:07:23.785036 kubelet[2889]: I0707 09:07:23.784909 2889 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-cni-path\") on node \"srv-djpnf.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:07:23.785661 kubelet[2889]: I0707 09:07:23.784924 2889 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-lib-modules\") on node \"srv-djpnf.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:07:23.785661 kubelet[2889]: I0707 09:07:23.784938 2889 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-hostproc\") on node \"srv-djpnf.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:07:23.785661 kubelet[2889]: I0707 09:07:23.784951 2889 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/beb9fb49-7e83-435b-9f1f-2c3683ebe059-host-proc-sys-net\") on node \"srv-djpnf.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:07:23.785661 kubelet[2889]: I0707 09:07:23.784977 2889 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tp24x\" (UniqueName: \"kubernetes.io/projected/e88b9eb8-5e24-4b56-bc1e-840fa55f589a-kube-api-access-tp24x\") on node \"srv-djpnf.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:07:23.785661 kubelet[2889]: I0707 09:07:23.784990 2889 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/beb9fb49-7e83-435b-9f1f-2c3683ebe059-clustermesh-secrets\") on node \"srv-djpnf.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:07:23.875870 kubelet[2889]: I0707 09:07:23.875174 2889 scope.go:117] "RemoveContainer" 
containerID="856c7d8fa63703686327dc6c86d2251df99e31842b4a13528a126378ebfb90e4" Jul 7 09:07:23.882924 systemd[1]: Removed slice kubepods-besteffort-pode88b9eb8_5e24_4b56_bc1e_840fa55f589a.slice - libcontainer container kubepods-besteffort-pode88b9eb8_5e24_4b56_bc1e_840fa55f589a.slice. Jul 7 09:07:23.886151 containerd[1583]: time="2025-07-07T09:07:23.886074387Z" level=info msg="RemoveContainer for \"856c7d8fa63703686327dc6c86d2251df99e31842b4a13528a126378ebfb90e4\"" Jul 7 09:07:23.898646 containerd[1583]: time="2025-07-07T09:07:23.897282929Z" level=info msg="RemoveContainer for \"856c7d8fa63703686327dc6c86d2251df99e31842b4a13528a126378ebfb90e4\" returns successfully" Jul 7 09:07:23.904135 systemd[1]: Removed slice kubepods-burstable-podbeb9fb49_7e83_435b_9f1f_2c3683ebe059.slice - libcontainer container kubepods-burstable-podbeb9fb49_7e83_435b_9f1f_2c3683ebe059.slice. Jul 7 09:07:23.904313 systemd[1]: kubepods-burstable-podbeb9fb49_7e83_435b_9f1f_2c3683ebe059.slice: Consumed 10.439s CPU time, 216.1M memory peak, 98.6M read from disk, 13.3M written to disk. 
Jul 7 09:07:23.907652 kubelet[2889]: I0707 09:07:23.907256 2889 scope.go:117] "RemoveContainer" containerID="856c7d8fa63703686327dc6c86d2251df99e31842b4a13528a126378ebfb90e4" Jul 7 09:07:23.913060 containerd[1583]: time="2025-07-07T09:07:23.908005609Z" level=error msg="ContainerStatus for \"856c7d8fa63703686327dc6c86d2251df99e31842b4a13528a126378ebfb90e4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"856c7d8fa63703686327dc6c86d2251df99e31842b4a13528a126378ebfb90e4\": not found" Jul 7 09:07:23.913603 kubelet[2889]: E0707 09:07:23.913368 2889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"856c7d8fa63703686327dc6c86d2251df99e31842b4a13528a126378ebfb90e4\": not found" containerID="856c7d8fa63703686327dc6c86d2251df99e31842b4a13528a126378ebfb90e4" Jul 7 09:07:23.913869 kubelet[2889]: I0707 09:07:23.913538 2889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"856c7d8fa63703686327dc6c86d2251df99e31842b4a13528a126378ebfb90e4"} err="failed to get container status \"856c7d8fa63703686327dc6c86d2251df99e31842b4a13528a126378ebfb90e4\": rpc error: code = NotFound desc = an error occurred when try to find container \"856c7d8fa63703686327dc6c86d2251df99e31842b4a13528a126378ebfb90e4\": not found" Jul 7 09:07:23.914066 kubelet[2889]: I0707 09:07:23.913980 2889 scope.go:117] "RemoveContainer" containerID="014b7780cd63c04c51bb209421d2d6f27e39ac36f5a06ea727b8492e9f512819" Jul 7 09:07:23.918156 containerd[1583]: time="2025-07-07T09:07:23.918068498Z" level=info msg="RemoveContainer for \"014b7780cd63c04c51bb209421d2d6f27e39ac36f5a06ea727b8492e9f512819\"" Jul 7 09:07:23.928384 containerd[1583]: time="2025-07-07T09:07:23.928266501Z" level=info msg="RemoveContainer for \"014b7780cd63c04c51bb209421d2d6f27e39ac36f5a06ea727b8492e9f512819\" returns successfully" Jul 7 09:07:23.930612 kubelet[2889]: 
I0707 09:07:23.930581 2889 scope.go:117] "RemoveContainer" containerID="d84395d9023b404e9b90b89a7da8abd89003ea907efe1709bc9c78fa5e032aee" Jul 7 09:07:23.939026 containerd[1583]: time="2025-07-07T09:07:23.938392078Z" level=info msg="RemoveContainer for \"d84395d9023b404e9b90b89a7da8abd89003ea907efe1709bc9c78fa5e032aee\"" Jul 7 09:07:23.943531 containerd[1583]: time="2025-07-07T09:07:23.943501098Z" level=info msg="RemoveContainer for \"d84395d9023b404e9b90b89a7da8abd89003ea907efe1709bc9c78fa5e032aee\" returns successfully" Jul 7 09:07:23.943926 kubelet[2889]: I0707 09:07:23.943898 2889 scope.go:117] "RemoveContainer" containerID="8e1c3527c7d4c449dfcca82d65f542275acbfa7d40769693ef3613db27446f71" Jul 7 09:07:23.946987 containerd[1583]: time="2025-07-07T09:07:23.946954736Z" level=info msg="RemoveContainer for \"8e1c3527c7d4c449dfcca82d65f542275acbfa7d40769693ef3613db27446f71\"" Jul 7 09:07:23.951234 containerd[1583]: time="2025-07-07T09:07:23.951175830Z" level=info msg="RemoveContainer for \"8e1c3527c7d4c449dfcca82d65f542275acbfa7d40769693ef3613db27446f71\" returns successfully" Jul 7 09:07:23.951534 kubelet[2889]: I0707 09:07:23.951498 2889 scope.go:117] "RemoveContainer" containerID="0de7a8120064993db82005b22640d1aa612f2bf630a6709a8d1e716b1ce7295d" Jul 7 09:07:23.953728 containerd[1583]: time="2025-07-07T09:07:23.953683034Z" level=info msg="RemoveContainer for \"0de7a8120064993db82005b22640d1aa612f2bf630a6709a8d1e716b1ce7295d\"" Jul 7 09:07:23.957040 containerd[1583]: time="2025-07-07T09:07:23.956953261Z" level=info msg="RemoveContainer for \"0de7a8120064993db82005b22640d1aa612f2bf630a6709a8d1e716b1ce7295d\" returns successfully" Jul 7 09:07:23.957199 kubelet[2889]: I0707 09:07:23.957132 2889 scope.go:117] "RemoveContainer" containerID="289104299ce3a03a585fe71909d163e4b999d1ed9290db2bab4779aa16242786" Jul 7 09:07:23.959146 containerd[1583]: time="2025-07-07T09:07:23.959114294Z" level=info msg="RemoveContainer for 
\"289104299ce3a03a585fe71909d163e4b999d1ed9290db2bab4779aa16242786\""
Jul 7 09:07:23.962126 containerd[1583]: time="2025-07-07T09:07:23.962085687Z" level=info msg="RemoveContainer for \"289104299ce3a03a585fe71909d163e4b999d1ed9290db2bab4779aa16242786\" returns successfully"
Jul 7 09:07:23.962403 kubelet[2889]: I0707 09:07:23.962365 2889 scope.go:117] "RemoveContainer" containerID="014b7780cd63c04c51bb209421d2d6f27e39ac36f5a06ea727b8492e9f512819"
Jul 7 09:07:23.962649 containerd[1583]: time="2025-07-07T09:07:23.962592270Z" level=error msg="ContainerStatus for \"014b7780cd63c04c51bb209421d2d6f27e39ac36f5a06ea727b8492e9f512819\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"014b7780cd63c04c51bb209421d2d6f27e39ac36f5a06ea727b8492e9f512819\": not found"
Jul 7 09:07:23.962960 kubelet[2889]: E0707 09:07:23.962752 2889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"014b7780cd63c04c51bb209421d2d6f27e39ac36f5a06ea727b8492e9f512819\": not found" containerID="014b7780cd63c04c51bb209421d2d6f27e39ac36f5a06ea727b8492e9f512819"
Jul 7 09:07:23.962960 kubelet[2889]: I0707 09:07:23.962789 2889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"014b7780cd63c04c51bb209421d2d6f27e39ac36f5a06ea727b8492e9f512819"} err="failed to get container status \"014b7780cd63c04c51bb209421d2d6f27e39ac36f5a06ea727b8492e9f512819\": rpc error: code = NotFound desc = an error occurred when try to find container \"014b7780cd63c04c51bb209421d2d6f27e39ac36f5a06ea727b8492e9f512819\": not found"
Jul 7 09:07:23.962960 kubelet[2889]: I0707 09:07:23.962820 2889 scope.go:117] "RemoveContainer" containerID="d84395d9023b404e9b90b89a7da8abd89003ea907efe1709bc9c78fa5e032aee"
Jul 7 09:07:23.963115 containerd[1583]: time="2025-07-07T09:07:23.963081031Z" level=error msg="ContainerStatus for \"d84395d9023b404e9b90b89a7da8abd89003ea907efe1709bc9c78fa5e032aee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d84395d9023b404e9b90b89a7da8abd89003ea907efe1709bc9c78fa5e032aee\": not found"
Jul 7 09:07:23.963494 kubelet[2889]: E0707 09:07:23.963355 2889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d84395d9023b404e9b90b89a7da8abd89003ea907efe1709bc9c78fa5e032aee\": not found" containerID="d84395d9023b404e9b90b89a7da8abd89003ea907efe1709bc9c78fa5e032aee"
Jul 7 09:07:23.963494 kubelet[2889]: I0707 09:07:23.963389 2889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d84395d9023b404e9b90b89a7da8abd89003ea907efe1709bc9c78fa5e032aee"} err="failed to get container status \"d84395d9023b404e9b90b89a7da8abd89003ea907efe1709bc9c78fa5e032aee\": rpc error: code = NotFound desc = an error occurred when try to find container \"d84395d9023b404e9b90b89a7da8abd89003ea907efe1709bc9c78fa5e032aee\": not found"
Jul 7 09:07:23.963494 kubelet[2889]: I0707 09:07:23.963411 2889 scope.go:117] "RemoveContainer" containerID="8e1c3527c7d4c449dfcca82d65f542275acbfa7d40769693ef3613db27446f71"
Jul 7 09:07:23.963850 containerd[1583]: time="2025-07-07T09:07:23.963681795Z" level=error msg="ContainerStatus for \"8e1c3527c7d4c449dfcca82d65f542275acbfa7d40769693ef3613db27446f71\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8e1c3527c7d4c449dfcca82d65f542275acbfa7d40769693ef3613db27446f71\": not found"
Jul 7 09:07:23.964063 kubelet[2889]: E0707 09:07:23.964033 2889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8e1c3527c7d4c449dfcca82d65f542275acbfa7d40769693ef3613db27446f71\": not found" containerID="8e1c3527c7d4c449dfcca82d65f542275acbfa7d40769693ef3613db27446f71"
Jul 7 09:07:23.964201 kubelet[2889]: I0707 09:07:23.964171 2889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8e1c3527c7d4c449dfcca82d65f542275acbfa7d40769693ef3613db27446f71"} err="failed to get container status \"8e1c3527c7d4c449dfcca82d65f542275acbfa7d40769693ef3613db27446f71\": rpc error: code = NotFound desc = an error occurred when try to find container \"8e1c3527c7d4c449dfcca82d65f542275acbfa7d40769693ef3613db27446f71\": not found"
Jul 7 09:07:23.964356 kubelet[2889]: I0707 09:07:23.964331 2889 scope.go:117] "RemoveContainer" containerID="0de7a8120064993db82005b22640d1aa612f2bf630a6709a8d1e716b1ce7295d"
Jul 7 09:07:23.964791 containerd[1583]: time="2025-07-07T09:07:23.964755635Z" level=error msg="ContainerStatus for \"0de7a8120064993db82005b22640d1aa612f2bf630a6709a8d1e716b1ce7295d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0de7a8120064993db82005b22640d1aa612f2bf630a6709a8d1e716b1ce7295d\": not found"
Jul 7 09:07:23.965109 kubelet[2889]: E0707 09:07:23.964956 2889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0de7a8120064993db82005b22640d1aa612f2bf630a6709a8d1e716b1ce7295d\": not found" containerID="0de7a8120064993db82005b22640d1aa612f2bf630a6709a8d1e716b1ce7295d"
Jul 7 09:07:23.965109 kubelet[2889]: I0707 09:07:23.964987 2889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0de7a8120064993db82005b22640d1aa612f2bf630a6709a8d1e716b1ce7295d"} err="failed to get container status \"0de7a8120064993db82005b22640d1aa612f2bf630a6709a8d1e716b1ce7295d\": rpc error: code = NotFound desc = an error occurred when try to find container \"0de7a8120064993db82005b22640d1aa612f2bf630a6709a8d1e716b1ce7295d\": not found"
Jul 7 09:07:23.965109 kubelet[2889]: I0707 09:07:23.965007 2889 scope.go:117] "RemoveContainer" containerID="289104299ce3a03a585fe71909d163e4b999d1ed9290db2bab4779aa16242786"
Jul 7 09:07:23.965284 containerd[1583]: time="2025-07-07T09:07:23.965250987Z" level=error msg="ContainerStatus for \"289104299ce3a03a585fe71909d163e4b999d1ed9290db2bab4779aa16242786\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"289104299ce3a03a585fe71909d163e4b999d1ed9290db2bab4779aa16242786\": not found"
Jul 7 09:07:23.965610 kubelet[2889]: E0707 09:07:23.965468 2889 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"289104299ce3a03a585fe71909d163e4b999d1ed9290db2bab4779aa16242786\": not found" containerID="289104299ce3a03a585fe71909d163e4b999d1ed9290db2bab4779aa16242786"
Jul 7 09:07:23.965610 kubelet[2889]: I0707 09:07:23.965553 2889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"289104299ce3a03a585fe71909d163e4b999d1ed9290db2bab4779aa16242786"} err="failed to get container status \"289104299ce3a03a585fe71909d163e4b999d1ed9290db2bab4779aa16242786\": rpc error: code = NotFound desc = an error occurred when try to find container \"289104299ce3a03a585fe71909d163e4b999d1ed9290db2bab4779aa16242786\": not found"
Jul 7 09:07:24.370705 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fd7c3722e86cbf0ab938b08f4e68cb946a50f113ff6fb823d9d5d1e88f2f7618-shm.mount: Deactivated successfully.
Jul 7 09:07:24.370847 systemd[1]: var-lib-kubelet-pods-e88b9eb8\x2d5e24\x2d4b56\x2dbc1e\x2d840fa55f589a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtp24x.mount: Deactivated successfully.
Jul 7 09:07:24.370989 systemd[1]: var-lib-kubelet-pods-beb9fb49\x2d7e83\x2d435b\x2d9f1f\x2d2c3683ebe059-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9h9rv.mount: Deactivated successfully.
Jul 7 09:07:24.371141 systemd[1]: var-lib-kubelet-pods-beb9fb49\x2d7e83\x2d435b\x2d9f1f\x2d2c3683ebe059-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 7 09:07:24.371253 systemd[1]: var-lib-kubelet-pods-beb9fb49\x2d7e83\x2d435b\x2d9f1f\x2d2c3683ebe059-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 7 09:07:25.127520 kubelet[2889]: I0707 09:07:25.127464 2889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beb9fb49-7e83-435b-9f1f-2c3683ebe059" path="/var/lib/kubelet/pods/beb9fb49-7e83-435b-9f1f-2c3683ebe059/volumes"
Jul 7 09:07:25.129137 kubelet[2889]: I0707 09:07:25.129089 2889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e88b9eb8-5e24-4b56-bc1e-840fa55f589a" path="/var/lib/kubelet/pods/e88b9eb8-5e24-4b56-bc1e-840fa55f589a/volumes"
Jul 7 09:07:25.363114 sshd[4437]: Connection closed by 139.178.89.65 port 51124
Jul 7 09:07:25.361923 sshd-session[4435]: pam_unix(sshd:session): session closed for user core
Jul 7 09:07:25.367283 systemd[1]: sshd@24-10.230.11.74:22-139.178.89.65:51124.service: Deactivated successfully.
Jul 7 09:07:25.370740 systemd[1]: session-27.scope: Deactivated successfully.
Jul 7 09:07:25.371576 systemd[1]: session-27.scope: Consumed 1.203s CPU time, 26.2M memory peak.
Jul 7 09:07:25.373386 systemd-logind[1560]: Session 27 logged out. Waiting for processes to exit.
Jul 7 09:07:25.375080 systemd-logind[1560]: Removed session 27.
Jul 7 09:07:25.436372 containerd[1583]: time="2025-07-07T09:07:25.436159157Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fd7c3722e86cbf0ab938b08f4e68cb946a50f113ff6fb823d9d5d1e88f2f7618\" id:\"fd7c3722e86cbf0ab938b08f4e68cb946a50f113ff6fb823d9d5d1e88f2f7618\" pid:3109 exit_status:137 exited_at:{seconds:1751879243 nanos:431050369}"
Jul 7 09:07:25.517066 systemd[1]: Started sshd@25-10.230.11.74:22-139.178.89.65:51132.service - OpenSSH per-connection server daemon (139.178.89.65:51132).
Jul 7 09:07:26.381582 kubelet[2889]: E0707 09:07:26.381509 2889 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 7 09:07:26.421323 sshd[4591]: Accepted publickey for core from 139.178.89.65 port 51132 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68
Jul 7 09:07:26.423135 sshd-session[4591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 09:07:26.431302 systemd-logind[1560]: New session 28 of user core.
Jul 7 09:07:26.436487 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 7 09:07:28.116568 kubelet[2889]: I0707 09:07:28.116514 2889 memory_manager.go:355] "RemoveStaleState removing state" podUID="e88b9eb8-5e24-4b56-bc1e-840fa55f589a" containerName="cilium-operator"
Jul 7 09:07:28.116568 kubelet[2889]: I0707 09:07:28.116557 2889 memory_manager.go:355] "RemoveStaleState removing state" podUID="beb9fb49-7e83-435b-9f1f-2c3683ebe059" containerName="cilium-agent"
Jul 7 09:07:28.130920 systemd[1]: Created slice kubepods-burstable-pod4ab9ca5a_931e_4195_b1d1_eaaac3f15c70.slice - libcontainer container kubepods-burstable-pod4ab9ca5a_931e_4195_b1d1_eaaac3f15c70.slice.
Jul 7 09:07:28.211445 kubelet[2889]: I0707 09:07:28.211361 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4ab9ca5a-931e-4195-b1d1-eaaac3f15c70-host-proc-sys-net\") pod \"cilium-lxbrh\" (UID: \"4ab9ca5a-931e-4195-b1d1-eaaac3f15c70\") " pod="kube-system/cilium-lxbrh"
Jul 7 09:07:28.211662 kubelet[2889]: I0707 09:07:28.211466 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4ab9ca5a-931e-4195-b1d1-eaaac3f15c70-cilium-config-path\") pod \"cilium-lxbrh\" (UID: \"4ab9ca5a-931e-4195-b1d1-eaaac3f15c70\") " pod="kube-system/cilium-lxbrh"
Jul 7 09:07:28.211662 kubelet[2889]: I0707 09:07:28.211574 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4ab9ca5a-931e-4195-b1d1-eaaac3f15c70-bpf-maps\") pod \"cilium-lxbrh\" (UID: \"4ab9ca5a-931e-4195-b1d1-eaaac3f15c70\") " pod="kube-system/cilium-lxbrh"
Jul 7 09:07:28.211775 kubelet[2889]: I0707 09:07:28.211717 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ab9ca5a-931e-4195-b1d1-eaaac3f15c70-lib-modules\") pod \"cilium-lxbrh\" (UID: \"4ab9ca5a-931e-4195-b1d1-eaaac3f15c70\") " pod="kube-system/cilium-lxbrh"
Jul 7 09:07:28.211775 kubelet[2889]: I0707 09:07:28.211765 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4ab9ca5a-931e-4195-b1d1-eaaac3f15c70-cilium-cgroup\") pod \"cilium-lxbrh\" (UID: \"4ab9ca5a-931e-4195-b1d1-eaaac3f15c70\") " pod="kube-system/cilium-lxbrh"
Jul 7 09:07:28.211953 kubelet[2889]: I0707 09:07:28.211840 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4ab9ca5a-931e-4195-b1d1-eaaac3f15c70-clustermesh-secrets\") pod \"cilium-lxbrh\" (UID: \"4ab9ca5a-931e-4195-b1d1-eaaac3f15c70\") " pod="kube-system/cilium-lxbrh"
Jul 7 09:07:28.212026 kubelet[2889]: I0707 09:07:28.212002 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxgzb\" (UniqueName: \"kubernetes.io/projected/4ab9ca5a-931e-4195-b1d1-eaaac3f15c70-kube-api-access-xxgzb\") pod \"cilium-lxbrh\" (UID: \"4ab9ca5a-931e-4195-b1d1-eaaac3f15c70\") " pod="kube-system/cilium-lxbrh"
Jul 7 09:07:28.212105 kubelet[2889]: I0707 09:07:28.212077 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ab9ca5a-931e-4195-b1d1-eaaac3f15c70-xtables-lock\") pod \"cilium-lxbrh\" (UID: \"4ab9ca5a-931e-4195-b1d1-eaaac3f15c70\") " pod="kube-system/cilium-lxbrh"
Jul 7 09:07:28.212211 kubelet[2889]: I0707 09:07:28.212155 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4ab9ca5a-931e-4195-b1d1-eaaac3f15c70-hubble-tls\") pod \"cilium-lxbrh\" (UID: \"4ab9ca5a-931e-4195-b1d1-eaaac3f15c70\") " pod="kube-system/cilium-lxbrh"
Jul 7 09:07:28.212273 kubelet[2889]: I0707 09:07:28.212248 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4ab9ca5a-931e-4195-b1d1-eaaac3f15c70-cilium-run\") pod \"cilium-lxbrh\" (UID: \"4ab9ca5a-931e-4195-b1d1-eaaac3f15c70\") " pod="kube-system/cilium-lxbrh"
Jul 7 09:07:28.212350 kubelet[2889]: I0707 09:07:28.212318 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4ab9ca5a-931e-4195-b1d1-eaaac3f15c70-hostproc\") pod \"cilium-lxbrh\" (UID: \"4ab9ca5a-931e-4195-b1d1-eaaac3f15c70\") " pod="kube-system/cilium-lxbrh"
Jul 7 09:07:28.212401 kubelet[2889]: I0707 09:07:28.212352 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4ab9ca5a-931e-4195-b1d1-eaaac3f15c70-cni-path\") pod \"cilium-lxbrh\" (UID: \"4ab9ca5a-931e-4195-b1d1-eaaac3f15c70\") " pod="kube-system/cilium-lxbrh"
Jul 7 09:07:28.212448 kubelet[2889]: I0707 09:07:28.212433 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4ab9ca5a-931e-4195-b1d1-eaaac3f15c70-etc-cni-netd\") pod \"cilium-lxbrh\" (UID: \"4ab9ca5a-931e-4195-b1d1-eaaac3f15c70\") " pod="kube-system/cilium-lxbrh"
Jul 7 09:07:28.212597 kubelet[2889]: I0707 09:07:28.212524 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4ab9ca5a-931e-4195-b1d1-eaaac3f15c70-cilium-ipsec-secrets\") pod \"cilium-lxbrh\" (UID: \"4ab9ca5a-931e-4195-b1d1-eaaac3f15c70\") " pod="kube-system/cilium-lxbrh"
Jul 7 09:07:28.212676 kubelet[2889]: I0707 09:07:28.212644 2889 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4ab9ca5a-931e-4195-b1d1-eaaac3f15c70-host-proc-sys-kernel\") pod \"cilium-lxbrh\" (UID: \"4ab9ca5a-931e-4195-b1d1-eaaac3f15c70\") " pod="kube-system/cilium-lxbrh"
Jul 7 09:07:28.306386 sshd[4593]: Connection closed by 139.178.89.65 port 51132
Jul 7 09:07:28.307199 sshd-session[4591]: pam_unix(sshd:session): session closed for user core
Jul 7 09:07:28.315432 systemd[1]: sshd@25-10.230.11.74:22-139.178.89.65:51132.service: Deactivated successfully.
Jul 7 09:07:28.321933 systemd[1]: session-28.scope: Deactivated successfully.
Jul 7 09:07:28.322225 systemd[1]: session-28.scope: Consumed 1.131s CPU time, 25.7M memory peak.
Jul 7 09:07:28.325611 systemd-logind[1560]: Session 28 logged out. Waiting for processes to exit.
Jul 7 09:07:28.352641 systemd-logind[1560]: Removed session 28.
Jul 7 09:07:28.442386 containerd[1583]: time="2025-07-07T09:07:28.442147424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lxbrh,Uid:4ab9ca5a-931e-4195-b1d1-eaaac3f15c70,Namespace:kube-system,Attempt:0,}"
Jul 7 09:07:28.474999 systemd[1]: Started sshd@26-10.230.11.74:22-139.178.89.65:51142.service - OpenSSH per-connection server daemon (139.178.89.65:51142).
Jul 7 09:07:28.477451 containerd[1583]: time="2025-07-07T09:07:28.477226907Z" level=info msg="connecting to shim 117eef6282792b97a3f830253a41ab55019c70311227fae6783652fe48626175" address="unix:///run/containerd/s/d805f51097be4b99f9157f79d7460ee7948ef2fa3900899d41fa57d3f8008bd3" namespace=k8s.io protocol=ttrpc version=3
Jul 7 09:07:28.523548 systemd[1]: Started cri-containerd-117eef6282792b97a3f830253a41ab55019c70311227fae6783652fe48626175.scope - libcontainer container 117eef6282792b97a3f830253a41ab55019c70311227fae6783652fe48626175.
Jul 7 09:07:28.558945 containerd[1583]: time="2025-07-07T09:07:28.558890836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lxbrh,Uid:4ab9ca5a-931e-4195-b1d1-eaaac3f15c70,Namespace:kube-system,Attempt:0,} returns sandbox id \"117eef6282792b97a3f830253a41ab55019c70311227fae6783652fe48626175\""
Jul 7 09:07:28.564262 containerd[1583]: time="2025-07-07T09:07:28.564194285Z" level=info msg="CreateContainer within sandbox \"117eef6282792b97a3f830253a41ab55019c70311227fae6783652fe48626175\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 7 09:07:28.572930 containerd[1583]: time="2025-07-07T09:07:28.572900451Z" level=info msg="Container 2b35022c1791e9d792f10a6675e94c594bcb965207adeae3c4da3af9e0653ac2: CDI devices from CRI Config.CDIDevices: []"
Jul 7 09:07:28.581674 containerd[1583]: time="2025-07-07T09:07:28.581575240Z" level=info msg="CreateContainer within sandbox \"117eef6282792b97a3f830253a41ab55019c70311227fae6783652fe48626175\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2b35022c1791e9d792f10a6675e94c594bcb965207adeae3c4da3af9e0653ac2\""
Jul 7 09:07:28.582341 containerd[1583]: time="2025-07-07T09:07:28.582201744Z" level=info msg="StartContainer for \"2b35022c1791e9d792f10a6675e94c594bcb965207adeae3c4da3af9e0653ac2\""
Jul 7 09:07:28.583622 containerd[1583]: time="2025-07-07T09:07:28.583587930Z" level=info msg="connecting to shim 2b35022c1791e9d792f10a6675e94c594bcb965207adeae3c4da3af9e0653ac2" address="unix:///run/containerd/s/d805f51097be4b99f9157f79d7460ee7948ef2fa3900899d41fa57d3f8008bd3" protocol=ttrpc version=3
Jul 7 09:07:28.611579 systemd[1]: Started cri-containerd-2b35022c1791e9d792f10a6675e94c594bcb965207adeae3c4da3af9e0653ac2.scope - libcontainer container 2b35022c1791e9d792f10a6675e94c594bcb965207adeae3c4da3af9e0653ac2.
Jul 7 09:07:28.652701 containerd[1583]: time="2025-07-07T09:07:28.652608135Z" level=info msg="StartContainer for \"2b35022c1791e9d792f10a6675e94c594bcb965207adeae3c4da3af9e0653ac2\" returns successfully"
Jul 7 09:07:28.669791 systemd[1]: cri-containerd-2b35022c1791e9d792f10a6675e94c594bcb965207adeae3c4da3af9e0653ac2.scope: Deactivated successfully.
Jul 7 09:07:28.670743 systemd[1]: cri-containerd-2b35022c1791e9d792f10a6675e94c594bcb965207adeae3c4da3af9e0653ac2.scope: Consumed 29ms CPU time, 9.5M memory peak, 3.1M read from disk.
Jul 7 09:07:28.674415 containerd[1583]: time="2025-07-07T09:07:28.674156093Z" level=info msg="received exit event container_id:\"2b35022c1791e9d792f10a6675e94c594bcb965207adeae3c4da3af9e0653ac2\" id:\"2b35022c1791e9d792f10a6675e94c594bcb965207adeae3c4da3af9e0653ac2\" pid:4672 exited_at:{seconds:1751879248 nanos:673666330}"
Jul 7 09:07:28.674685 containerd[1583]: time="2025-07-07T09:07:28.674652411Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b35022c1791e9d792f10a6675e94c594bcb965207adeae3c4da3af9e0653ac2\" id:\"2b35022c1791e9d792f10a6675e94c594bcb965207adeae3c4da3af9e0653ac2\" pid:4672 exited_at:{seconds:1751879248 nanos:673666330}"
Jul 7 09:07:28.918589 containerd[1583]: time="2025-07-07T09:07:28.918339265Z" level=info msg="CreateContainer within sandbox \"117eef6282792b97a3f830253a41ab55019c70311227fae6783652fe48626175\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 7 09:07:28.925606 containerd[1583]: time="2025-07-07T09:07:28.925573854Z" level=info msg="Container 5dfb719c58acd6ba789c30b823853aa661a273987006c34e69c3e0bfca7af247: CDI devices from CRI Config.CDIDevices: []"
Jul 7 09:07:28.931733 containerd[1583]: time="2025-07-07T09:07:28.931694143Z" level=info msg="CreateContainer within sandbox \"117eef6282792b97a3f830253a41ab55019c70311227fae6783652fe48626175\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5dfb719c58acd6ba789c30b823853aa661a273987006c34e69c3e0bfca7af247\""
Jul 7 09:07:28.933000 containerd[1583]: time="2025-07-07T09:07:28.932871291Z" level=info msg="StartContainer for \"5dfb719c58acd6ba789c30b823853aa661a273987006c34e69c3e0bfca7af247\""
Jul 7 09:07:28.934697 containerd[1583]: time="2025-07-07T09:07:28.934631383Z" level=info msg="connecting to shim 5dfb719c58acd6ba789c30b823853aa661a273987006c34e69c3e0bfca7af247" address="unix:///run/containerd/s/d805f51097be4b99f9157f79d7460ee7948ef2fa3900899d41fa57d3f8008bd3" protocol=ttrpc version=3
Jul 7 09:07:28.963537 systemd[1]: Started cri-containerd-5dfb719c58acd6ba789c30b823853aa661a273987006c34e69c3e0bfca7af247.scope - libcontainer container 5dfb719c58acd6ba789c30b823853aa661a273987006c34e69c3e0bfca7af247.
Jul 7 09:07:29.005590 containerd[1583]: time="2025-07-07T09:07:29.005539054Z" level=info msg="StartContainer for \"5dfb719c58acd6ba789c30b823853aa661a273987006c34e69c3e0bfca7af247\" returns successfully"
Jul 7 09:07:29.016805 systemd[1]: cri-containerd-5dfb719c58acd6ba789c30b823853aa661a273987006c34e69c3e0bfca7af247.scope: Deactivated successfully.
Jul 7 09:07:29.017926 systemd[1]: cri-containerd-5dfb719c58acd6ba789c30b823853aa661a273987006c34e69c3e0bfca7af247.scope: Consumed 26ms CPU time, 7.6M memory peak, 2.2M read from disk.
Jul 7 09:07:29.018795 containerd[1583]: time="2025-07-07T09:07:29.017873405Z" level=info msg="received exit event container_id:\"5dfb719c58acd6ba789c30b823853aa661a273987006c34e69c3e0bfca7af247\" id:\"5dfb719c58acd6ba789c30b823853aa661a273987006c34e69c3e0bfca7af247\" pid:4718 exited_at:{seconds:1751879249 nanos:17260336}"
Jul 7 09:07:29.018795 containerd[1583]: time="2025-07-07T09:07:29.017922236Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5dfb719c58acd6ba789c30b823853aa661a273987006c34e69c3e0bfca7af247\" id:\"5dfb719c58acd6ba789c30b823853aa661a273987006c34e69c3e0bfca7af247\" pid:4718 exited_at:{seconds:1751879249 nanos:17260336}"
Jul 7 09:07:29.393634 sshd[4620]: Accepted publickey for core from 139.178.89.65 port 51142 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68
Jul 7 09:07:29.395547 sshd-session[4620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 09:07:29.402365 systemd-logind[1560]: New session 29 of user core.
Jul 7 09:07:29.409563 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul 7 09:07:29.921570 containerd[1583]: time="2025-07-07T09:07:29.921021920Z" level=info msg="CreateContainer within sandbox \"117eef6282792b97a3f830253a41ab55019c70311227fae6783652fe48626175\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 7 09:07:29.936716 containerd[1583]: time="2025-07-07T09:07:29.935486301Z" level=info msg="Container 088ed02d4f6448bd85962b8023b4f86d128fb0c4f037bab90259f97ca6a48006: CDI devices from CRI Config.CDIDevices: []"
Jul 7 09:07:29.947974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3337168243.mount: Deactivated successfully.
Jul 7 09:07:29.951082 containerd[1583]: time="2025-07-07T09:07:29.951022205Z" level=info msg="CreateContainer within sandbox \"117eef6282792b97a3f830253a41ab55019c70311227fae6783652fe48626175\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"088ed02d4f6448bd85962b8023b4f86d128fb0c4f037bab90259f97ca6a48006\""
Jul 7 09:07:29.952749 containerd[1583]: time="2025-07-07T09:07:29.952701331Z" level=info msg="StartContainer for \"088ed02d4f6448bd85962b8023b4f86d128fb0c4f037bab90259f97ca6a48006\""
Jul 7 09:07:29.954540 containerd[1583]: time="2025-07-07T09:07:29.954505337Z" level=info msg="connecting to shim 088ed02d4f6448bd85962b8023b4f86d128fb0c4f037bab90259f97ca6a48006" address="unix:///run/containerd/s/d805f51097be4b99f9157f79d7460ee7948ef2fa3900899d41fa57d3f8008bd3" protocol=ttrpc version=3
Jul 7 09:07:29.989552 systemd[1]: Started cri-containerd-088ed02d4f6448bd85962b8023b4f86d128fb0c4f037bab90259f97ca6a48006.scope - libcontainer container 088ed02d4f6448bd85962b8023b4f86d128fb0c4f037bab90259f97ca6a48006.
Jul 7 09:07:30.007321 sshd[4750]: Connection closed by 139.178.89.65 port 51142
Jul 7 09:07:30.008467 sshd-session[4620]: pam_unix(sshd:session): session closed for user core
Jul 7 09:07:30.016623 systemd[1]: sshd@26-10.230.11.74:22-139.178.89.65:51142.service: Deactivated successfully.
Jul 7 09:07:30.022349 systemd[1]: session-29.scope: Deactivated successfully.
Jul 7 09:07:30.026833 systemd-logind[1560]: Session 29 logged out. Waiting for processes to exit.
Jul 7 09:07:30.030105 systemd-logind[1560]: Removed session 29.
Jul 7 09:07:30.059472 containerd[1583]: time="2025-07-07T09:07:30.059412169Z" level=info msg="StartContainer for \"088ed02d4f6448bd85962b8023b4f86d128fb0c4f037bab90259f97ca6a48006\" returns successfully"
Jul 7 09:07:30.066915 systemd[1]: cri-containerd-088ed02d4f6448bd85962b8023b4f86d128fb0c4f037bab90259f97ca6a48006.scope: Deactivated successfully.
Jul 7 09:07:30.067840 systemd[1]: cri-containerd-088ed02d4f6448bd85962b8023b4f86d128fb0c4f037bab90259f97ca6a48006.scope: Consumed 40ms CPU time, 5.8M memory peak, 1M read from disk.
Jul 7 09:07:30.071772 containerd[1583]: time="2025-07-07T09:07:30.071722078Z" level=info msg="received exit event container_id:\"088ed02d4f6448bd85962b8023b4f86d128fb0c4f037bab90259f97ca6a48006\" id:\"088ed02d4f6448bd85962b8023b4f86d128fb0c4f037bab90259f97ca6a48006\" pid:4767 exited_at:{seconds:1751879250 nanos:71097583}"
Jul 7 09:07:30.072009 containerd[1583]: time="2025-07-07T09:07:30.071884741Z" level=info msg="TaskExit event in podsandbox handler container_id:\"088ed02d4f6448bd85962b8023b4f86d128fb0c4f037bab90259f97ca6a48006\" id:\"088ed02d4f6448bd85962b8023b4f86d128fb0c4f037bab90259f97ca6a48006\" pid:4767 exited_at:{seconds:1751879250 nanos:71097583}"
Jul 7 09:07:30.100243 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-088ed02d4f6448bd85962b8023b4f86d128fb0c4f037bab90259f97ca6a48006-rootfs.mount: Deactivated successfully.
Jul 7 09:07:30.166948 systemd[1]: Started sshd@27-10.230.11.74:22-139.178.89.65:56210.service - OpenSSH per-connection server daemon (139.178.89.65:56210).
Jul 7 09:07:30.932034 containerd[1583]: time="2025-07-07T09:07:30.931931488Z" level=info msg="CreateContainer within sandbox \"117eef6282792b97a3f830253a41ab55019c70311227fae6783652fe48626175\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 7 09:07:30.948356 containerd[1583]: time="2025-07-07T09:07:30.944644502Z" level=info msg="Container 0efea44d3dea4848d346b8a08491ac36afe0c53b7ab61dcae90db8bf1b02543f: CDI devices from CRI Config.CDIDevices: []"
Jul 7 09:07:30.955977 containerd[1583]: time="2025-07-07T09:07:30.955934956Z" level=info msg="CreateContainer within sandbox \"117eef6282792b97a3f830253a41ab55019c70311227fae6783652fe48626175\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0efea44d3dea4848d346b8a08491ac36afe0c53b7ab61dcae90db8bf1b02543f\""
Jul 7 09:07:30.958064 containerd[1583]: time="2025-07-07T09:07:30.958031527Z" level=info msg="StartContainer for \"0efea44d3dea4848d346b8a08491ac36afe0c53b7ab61dcae90db8bf1b02543f\""
Jul 7 09:07:30.963867 containerd[1583]: time="2025-07-07T09:07:30.962503964Z" level=info msg="connecting to shim 0efea44d3dea4848d346b8a08491ac36afe0c53b7ab61dcae90db8bf1b02543f" address="unix:///run/containerd/s/d805f51097be4b99f9157f79d7460ee7948ef2fa3900899d41fa57d3f8008bd3" protocol=ttrpc version=3
Jul 7 09:07:30.991544 systemd[1]: Started cri-containerd-0efea44d3dea4848d346b8a08491ac36afe0c53b7ab61dcae90db8bf1b02543f.scope - libcontainer container 0efea44d3dea4848d346b8a08491ac36afe0c53b7ab61dcae90db8bf1b02543f.
Jul 7 09:07:31.035906 systemd[1]: cri-containerd-0efea44d3dea4848d346b8a08491ac36afe0c53b7ab61dcae90db8bf1b02543f.scope: Deactivated successfully.
Jul 7 09:07:31.040186 containerd[1583]: time="2025-07-07T09:07:31.039823669Z" level=info msg="received exit event container_id:\"0efea44d3dea4848d346b8a08491ac36afe0c53b7ab61dcae90db8bf1b02543f\" id:\"0efea44d3dea4848d346b8a08491ac36afe0c53b7ab61dcae90db8bf1b02543f\" pid:4813 exited_at:{seconds:1751879251 nanos:39500809}"
Jul 7 09:07:31.040691 containerd[1583]: time="2025-07-07T09:07:31.040655712Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0efea44d3dea4848d346b8a08491ac36afe0c53b7ab61dcae90db8bf1b02543f\" id:\"0efea44d3dea4848d346b8a08491ac36afe0c53b7ab61dcae90db8bf1b02543f\" pid:4813 exited_at:{seconds:1751879251 nanos:39500809}"
Jul 7 09:07:31.053408 containerd[1583]: time="2025-07-07T09:07:31.053369220Z" level=info msg="StartContainer for \"0efea44d3dea4848d346b8a08491ac36afe0c53b7ab61dcae90db8bf1b02543f\" returns successfully"
Jul 7 09:07:31.077805 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0efea44d3dea4848d346b8a08491ac36afe0c53b7ab61dcae90db8bf1b02543f-rootfs.mount: Deactivated successfully.
Jul 7 09:07:31.085360 sshd[4800]: Accepted publickey for core from 139.178.89.65 port 56210 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68
Jul 7 09:07:31.088704 sshd-session[4800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 09:07:31.096682 systemd-logind[1560]: New session 30 of user core.
Jul 7 09:07:31.107560 systemd[1]: Started session-30.scope - Session 30 of User core.
Jul 7 09:07:31.383702 kubelet[2889]: E0707 09:07:31.383550 2889 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 7 09:07:31.936412 containerd[1583]: time="2025-07-07T09:07:31.936035750Z" level=info msg="CreateContainer within sandbox \"117eef6282792b97a3f830253a41ab55019c70311227fae6783652fe48626175\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 7 09:07:31.953050 containerd[1583]: time="2025-07-07T09:07:31.952193431Z" level=info msg="Container 470662bd21f01761950b60137351f53338aad1b4fbcbc9e672bfae616adf47d9: CDI devices from CRI Config.CDIDevices: []"
Jul 7 09:07:31.959454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1349898355.mount: Deactivated successfully.
Jul 7 09:07:31.967114 containerd[1583]: time="2025-07-07T09:07:31.967060033Z" level=info msg="CreateContainer within sandbox \"117eef6282792b97a3f830253a41ab55019c70311227fae6783652fe48626175\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"470662bd21f01761950b60137351f53338aad1b4fbcbc9e672bfae616adf47d9\""
Jul 7 09:07:31.971195 containerd[1583]: time="2025-07-07T09:07:31.971162876Z" level=info msg="StartContainer for \"470662bd21f01761950b60137351f53338aad1b4fbcbc9e672bfae616adf47d9\""
Jul 7 09:07:31.973081 containerd[1583]: time="2025-07-07T09:07:31.973041789Z" level=info msg="connecting to shim 470662bd21f01761950b60137351f53338aad1b4fbcbc9e672bfae616adf47d9" address="unix:///run/containerd/s/d805f51097be4b99f9157f79d7460ee7948ef2fa3900899d41fa57d3f8008bd3" protocol=ttrpc version=3
Jul 7 09:07:32.012522 systemd[1]: Started cri-containerd-470662bd21f01761950b60137351f53338aad1b4fbcbc9e672bfae616adf47d9.scope - libcontainer container 470662bd21f01761950b60137351f53338aad1b4fbcbc9e672bfae616adf47d9.
Jul 7 09:07:32.069958 containerd[1583]: time="2025-07-07T09:07:32.069912742Z" level=info msg="StartContainer for \"470662bd21f01761950b60137351f53338aad1b4fbcbc9e672bfae616adf47d9\" returns successfully"
Jul 7 09:07:32.181809 containerd[1583]: time="2025-07-07T09:07:32.181731033Z" level=info msg="TaskExit event in podsandbox handler container_id:\"470662bd21f01761950b60137351f53338aad1b4fbcbc9e672bfae616adf47d9\" id:\"d46c9cc7884d6ca2ee8229415e16bf9709a6fac451ac20a3713c01c032541e1c\" pid:4890 exited_at:{seconds:1751879252 nanos:180936239}"
Jul 7 09:07:32.835483 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Jul 7 09:07:33.946838 containerd[1583]: time="2025-07-07T09:07:33.946746424Z" level=info msg="TaskExit event in podsandbox handler container_id:\"470662bd21f01761950b60137351f53338aad1b4fbcbc9e672bfae616adf47d9\" id:\"1d53aba25f7e90b65891f6fe3b08c565e0813efee69d1aa70a38dc3d6b0c602e\" pid:4966 exit_status:1 exited_at:{seconds:1751879253 nanos:939125769}"
Jul 7 09:07:34.871086 kubelet[2889]: I0707 09:07:34.871014 2889 setters.go:602] "Node became not ready" node="srv-djpnf.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-07T09:07:34Z","lastTransitionTime":"2025-07-07T09:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 7 09:07:36.194031 containerd[1583]: time="2025-07-07T09:07:36.193945366Z" level=info msg="TaskExit event in podsandbox handler container_id:\"470662bd21f01761950b60137351f53338aad1b4fbcbc9e672bfae616adf47d9\" id:\"62b8151b49ff446905b9f97cec0e4646d7f6a5a630bd1c9b1bf23c58e8893723\" pid:5324 exit_status:1 exited_at:{seconds:1751879256 nanos:193543020}"
Jul 7 09:07:36.449077 systemd-networkd[1513]: lxc_health: Link UP
Jul 7 09:07:36.457014 systemd-networkd[1513]: lxc_health: Gained carrier
Jul 7 09:07:36.494122 kubelet[2889]: I0707 09:07:36.493878 2889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lxbrh" podStartSLOduration=8.493845464 podStartE2EDuration="8.493845464s" podCreationTimestamp="2025-07-07 09:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 09:07:32.976629961 +0000 UTC m=+162.020520991" watchObservedRunningTime="2025-07-07 09:07:36.493845464 +0000 UTC m=+165.537736496"
Jul 7 09:07:37.949554 systemd-networkd[1513]: lxc_health: Gained IPv6LL
Jul 7 09:07:38.379765 containerd[1583]: time="2025-07-07T09:07:38.379699614Z" level=info msg="TaskExit event in podsandbox handler container_id:\"470662bd21f01761950b60137351f53338aad1b4fbcbc9e672bfae616adf47d9\" id:\"455dbd2e45cc1d331d14781c3783e15b528406f231f14f7cf48f256bba3cf5a1\" pid:5433 exited_at:{seconds:1751879258 nanos:378874088}"
Jul 7 09:07:40.610343 containerd[1583]: time="2025-07-07T09:07:40.610169921Z" level=info msg="TaskExit event in podsandbox handler container_id:\"470662bd21f01761950b60137351f53338aad1b4fbcbc9e672bfae616adf47d9\" id:\"ec4b69273a25bce78b658e46e6944f70451f516da844158f4582886f22396cf1\" pid:5459 exited_at:{seconds:1751879260 nanos:608718324}"
Jul 7 09:07:40.617514 kubelet[2889]: E0707 09:07:40.617464 2889 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:39730->127.0.0.1:42865: write tcp 127.0.0.1:39730->127.0.0.1:42865: write: broken pipe
Jul 7 09:07:42.835407 containerd[1583]: time="2025-07-07T09:07:42.835270246Z" level=info msg="TaskExit event in podsandbox handler container_id:\"470662bd21f01761950b60137351f53338aad1b4fbcbc9e672bfae616adf47d9\" id:\"26c9940bd67e55eeadc44c8e4dc178467ea69c43db7276dd174898effe3a653e\" pid:5493 exited_at:{seconds:1751879262 nanos:833790561}"
Jul 7 09:07:42.985782 sshd[4839]: Connection closed by 139.178.89.65 port 56210
Jul 7 09:07:42.988530 sshd-session[4800]: pam_unix(sshd:session): session closed for user core
Jul 7 09:07:42.996824 systemd[1]: sshd@27-10.230.11.74:22-139.178.89.65:56210.service: Deactivated successfully.
Jul 7 09:07:43.003368 systemd[1]: session-30.scope: Deactivated successfully.
Jul 7 09:07:43.005359 systemd-logind[1560]: Session 30 logged out. Waiting for processes to exit.
Jul 7 09:07:43.008881 systemd-logind[1560]: Removed session 30.