Jul 7 09:22:38.951626 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:56:00 -00 2025 Jul 7 09:22:38.951660 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50 Jul 7 09:22:38.951679 kernel: BIOS-provided physical RAM map: Jul 7 09:22:38.951689 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jul 7 09:22:38.951698 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jul 7 09:22:38.951708 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jul 7 09:22:38.951722 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Jul 7 09:22:38.951738 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Jul 7 09:22:38.951749 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jul 7 09:22:38.951759 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jul 7 09:22:38.951774 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 7 09:22:38.951784 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jul 7 09:22:38.951794 kernel: NX (Execute Disable) protection: active Jul 7 09:22:38.951804 kernel: APIC: Static calls initialized Jul 7 09:22:38.951816 kernel: SMBIOS 2.8 present. Jul 7 09:22:38.951827 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.16.0-3.module_el8.7.0+3346+68867adb 04/01/2014 Jul 7 09:22:38.951848 kernel: DMI: Memory slots populated: 1/1 Jul 7 09:22:38.951859 kernel: Hypervisor detected: KVM Jul 7 09:22:38.951870 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 7 09:22:38.951880 kernel: kvm-clock: using sched offset of 6405298007 cycles Jul 7 09:22:38.951892 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 7 09:22:38.951903 kernel: tsc: Detected 2799.998 MHz processor Jul 7 09:22:38.951914 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 7 09:22:38.951925 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 7 09:22:38.951936 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Jul 7 09:22:38.951952 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jul 7 09:22:38.951962 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 7 09:22:38.951973 kernel: Using GB pages for direct mapping Jul 7 09:22:38.951984 kernel: ACPI: Early table checksum verification disabled Jul 7 09:22:38.951995 kernel: ACPI: RSDP 0x00000000000F59E0 000014 (v00 BOCHS ) Jul 7 09:22:38.952006 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 09:22:38.952017 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 09:22:38.952027 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 09:22:38.952038 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Jul 7 09:22:38.952053 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 09:22:38.952064 kernel: ACPI: SRAT 
0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 09:22:38.952075 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 09:22:38.952086 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 09:22:38.952097 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Jul 7 09:22:38.952108 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Jul 7 09:22:38.952124 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Jul 7 09:22:38.952140 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Jul 7 09:22:38.952151 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Jul 7 09:22:38.952162 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Jul 7 09:22:38.952173 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Jul 7 09:22:38.952210 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jul 7 09:22:38.952224 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jul 7 09:22:38.952235 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Jul 7 09:22:38.952253 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff] Jul 7 09:22:38.952264 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff] Jul 7 09:22:38.952287 kernel: Zone ranges: Jul 7 09:22:38.952300 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 7 09:22:38.952311 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Jul 7 09:22:38.952322 kernel: Normal empty Jul 7 09:22:38.952333 kernel: Device empty Jul 7 09:22:38.952345 kernel: Movable zone start for each node Jul 7 09:22:38.952356 kernel: Early memory node ranges Jul 7 09:22:38.952373 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jul 7 09:22:38.952384 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Jul 7 09:22:38.952396 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Jul 7 09:22:38.952407 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 7 09:22:38.952418 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 7 09:22:38.952429 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Jul 7 09:22:38.952446 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 7 09:22:38.952458 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 7 09:22:38.952474 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 7 09:22:38.952485 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 7 09:22:38.952502 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 7 09:22:38.952513 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 7 09:22:38.952525 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 7 09:22:38.952536 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 7 09:22:38.952547 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 7 09:22:38.952558 kernel: TSC deadline timer available Jul 7 09:22:38.952569 kernel: CPU topo: Max. logical packages: 16 Jul 7 09:22:38.952581 kernel: CPU topo: Max. logical dies: 16 Jul 7 09:22:38.952592 kernel: CPU topo: Max. dies per package: 1 Jul 7 09:22:38.952607 kernel: CPU topo: Max. threads per core: 1 Jul 7 09:22:38.952619 kernel: CPU topo: Num. cores per package: 1 Jul 7 09:22:38.952630 kernel: CPU topo: Num. 
threads per package: 1 Jul 7 09:22:38.952641 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs Jul 7 09:22:38.952652 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 7 09:22:38.952663 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jul 7 09:22:38.952674 kernel: Booting paravirtualized kernel on KVM Jul 7 09:22:38.952686 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 7 09:22:38.952697 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jul 7 09:22:38.952713 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Jul 7 09:22:38.952724 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Jul 7 09:22:38.952735 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jul 7 09:22:38.952746 kernel: kvm-guest: PV spinlocks enabled Jul 7 09:22:38.952758 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 7 09:22:38.952770 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50 Jul 7 09:22:38.952782 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 7 09:22:38.952793 kernel: random: crng init done Jul 7 09:22:38.952809 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 09:22:38.952820 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 7 09:22:38.952832 kernel: Fallback order for Node 0: 0 Jul 7 09:22:38.952843 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154 Jul 7 09:22:38.952854 kernel: Policy zone: DMA32 Jul 7 09:22:38.952865 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 7 09:22:38.952877 kernel: software IO TLB: area num 16. Jul 7 09:22:38.952888 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jul 7 09:22:38.952899 kernel: Kernel/User page tables isolation: enabled Jul 7 09:22:38.952915 kernel: ftrace: allocating 40095 entries in 157 pages Jul 7 09:22:38.952926 kernel: ftrace: allocated 157 pages with 5 groups Jul 7 09:22:38.952938 kernel: Dynamic Preempt: voluntary Jul 7 09:22:38.952949 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 7 09:22:38.952961 kernel: rcu: RCU event tracing is enabled. Jul 7 09:22:38.952973 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jul 7 09:22:38.952984 kernel: Trampoline variant of Tasks RCU enabled. Jul 7 09:22:38.953001 kernel: Rude variant of Tasks RCU enabled. Jul 7 09:22:38.953013 kernel: Tracing variant of Tasks RCU enabled. Jul 7 09:22:38.953030 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 7 09:22:38.953041 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jul 7 09:22:38.953053 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jul 7 09:22:38.953064 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. 
Jul 7 09:22:38.953076 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jul 7 09:22:38.953087 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Jul 7 09:22:38.953099 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 7 09:22:38.953123 kernel: Console: colour VGA+ 80x25 Jul 7 09:22:38.953135 kernel: printk: legacy console [tty0] enabled Jul 7 09:22:38.953152 kernel: printk: legacy console [ttyS0] enabled Jul 7 09:22:38.953164 kernel: ACPI: Core revision 20240827 Jul 7 09:22:38.953223 kernel: APIC: Switch to symmetric I/O mode setup Jul 7 09:22:38.953245 kernel: x2apic enabled Jul 7 09:22:38.953258 kernel: APIC: Switched APIC routing to: physical x2apic Jul 7 09:22:38.953270 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Jul 7 09:22:38.953294 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998) Jul 7 09:22:38.953318 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 7 09:22:38.953332 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jul 7 09:22:38.953344 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jul 7 09:22:38.953356 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 7 09:22:38.953367 kernel: Spectre V2 : Mitigation: Retpolines Jul 7 09:22:38.953379 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 7 09:22:38.953391 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jul 7 09:22:38.953403 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 7 09:22:38.953414 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 7 09:22:38.953426 kernel: MDS: Mitigation: Clear CPU buffers Jul 7 09:22:38.953437 kernel: MMIO Stale Data: Unknown: No mitigations Jul 7 09:22:38.953454 kernel: SRBDS: Unknown: Dependent on hypervisor status Jul 7 09:22:38.953466 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 7 09:22:38.953478 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 7 09:22:38.953489 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 7 09:22:38.953501 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 7 09:22:38.953513 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 7 09:22:38.953525 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jul 7 09:22:38.953537 kernel: Freeing SMP alternatives memory: 32K Jul 7 09:22:38.953548 kernel: pid_max: default: 32768 minimum: 301 Jul 7 09:22:38.953560 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 7 09:22:38.953571 kernel: landlock: Up and running. Jul 7 09:22:38.953587 kernel: SELinux: Initializing. Jul 7 09:22:38.953599 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 7 09:22:38.953611 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 7 09:22:38.953623 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Jul 7 09:22:38.953635 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Jul 7 09:22:38.953647 kernel: signal: max sigframe size: 1776 Jul 7 09:22:38.953659 kernel: rcu: Hierarchical SRCU implementation. 
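The BIOS-e820 map near the top of this log advertises two usable RAM ranges; summing them explains the roughly 2 GiB total the kernel later reports on its "Memory:" line. A minimal Python sketch of that arithmetic, using the two ranges copied from the log (the kernel's own accounting reserves a few extra pages, so its figure is close to, not exactly equal to, this sum):

```python
# Sum the two "usable" BIOS-e820 ranges quoted near the top of this log.
# The kernel reserves the first page and some bookkeeping areas, so the
# "Memory: .../2096616K" figure it prints later is close but not identical.
usable = [
    (0x0000000000000000, 0x000000000009fbff),
    (0x0000000000100000, 0x000000007ffdbfff),
]

total_bytes = sum(end - start + 1 for start, end in usable)
print(f"usable RAM: {total_bytes} bytes "
      f"~ {total_bytes / 1024:.0f} KiB "
      f"~ {total_bytes / 2**30:.2f} GiB")
```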
Jul 7 09:22:38.953677 kernel: rcu: Max phase no-delay instances is 400. Jul 7 09:22:38.953690 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level Jul 7 09:22:38.953702 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 7 09:22:38.953720 kernel: smp: Bringing up secondary CPUs ... Jul 7 09:22:38.953732 kernel: smpboot: x86: Booting SMP configuration: Jul 7 09:22:38.953744 kernel: .... node #0, CPUs: #1 Jul 7 09:22:38.953755 kernel: smp: Brought up 1 node, 2 CPUs Jul 7 09:22:38.953767 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS) Jul 7 09:22:38.953780 kernel: Memory: 1895668K/2096616K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 194936K reserved, 0K cma-reserved) Jul 7 09:22:38.953792 kernel: devtmpfs: initialized Jul 7 09:22:38.953804 kernel: x86/mm: Memory block size: 128MB Jul 7 09:22:38.953816 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 7 09:22:38.953833 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jul 7 09:22:38.953845 kernel: pinctrl core: initialized pinctrl subsystem Jul 7 09:22:38.953856 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 7 09:22:38.953868 kernel: audit: initializing netlink subsys (disabled) Jul 7 09:22:38.953880 kernel: audit: type=2000 audit(1751880155.384:1): state=initialized audit_enabled=0 res=1 Jul 7 09:22:38.953892 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 7 09:22:38.953904 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 7 09:22:38.953915 kernel: cpuidle: using governor menu Jul 7 09:22:38.953927 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 7 09:22:38.953944 kernel: dca service started, version 1.12.1 Jul 7 09:22:38.953956 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Jul 7 09:22:38.953968 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jul 7 09:22:38.953980 kernel: PCI: Using configuration type 1 for base access Jul 7 09:22:38.953992 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
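The kernel command line echoed above is a flat list of bare flags and key=value options (BOOT_IMAGE, root=LABEL=ROOT, verity.usrhash=..., the flatcar.* switches). A rough illustration of that token structure, with the string copied verbatim from the log; this is only a sketch of the format, not how the kernel, dracut, or Ignition actually consume it:

```python
# Split the kernel command line shown earlier in this log into bare flags and
# key=value options. Illustrative only: note that this naive dict keeps just
# the last of the two console= entries, whereas the kernel honours both.
cmdline = (
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
    "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw "
    "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 "
    "console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack "
    "flatcar.autologin "
    "verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50"
)

flags, options = [], {}
for token in cmdline.split():
    key, sep, value = token.partition("=")
    if sep:
        options[key] = value      # e.g. verity.usrhash -> 2e0b2c30...
    else:
        flags.append(key)         # e.g. flatcar.autologin

print(flags)
print(options["flatcar.oem.id"], options["verity.usrhash"])
```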
Jul 7 09:22:38.954004 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 7 09:22:38.954028 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 7 09:22:38.954039 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 7 09:22:38.954050 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 7 09:22:38.954066 kernel: ACPI: Added _OSI(Module Device) Jul 7 09:22:38.954078 kernel: ACPI: Added _OSI(Processor Device) Jul 7 09:22:38.954089 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 7 09:22:38.954104 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 7 09:22:38.954128 kernel: ACPI: Interpreter enabled Jul 7 09:22:38.954140 kernel: ACPI: PM: (supports S0 S5) Jul 7 09:22:38.954151 kernel: ACPI: Using IOAPIC for interrupt routing Jul 7 09:22:38.954163 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 7 09:22:38.954175 kernel: PCI: Using E820 reservations for host bridge windows Jul 7 09:22:38.954191 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jul 7 09:22:38.954203 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 7 09:22:38.954485 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 09:22:38.954655 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 7 09:22:38.954815 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 7 09:22:38.954833 kernel: PCI host bridge to bus 0000:00 Jul 7 09:22:38.955022 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 7 09:22:38.955183 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 7 09:22:38.955696 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 7 09:22:38.955845 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Jul 7 09:22:38.955989 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 7 09:22:38.956165 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Jul 7 09:22:38.956340 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 7 09:22:38.956536 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jul 7 09:22:38.956754 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint Jul 7 09:22:38.956939 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref] Jul 7 09:22:38.957146 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff] Jul 7 09:22:38.957365 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref] Jul 7 09:22:38.957527 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 7 09:22:38.957730 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jul 7 09:22:38.957899 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff] Jul 7 09:22:38.958074 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jul 7 09:22:38.958260 kernel: pci 0000:00:02.0: bridge window [io 0xc000-0xcfff] Jul 7 09:22:38.958451 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jul 7 09:22:38.958612 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jul 7 09:22:38.958808 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jul 7 09:22:38.958971 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff] Jul 7 09:22:38.959171 kernel: pci 
0000:00:02.1: PCI bridge to [bus 03] Jul 7 09:22:38.959775 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jul 7 09:22:38.959942 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jul 7 09:22:38.960112 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jul 7 09:22:38.960350 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff] Jul 7 09:22:38.960515 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jul 7 09:22:38.960675 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jul 7 09:22:38.960843 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jul 7 09:22:38.961031 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jul 7 09:22:38.961251 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff] Jul 7 09:22:38.961467 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jul 7 09:22:38.962354 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jul 7 09:22:38.962522 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jul 7 09:22:38.962712 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jul 7 09:22:38.962884 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff] Jul 7 09:22:38.963046 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jul 7 09:22:38.964248 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jul 7 09:22:38.964435 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jul 7 09:22:38.964644 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jul 7 09:22:38.964824 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff] Jul 7 09:22:38.964985 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jul 7 09:22:38.965154 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jul 7 09:22:38.965398 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jul 7 09:22:38.965586 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jul 7 09:22:38.965748 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff] Jul 7 09:22:38.965908 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jul 7 09:22:38.966066 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jul 7 09:22:38.967736 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jul 7 09:22:38.967921 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jul 7 09:22:38.968100 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff] Jul 7 09:22:38.968310 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jul 7 09:22:38.968475 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jul 7 09:22:38.968645 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jul 7 09:22:38.968845 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jul 7 09:22:38.969016 kernel: pci 0000:00:03.0: BAR 0 [io 0xd0c0-0xd0df] Jul 7 09:22:38.969176 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff] Jul 7 09:22:38.969383 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref] Jul 7 09:22:38.969557 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref] Jul 7 09:22:38.969766 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jul 7 09:22:38.969928 kernel: pci 0000:00:04.0: BAR 0 [io 0xd000-0xd07f] Jul 7 09:22:38.970087 kernel: pci 0000:00:04.0: 
BAR 1 [mem 0xfea5a000-0xfea5afff] Jul 7 09:22:38.971360 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref] Jul 7 09:22:38.971539 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jul 7 09:22:38.971702 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jul 7 09:22:38.971889 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jul 7 09:22:38.972070 kernel: pci 0000:00:1f.2: BAR 4 [io 0xd0e0-0xd0ff] Jul 7 09:22:38.972251 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff] Jul 7 09:22:38.972473 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jul 7 09:22:38.972653 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Jul 7 09:22:38.972827 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Jul 7 09:22:38.973016 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit] Jul 7 09:22:38.973179 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jul 7 09:22:38.975421 kernel: pci 0000:01:00.0: bridge window [io 0xc000-0xcfff] Jul 7 09:22:38.975615 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jul 7 09:22:38.975797 kernel: pci 0000:01:00.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jul 7 09:22:38.975971 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jul 7 09:22:38.976173 kernel: pci_bus 0000:02: extended config space not accessible Jul 7 09:22:38.976400 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint Jul 7 09:22:38.976585 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f] Jul 7 09:22:38.976773 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jul 7 09:22:38.976985 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Jul 7 09:22:38.977166 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit] Jul 7 09:22:38.980376 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jul 7 09:22:38.980578 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Jul 7 09:22:38.980755 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref] Jul 7 09:22:38.980934 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jul 7 09:22:38.981113 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jul 7 09:22:38.981311 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jul 7 09:22:38.981487 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jul 7 09:22:38.981653 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jul 7 09:22:38.981827 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jul 7 09:22:38.981846 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 7 09:22:38.981859 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 7 09:22:38.981871 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 7 09:22:38.981883 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 7 09:22:38.981908 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jul 7 09:22:38.981926 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jul 7 09:22:38.981938 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jul 7 09:22:38.981950 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jul 7 09:22:38.981963 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jul 7 09:22:38.981974 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jul 7 09:22:38.981987 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jul 7 09:22:38.981999 
kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jul 7 09:22:38.982011 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jul 7 09:22:38.982023 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jul 7 09:22:38.982039 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jul 7 09:22:38.982051 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jul 7 09:22:38.982064 kernel: iommu: Default domain type: Translated Jul 7 09:22:38.982076 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 7 09:22:38.982088 kernel: PCI: Using ACPI for IRQ routing Jul 7 09:22:38.982101 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 7 09:22:38.982113 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jul 7 09:22:38.982125 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Jul 7 09:22:38.985360 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jul 7 09:22:38.985545 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jul 7 09:22:38.985733 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 7 09:22:38.985752 kernel: vgaarb: loaded Jul 7 09:22:38.985765 kernel: clocksource: Switched to clocksource kvm-clock Jul 7 09:22:38.985778 kernel: VFS: Disk quotas dquot_6.6.0 Jul 7 09:22:38.985790 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 7 09:22:38.985802 kernel: pnp: PnP ACPI init Jul 7 09:22:38.986030 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jul 7 09:22:38.986059 kernel: pnp: PnP ACPI: found 5 devices Jul 7 09:22:38.986071 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 7 09:22:38.986084 kernel: NET: Registered PF_INET protocol family Jul 7 09:22:38.986096 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 7 09:22:38.986108 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 7 09:22:38.986121 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 7 09:22:38.986133 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 7 09:22:38.986145 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 7 09:22:38.986162 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 7 09:22:38.986174 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 7 09:22:38.986186 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 7 09:22:38.986219 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 7 09:22:38.986245 kernel: NET: Registered PF_XDP protocol family Jul 7 09:22:38.986433 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 7 09:22:38.986599 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 7 09:22:38.986772 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jul 7 09:22:38.986955 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jul 7 09:22:38.987133 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jul 7 09:22:38.989312 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jul 7 09:22:38.989490 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jul 7 09:22:38.989667 kernel: pci 0000:00:02.1: bridge window [io 
0x1000-0x1fff]: assigned Jul 7 09:22:38.989919 kernel: pci 0000:00:02.2: bridge window [io 0x2000-0x2fff]: assigned Jul 7 09:22:38.990168 kernel: pci 0000:00:02.3: bridge window [io 0x3000-0x3fff]: assigned Jul 7 09:22:38.990367 kernel: pci 0000:00:02.4: bridge window [io 0x4000-0x4fff]: assigned Jul 7 09:22:38.990540 kernel: pci 0000:00:02.5: bridge window [io 0x5000-0x5fff]: assigned Jul 7 09:22:38.990701 kernel: pci 0000:00:02.6: bridge window [io 0x6000-0x6fff]: assigned Jul 7 09:22:38.990872 kernel: pci 0000:00:02.7: bridge window [io 0x7000-0x7fff]: assigned Jul 7 09:22:38.991034 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jul 7 09:22:38.995247 kernel: pci 0000:01:00.0: bridge window [io 0xc000-0xcfff] Jul 7 09:22:38.995456 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jul 7 09:22:38.995631 kernel: pci 0000:01:00.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jul 7 09:22:38.995826 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jul 7 09:22:38.996000 kernel: pci 0000:00:02.0: bridge window [io 0xc000-0xcfff] Jul 7 09:22:38.996164 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jul 7 09:22:38.996372 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jul 7 09:22:38.996545 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jul 7 09:22:38.996709 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x1fff] Jul 7 09:22:38.997561 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jul 7 09:22:38.997734 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jul 7 09:22:38.997899 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jul 7 09:22:38.998124 kernel: pci 0000:00:02.2: bridge window [io 0x2000-0x2fff] Jul 7 09:22:38.998340 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jul 7 09:22:38.998516 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jul 7 09:22:38.998679 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jul 7 09:22:38.998841 kernel: pci 0000:00:02.3: bridge window [io 0x3000-0x3fff] Jul 7 09:22:38.999047 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jul 7 09:22:38.999210 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jul 7 09:22:38.999416 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jul 7 09:22:38.999581 kernel: pci 0000:00:02.4: bridge window [io 0x4000-0x4fff] Jul 7 09:22:38.999743 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jul 7 09:22:38.999904 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jul 7 09:22:39.000066 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jul 7 09:22:39.000296 kernel: pci 0000:00:02.5: bridge window [io 0x5000-0x5fff] Jul 7 09:22:39.000461 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jul 7 09:22:39.000632 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jul 7 09:22:39.000794 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jul 7 09:22:39.000954 kernel: pci 0000:00:02.6: bridge window [io 0x6000-0x6fff] Jul 7 09:22:39.001115 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jul 7 09:22:39.001319 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jul 7 09:22:39.001484 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jul 7 09:22:39.001645 kernel: pci 0000:00:02.7: bridge window [io 0x7000-0x7fff] Jul 7 09:22:39.001806 kernel: pci 0000:00:02.7: bridge window [mem 
0xfdc00000-0xfddfffff] Jul 7 09:22:39.001966 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jul 7 09:22:39.002129 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 7 09:22:39.002321 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 7 09:22:39.002471 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 7 09:22:39.002618 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Jul 7 09:22:39.002764 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jul 7 09:22:39.002912 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Jul 7 09:22:39.003101 kernel: pci_bus 0000:01: resource 0 [io 0xc000-0xcfff] Jul 7 09:22:39.003306 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Jul 7 09:22:39.003462 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Jul 7 09:22:39.004231 kernel: pci_bus 0000:02: resource 0 [io 0xc000-0xcfff] Jul 7 09:22:39.004419 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Jul 7 09:22:39.004581 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Jul 7 09:22:39.004765 kernel: pci_bus 0000:03: resource 0 [io 0x1000-0x1fff] Jul 7 09:22:39.004926 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Jul 7 09:22:39.005091 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jul 7 09:22:39.006345 kernel: pci_bus 0000:04: resource 0 [io 0x2000-0x2fff] Jul 7 09:22:39.006510 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Jul 7 09:22:39.006667 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jul 7 09:22:39.006849 kernel: pci_bus 0000:05: resource 0 [io 0x3000-0x3fff] Jul 7 09:22:39.007010 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Jul 7 09:22:39.007170 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jul 7 09:22:39.007377 kernel: pci_bus 0000:06: resource 0 [io 0x4000-0x4fff] Jul 7 09:22:39.007533 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Jul 7 09:22:39.007693 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jul 7 09:22:39.007864 kernel: pci_bus 0000:07: resource 0 [io 0x5000-0x5fff] Jul 7 09:22:39.008022 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Jul 7 09:22:39.008184 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jul 7 09:22:39.008420 kernel: pci_bus 0000:08: resource 0 [io 0x6000-0x6fff] Jul 7 09:22:39.008595 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jul 7 09:22:39.008759 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jul 7 09:22:39.008930 kernel: pci_bus 0000:09: resource 0 [io 0x7000-0x7fff] Jul 7 09:22:39.009106 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jul 7 09:22:39.009309 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jul 7 09:22:39.009331 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 7 09:22:39.009351 kernel: PCI: CLS 0 bytes, default 64 Jul 7 09:22:39.009364 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 7 09:22:39.009377 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jul 7 09:22:39.009390 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 7 09:22:39.009403 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 
440795257231 ns Jul 7 09:22:39.009421 kernel: Initialise system trusted keyrings Jul 7 09:22:39.009434 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 7 09:22:39.009447 kernel: Key type asymmetric registered Jul 7 09:22:39.009459 kernel: Asymmetric key parser 'x509' registered Jul 7 09:22:39.009476 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 7 09:22:39.009489 kernel: io scheduler mq-deadline registered Jul 7 09:22:39.009502 kernel: io scheduler kyber registered Jul 7 09:22:39.009514 kernel: io scheduler bfq registered Jul 7 09:22:39.009691 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jul 7 09:22:39.009859 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jul 7 09:22:39.010023 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 09:22:39.010214 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jul 7 09:22:39.010425 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jul 7 09:22:39.010591 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 09:22:39.010763 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jul 7 09:22:39.010926 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jul 7 09:22:39.011088 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 09:22:39.011292 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jul 7 09:22:39.011466 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jul 7 09:22:39.011628 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 09:22:39.011790 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jul 7 09:22:39.011952 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jul 7 09:22:39.012113 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 09:22:39.012345 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jul 7 09:22:39.012508 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jul 7 09:22:39.013062 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 09:22:39.013247 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jul 7 09:22:39.013424 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jul 7 09:22:39.013588 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 09:22:39.013783 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jul 7 09:22:39.013946 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jul 7 09:22:39.014110 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 09:22:39.014130 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 7 09:22:39.014144 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 7 09:22:39.014157 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 7 09:22:39.014170 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 7 09:22:39.014183 kernel: 00:00: ttyS0 at I/O 0x3f8 
(irq = 4, base_baud = 115200) is a 16550A Jul 7 09:22:39.014202 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 7 09:22:39.014625 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 7 09:22:39.014641 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 7 09:22:39.014654 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 7 09:22:39.014829 kernel: rtc_cmos 00:03: RTC can wake from S4 Jul 7 09:22:39.017330 kernel: rtc_cmos 00:03: registered as rtc0 Jul 7 09:22:39.017503 kernel: rtc_cmos 00:03: setting system clock to 2025-07-07T09:22:38 UTC (1751880158) Jul 7 09:22:39.017661 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jul 7 09:22:39.017689 kernel: intel_pstate: CPU model not supported Jul 7 09:22:39.017708 kernel: NET: Registered PF_INET6 protocol family Jul 7 09:22:39.017721 kernel: Segment Routing with IPv6 Jul 7 09:22:39.017734 kernel: In-situ OAM (IOAM) with IPv6 Jul 7 09:22:39.017746 kernel: NET: Registered PF_PACKET protocol family Jul 7 09:22:39.017759 kernel: Key type dns_resolver registered Jul 7 09:22:39.017772 kernel: IPI shorthand broadcast: enabled Jul 7 09:22:39.017784 kernel: sched_clock: Marking stable (3644006145, 216396178)->(4004926284, -144523961) Jul 7 09:22:39.017798 kernel: registered taskstats version 1 Jul 7 09:22:39.017815 kernel: Loading compiled-in X.509 certificates Jul 7 09:22:39.017836 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: b8e96f4c6a9e663230fc9c12b186cf91fcc7a64e' Jul 7 09:22:39.017848 kernel: Demotion targets for Node 0: null Jul 7 09:22:39.017861 kernel: Key type .fscrypt registered Jul 7 09:22:39.017873 kernel: Key type fscrypt-provisioning registered Jul 7 09:22:39.017894 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 7 09:22:39.017907 kernel: ima: Allocated hash algorithm: sha1 Jul 7 09:22:39.017919 kernel: ima: No architecture policies found Jul 7 09:22:39.017932 kernel: clk: Disabling unused clocks Jul 7 09:22:39.017949 kernel: Warning: unable to open an initial console. Jul 7 09:22:39.017962 kernel: Freeing unused kernel image (initmem) memory: 54432K Jul 7 09:22:39.017975 kernel: Write protecting the kernel read-only data: 24576k Jul 7 09:22:39.017988 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Jul 7 09:22:39.018001 kernel: Run /init as init process Jul 7 09:22:39.018013 kernel: with arguments: Jul 7 09:22:39.018026 kernel: /init Jul 7 09:22:39.018038 kernel: with environment: Jul 7 09:22:39.018050 kernel: HOME=/ Jul 7 09:22:39.018067 kernel: TERM=linux Jul 7 09:22:39.018080 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 7 09:22:39.018094 systemd[1]: Successfully made /usr/ read-only. Jul 7 09:22:39.018111 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 7 09:22:39.018125 systemd[1]: Detected virtualization kvm. Jul 7 09:22:39.018138 systemd[1]: Detected architecture x86-64. Jul 7 09:22:39.018151 systemd[1]: Running in initrd. Jul 7 09:22:39.018169 systemd[1]: No hostname configured, using default hostname. Jul 7 09:22:39.018183 systemd[1]: Hostname set to . Jul 7 09:22:39.018225 systemd[1]: Initializing machine ID from VM UUID. 
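The rtc_cmos line above prints both a human-readable UTC timestamp and the matching epoch value in parentheses. A quick check that the two agree, also decoding the audit record timestamp from earlier in the log, which lands about 2.6 s before the clock was set:

```python
# Check that the epoch value printed by rtc_cmos matches the UTC timestamp on
# the same line (1751880158 -> 2025-07-07T09:22:38+00:00), and decode the
# earlier audit(1751880155.384:1) record the same way.
from datetime import datetime, timezone

for epoch in (1751880158, 1751880155.384):
    print(epoch, "->", datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
```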
Jul 7 09:22:39.018240 systemd[1]: Queued start job for default target initrd.target. Jul 7 09:22:39.018254 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 09:22:39.018267 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 09:22:39.018294 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 7 09:22:39.018308 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 09:22:39.018329 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 7 09:22:39.018344 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 7 09:22:39.018359 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 7 09:22:39.018373 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 7 09:22:39.018387 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 09:22:39.018401 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 09:22:39.018414 systemd[1]: Reached target paths.target - Path Units. Jul 7 09:22:39.018433 systemd[1]: Reached target slices.target - Slice Units. Jul 7 09:22:39.018446 systemd[1]: Reached target swap.target - Swaps. Jul 7 09:22:39.018460 systemd[1]: Reached target timers.target - Timer Units. Jul 7 09:22:39.018474 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 09:22:39.018487 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 09:22:39.018501 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 7 09:22:39.018514 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 7 09:22:39.018528 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 09:22:39.018541 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 09:22:39.018572 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 09:22:39.018586 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 09:22:39.018599 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 7 09:22:39.018612 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 09:22:39.018638 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 7 09:22:39.018652 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 7 09:22:39.018665 systemd[1]: Starting systemd-fsck-usr.service... Jul 7 09:22:39.018679 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 09:22:39.018698 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 09:22:39.018712 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 09:22:39.018725 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 7 09:22:39.018776 systemd-journald[229]: Collecting audit messages is disabled. Jul 7 09:22:39.018814 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jul 7 09:22:39.018828 systemd[1]: Finished systemd-fsck-usr.service. Jul 7 09:22:39.018842 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 09:22:39.018856 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 7 09:22:39.018874 kernel: Bridge firewalling registered Jul 7 09:22:39.018887 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 09:22:39.018901 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 09:22:39.018916 systemd-journald[229]: Journal started Jul 7 09:22:39.018940 systemd-journald[229]: Runtime Journal (/run/log/journal/4f6ccc75e68f49e497631825eb26744d) is 4.7M, max 38.2M, 33.4M free. Jul 7 09:22:38.951576 systemd-modules-load[231]: Inserted module 'overlay' Jul 7 09:22:38.995867 systemd-modules-load[231]: Inserted module 'br_netfilter' Jul 7 09:22:39.082868 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 09:22:39.116904 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 09:22:39.119962 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 09:22:39.126400 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 09:22:39.128965 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 09:22:39.137406 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 09:22:39.143095 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 09:22:39.152404 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 09:22:39.156663 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 09:22:39.161422 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 7 09:22:39.165134 systemd-tmpfiles[254]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 7 09:22:39.173530 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 09:22:39.178367 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 09:22:39.188754 dracut-cmdline[268]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50 Jul 7 09:22:39.234864 systemd-resolved[273]: Positive Trust Anchors: Jul 7 09:22:39.234887 systemd-resolved[273]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 09:22:39.234929 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 09:22:39.239051 systemd-resolved[273]: Defaulting to hostname 'linux'. Jul 7 09:22:39.240811 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 09:22:39.241704 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 09:22:39.311232 kernel: SCSI subsystem initialized Jul 7 09:22:39.323315 kernel: Loading iSCSI transport class v2.0-870. Jul 7 09:22:39.336219 kernel: iscsi: registered transport (tcp) Jul 7 09:22:39.362348 kernel: iscsi: registered transport (qla4xxx) Jul 7 09:22:39.362415 kernel: QLogic iSCSI HBA Driver Jul 7 09:22:39.387415 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 09:22:39.408046 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 09:22:39.411974 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 09:22:39.477460 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 7 09:22:39.480493 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 7 09:22:39.542353 kernel: raid6: sse2x4 gen() 13785 MB/s Jul 7 09:22:39.560304 kernel: raid6: sse2x2 gen() 9594 MB/s Jul 7 09:22:39.578843 kernel: raid6: sse2x1 gen() 9413 MB/s Jul 7 09:22:39.578880 kernel: raid6: using algorithm sse2x4 gen() 13785 MB/s Jul 7 09:22:39.597748 kernel: raid6: .... xor() 7803 MB/s, rmw enabled Jul 7 09:22:39.597815 kernel: raid6: using ssse3x2 recovery algorithm Jul 7 09:22:39.622219 kernel: xor: automatically using best checksumming function avx Jul 7 09:22:39.812248 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 7 09:22:39.822667 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 7 09:22:39.826414 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 09:22:39.861610 systemd-udevd[479]: Using default interface naming scheme 'v255'. Jul 7 09:22:39.870409 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 09:22:39.875383 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 7 09:22:39.910490 dracut-pre-trigger[484]: rd.md=0: removing MD RAID activation Jul 7 09:22:39.944614 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 09:22:39.947418 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 09:22:40.072421 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 09:22:40.076397 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jul 7 09:22:40.189277 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jul 7 09:22:40.200454 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jul 7 09:22:40.216207 kernel: cryptd: max_cpu_qlen set to 1000 Jul 7 09:22:40.230699 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 7 09:22:40.230760 kernel: GPT:17805311 != 125829119 Jul 7 09:22:40.230780 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 7 09:22:40.232520 kernel: GPT:17805311 != 125829119 Jul 7 09:22:40.233546 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 7 09:22:40.235653 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 09:22:40.257285 kernel: libata version 3.00 loaded. Jul 7 09:22:40.263479 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 09:22:40.263654 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 09:22:40.266004 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 09:22:40.270608 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 09:22:40.274503 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 7 09:22:40.278209 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jul 7 09:22:40.278288 kernel: ahci 0000:00:1f.2: version 3.0 Jul 7 09:22:40.285372 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 7 09:22:40.289209 kernel: AES CTR mode by8 optimization enabled Jul 7 09:22:40.302394 kernel: ACPI: bus type USB registered Jul 7 09:22:40.302440 kernel: usbcore: registered new interface driver usbfs Jul 7 09:22:40.302459 kernel: usbcore: registered new interface driver hub Jul 7 09:22:40.302476 kernel: usbcore: registered new device driver usb Jul 7 09:22:40.324252 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jul 7 09:22:40.324553 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jul 7 09:22:40.324789 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 7 09:22:40.345540 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jul 7 09:22:40.345854 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jul 7 09:22:40.355212 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jul 7 09:22:40.363269 kernel: scsi host0: ahci Jul 7 09:22:40.363335 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jul 7 09:22:40.365502 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jul 7 09:22:40.369220 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jul 7 09:22:40.379209 kernel: hub 1-0:1.0: USB hub found Jul 7 09:22:40.380205 kernel: hub 1-0:1.0: 4 ports detected Jul 7 09:22:40.380515 kernel: scsi host1: ahci Jul 7 09:22:40.382524 kernel: scsi host2: ahci Jul 7 09:22:40.386207 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
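The virtio_blk and GPT messages above fit together: the disk exposes 125829120 512-byte sectors (64.4 GB / 60 GiB), but the backup GPT header sits at LBA 17805311, i.e. at the end of a much smaller original image, which is why the kernel warns that the alternate header is not at the end of the disk. A small calculation reproducing those figures; the reading that the image was simply written to a larger disk is the usual interpretation of this warning on a first boot, and the numbers themselves are copied from the log:

```python
# Reproduce the numbers from the virtio_blk / GPT lines above: backup GPT
# header at LBA 17805311 (end of the original image) versus a disk of
# 125829120 sectors, hence the "Alternate GPT header not at the end of the
# disk" warning.
SECTOR = 512

disk_sectors = 125829120      # from: [vda] 125829120 512-byte logical blocks
backup_hdr_lba = 17805311     # from: GPT:17805311 != 125829119

disk_bytes = disk_sectors * SECTOR
image_bytes = (backup_hdr_lba + 1) * SECTOR

print(f"disk : {disk_bytes / 1e9:.1f} GB ({disk_bytes / 2**30:.1f} GiB)")
print(f"image: {image_bytes / 1e9:.1f} GB ({image_bytes / 2**30:.2f} GiB)")
```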
Jul 7 09:22:40.386564 kernel: hub 2-0:1.0: USB hub found Jul 7 09:22:40.386797 kernel: hub 2-0:1.0: 4 ports detected Jul 7 09:22:40.390262 kernel: scsi host3: ahci Jul 7 09:22:40.390505 kernel: scsi host4: ahci Jul 7 09:22:40.394633 kernel: scsi host5: ahci Jul 7 09:22:40.394944 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 lpm-pol 0 Jul 7 09:22:40.394974 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 lpm-pol 0 Jul 7 09:22:40.395012 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 lpm-pol 0 Jul 7 09:22:40.395038 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 lpm-pol 0 Jul 7 09:22:40.395062 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 lpm-pol 0 Jul 7 09:22:40.395079 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 lpm-pol 0 Jul 7 09:22:40.435667 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 7 09:22:40.493718 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 09:22:40.507346 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 7 09:22:40.533795 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 7 09:22:40.534672 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 7 09:22:40.547587 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 7 09:22:40.549798 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 7 09:22:40.577854 disk-uuid[631]: Primary Header is updated. Jul 7 09:22:40.577854 disk-uuid[631]: Secondary Entries is updated. Jul 7 09:22:40.577854 disk-uuid[631]: Secondary Header is updated. Jul 7 09:22:40.583284 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 09:22:40.590232 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 09:22:40.624464 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jul 7 09:22:40.702515 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 7 09:22:40.702616 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 7 09:22:40.708888 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 7 09:22:40.708926 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 7 09:22:40.708943 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jul 7 09:22:40.710574 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 7 09:22:40.785206 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 7 09:22:40.792200 kernel: usbcore: registered new interface driver usbhid Jul 7 09:22:40.792243 kernel: usbhid: USB HID core driver Jul 7 09:22:40.799874 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Jul 7 09:22:40.799933 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jul 7 09:22:40.823844 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 7 09:22:40.826470 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 09:22:40.827293 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jul 7 09:22:40.828890 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 09:22:40.831643 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 7 09:22:40.858409 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 7 09:22:41.593251 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 09:22:41.595044 disk-uuid[632]: The operation has completed successfully. Jul 7 09:22:41.651280 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 7 09:22:41.651448 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 7 09:22:41.696867 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 7 09:22:41.712224 sh[658]: Success Jul 7 09:22:41.736420 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 7 09:22:41.736488 kernel: device-mapper: uevent: version 1.0.3 Jul 7 09:22:41.739701 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 7 09:22:41.751306 kernel: device-mapper: verity: sha256 using shash "sha256-avx" Jul 7 09:22:41.814018 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 7 09:22:41.819570 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 7 09:22:41.829585 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 7 09:22:41.848016 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 7 09:22:41.848102 kernel: BTRFS: device fsid 9d124217-7448-4fc6-a329-8a233bb5a0ac devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (670) Jul 7 09:22:41.855253 kernel: BTRFS info (device dm-0): first mount of filesystem 9d124217-7448-4fc6-a329-8a233bb5a0ac Jul 7 09:22:41.855295 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 7 09:22:41.855314 kernel: BTRFS info (device dm-0): using free-space-tree Jul 7 09:22:41.866380 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 7 09:22:41.867697 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 7 09:22:41.868626 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 7 09:22:41.869784 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 7 09:22:41.873490 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 7 09:22:41.912252 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (703) Jul 7 09:22:41.915451 kernel: BTRFS info (device vda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 09:22:41.918950 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 09:22:41.918981 kernel: BTRFS info (device vda6): using free-space-tree Jul 7 09:22:41.929221 kernel: BTRFS info (device vda6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 09:22:41.930795 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 7 09:22:41.934420 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 7 09:22:42.018892 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 09:22:42.024396 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
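The verity-setup entries above assemble /dev/mapper/usr with dm-verity before /sysusr/usr is mounted. A rough sketch of the first step, pulling the relevant values out of the kernel command line; the key names (mount.usr, verity.usr, verity.usrhash) are assumptions about Flatcar's boot arguments, and the real unit drives veritysetup rather than this helper:

    def parse_usr_verity(cmdline_path="/proc/cmdline"):
        """Extract the /usr verity parameters from the kernel command line
        (all three key names are assumptions, not taken from the log)."""
        with open(cmdline_path) as f:
            args = f.read().split()
        opts = dict(a.split("=", 1) for a in args if "=" in a)
        return {
            "usr_source": opts.get("mount.usr"),
            "usr_roothash": opts.get("verity.usrhash"),
            "usr_partition": opts.get("verity.usr"),
        }

    # The service would then, roughly, run:
    #   veritysetup open <data-device> usr <hash-device> <usr_roothash>
    # which is what produces the "device-mapper: verity: sha256" line above.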
Jul 7 09:22:42.085858 systemd-networkd[841]: lo: Link UP Jul 7 09:22:42.085871 systemd-networkd[841]: lo: Gained carrier Jul 7 09:22:42.091502 systemd-networkd[841]: Enumeration completed Jul 7 09:22:42.091984 systemd-networkd[841]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 09:22:42.091990 systemd-networkd[841]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 09:22:42.093038 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 09:22:42.093792 systemd-networkd[841]: eth0: Link UP Jul 7 09:22:42.093797 systemd-networkd[841]: eth0: Gained carrier Jul 7 09:22:42.093808 systemd-networkd[841]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 09:22:42.094382 systemd[1]: Reached target network.target - Network. Jul 7 09:22:42.173539 systemd-networkd[841]: eth0: DHCPv4 address 10.243.72.42/30, gateway 10.243.72.41 acquired from 10.243.72.41 Jul 7 09:22:42.224681 ignition[762]: Ignition 2.21.0 Jul 7 09:22:42.224775 ignition[762]: Stage: fetch-offline Jul 7 09:22:42.225025 ignition[762]: no configs at "/usr/lib/ignition/base.d" Jul 7 09:22:42.225059 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 09:22:42.225469 ignition[762]: parsed url from cmdline: "" Jul 7 09:22:42.225477 ignition[762]: no config URL provided Jul 7 09:22:42.225488 ignition[762]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 09:22:42.229049 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 09:22:42.225515 ignition[762]: no config at "/usr/lib/ignition/user.ign" Jul 7 09:22:42.233389 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 7 09:22:42.225525 ignition[762]: failed to fetch config: resource requires networking Jul 7 09:22:42.225924 ignition[762]: Ignition finished successfully Jul 7 09:22:42.272754 ignition[851]: Ignition 2.21.0 Jul 7 09:22:42.272779 ignition[851]: Stage: fetch Jul 7 09:22:42.273031 ignition[851]: no configs at "/usr/lib/ignition/base.d" Jul 7 09:22:42.273049 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 09:22:42.273222 ignition[851]: parsed url from cmdline: "" Jul 7 09:22:42.273231 ignition[851]: no config URL provided Jul 7 09:22:42.273241 ignition[851]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 09:22:42.273257 ignition[851]: no config at "/usr/lib/ignition/user.ign" Jul 7 09:22:42.273432 ignition[851]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jul 7 09:22:42.273920 ignition[851]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jul 7 09:22:42.273973 ignition[851]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jul 7 09:22:42.295081 ignition[851]: GET result: OK Jul 7 09:22:42.295885 ignition[851]: parsing config with SHA512: 10d5db9039b7b0e0fd8fb6af5f1075de4eb9ba2cb99bad7594e72f20c1d95393a060d0f15d22700ff7ddcefbae2a2de7b4b9265086c83e537ddfbc5ba72d5f94 Jul 7 09:22:42.305965 unknown[851]: fetched base config from "system" Jul 7 09:22:42.305981 unknown[851]: fetched base config from "system" Jul 7 09:22:42.306527 ignition[851]: fetch: fetch complete Jul 7 09:22:42.305989 unknown[851]: fetched user config from "openstack" Jul 7 09:22:42.306536 ignition[851]: fetch: fetch passed Jul 7 09:22:42.310367 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). 
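The DHCPv4 lease above hands eth0 a /30, which leaves room for exactly two usable addresses: the gateway and the instance itself. A quick check with Python's ipaddress module:

    import ipaddress

    iface = ipaddress.ip_interface("10.243.72.42/30")   # the leased address above
    net = iface.network
    print(net)                                   # 10.243.72.40/30
    print([str(h) for h in net.hosts()])         # ['10.243.72.41', '10.243.72.42']
    # i.e. just the gateway and the instance fit in this subnet.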
Jul 7 09:22:42.306623 ignition[851]: Ignition finished successfully Jul 7 09:22:42.313395 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 7 09:22:42.371500 ignition[857]: Ignition 2.21.0 Jul 7 09:22:42.371525 ignition[857]: Stage: kargs Jul 7 09:22:42.371718 ignition[857]: no configs at "/usr/lib/ignition/base.d" Jul 7 09:22:42.371738 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 09:22:42.373279 ignition[857]: kargs: kargs passed Jul 7 09:22:42.376732 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 7 09:22:42.373363 ignition[857]: Ignition finished successfully Jul 7 09:22:42.380018 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 7 09:22:42.409872 ignition[864]: Ignition 2.21.0 Jul 7 09:22:42.409897 ignition[864]: Stage: disks Jul 7 09:22:42.410074 ignition[864]: no configs at "/usr/lib/ignition/base.d" Jul 7 09:22:42.410092 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 09:22:42.415175 ignition[864]: disks: disks passed Jul 7 09:22:42.415928 ignition[864]: Ignition finished successfully Jul 7 09:22:42.418512 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 7 09:22:42.419586 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 7 09:22:42.420627 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 09:22:42.422282 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 09:22:42.423822 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 09:22:42.425242 systemd[1]: Reached target basic.target - Basic System. Jul 7 09:22:42.428013 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 7 09:22:42.459351 systemd-fsck[873]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jul 7 09:22:42.462733 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 7 09:22:42.465413 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 7 09:22:42.605240 kernel: EXT4-fs (vda9): mounted filesystem df0fa228-af1b-4496-9a54-2d4ccccd27d9 r/w with ordered data mode. Quota mode: none. Jul 7 09:22:42.606311 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 7 09:22:42.607533 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 7 09:22:42.610059 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 09:22:42.612469 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 7 09:22:42.615396 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 7 09:22:42.620908 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jul 7 09:22:42.622571 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 7 09:22:42.622614 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 09:22:42.628649 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 7 09:22:42.634359 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jul 7 09:22:42.639242 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (881) Jul 7 09:22:42.647543 kernel: BTRFS info (device vda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 09:22:42.647596 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 09:22:42.647615 kernel: BTRFS info (device vda6): using free-space-tree Jul 7 09:22:42.654809 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 09:22:42.728348 initrd-setup-root[909]: cut: /sysroot/etc/passwd: No such file or directory Jul 7 09:22:42.732746 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 7 09:22:42.738876 initrd-setup-root[917]: cut: /sysroot/etc/group: No such file or directory Jul 7 09:22:42.747650 initrd-setup-root[924]: cut: /sysroot/etc/shadow: No such file or directory Jul 7 09:22:42.754760 initrd-setup-root[931]: cut: /sysroot/etc/gshadow: No such file or directory Jul 7 09:22:42.865977 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 7 09:22:42.868881 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 7 09:22:42.870578 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 7 09:22:42.891872 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 7 09:22:42.894216 kernel: BTRFS info (device vda6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 09:22:42.923030 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 7 09:22:42.927699 ignition[999]: INFO : Ignition 2.21.0 Jul 7 09:22:42.929274 ignition[999]: INFO : Stage: mount Jul 7 09:22:42.929274 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 09:22:42.929274 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 09:22:42.935571 ignition[999]: INFO : mount: mount passed Jul 7 09:22:42.936272 ignition[999]: INFO : Ignition finished successfully Jul 7 09:22:42.937442 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 7 09:22:43.276988 systemd-networkd[841]: eth0: Gained IPv6LL Jul 7 09:22:43.758218 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 7 09:22:44.787863 systemd-networkd[841]: eth0: Ignoring DHCPv6 address 2a02:1348:17c:d20a:24:19ff:fef3:482a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17c:d20a:24:19ff:fef3:482a/64 assigned by NDisc. Jul 7 09:22:44.787876 systemd-networkd[841]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jul 7 09:22:45.765223 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 7 09:22:49.772216 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 7 09:22:49.781491 coreos-metadata[883]: Jul 07 09:22:49.781 WARN failed to locate config-drive, using the metadata service API instead Jul 7 09:22:49.804714 coreos-metadata[883]: Jul 07 09:22:49.804 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 7 09:22:49.820326 coreos-metadata[883]: Jul 07 09:22:49.820 INFO Fetch successful Jul 7 09:22:49.821165 coreos-metadata[883]: Jul 07 09:22:49.821 INFO wrote hostname srv-et027.gb1.brightbox.com to /sysroot/etc/hostname Jul 7 09:22:49.823341 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jul 7 09:22:49.823524 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jul 7 09:22:49.827807 systemd[1]: Starting ignition-files.service - Ignition (files)... 
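The repeated "config-2: Can't lookup blockdev" kernel lines followed by the coreos-metadata WARN show the hostname agent polling for an OpenStack config drive for several seconds and then falling back to the metadata service. A simplified sketch of that fallback, reusing the label path and URL from the log; the real agent (afterburn) mounts and parses the drive and handles retries far more carefully:

    import os
    import time
    import urllib.request

    CONFIG_DRIVE = "/dev/disk/by-label/config-2"   # label polled for in the log
    HOSTNAME_URL = "http://169.254.169.254/latest/meta-data/hostname"

    def fetch_hostname(wait_s=30, poll_s=1.0):
        """Prefer an OpenStack config drive if one appears; otherwise fall back
        to the metadata endpoint, as the WARN line above describes."""
        deadline = time.monotonic() + wait_s
        while time.monotonic() < deadline:
            if os.path.exists(CONFIG_DRIVE):
                # The real agent would mount the drive and read its metadata here.
                raise NotImplementedError("config drive present; mount and parse it")
            time.sleep(poll_s)
        with urllib.request.urlopen(HOSTNAME_URL, timeout=10) as resp:
            return resp.read().decode().strip()

    # The result would then be written to /sysroot/etc/hostname, as logged above.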
Jul 7 09:22:49.864724 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 09:22:49.890228 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1015) Jul 7 09:22:49.893682 kernel: BTRFS info (device vda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 09:22:49.893744 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 09:22:49.895470 kernel: BTRFS info (device vda6): using free-space-tree Jul 7 09:22:49.901817 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 09:22:49.937733 ignition[1033]: INFO : Ignition 2.21.0 Jul 7 09:22:49.937733 ignition[1033]: INFO : Stage: files Jul 7 09:22:49.951861 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 09:22:49.951861 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 09:22:49.960090 ignition[1033]: DEBUG : files: compiled without relabeling support, skipping Jul 7 09:22:49.966673 ignition[1033]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 7 09:22:49.966673 ignition[1033]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 7 09:22:49.970283 ignition[1033]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 7 09:22:49.971492 ignition[1033]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 7 09:22:49.972760 unknown[1033]: wrote ssh authorized keys file for user: core Jul 7 09:22:49.974078 ignition[1033]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 7 09:22:49.975689 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 7 09:22:49.976975 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 7 09:22:50.409949 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 7 09:22:52.536800 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 7 09:22:52.539304 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 7 09:22:52.539304 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 7 09:22:53.142575 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 7 09:22:53.512346 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 7 09:22:53.513935 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 7 09:22:53.513935 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 7 09:22:53.513935 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 7 09:22:53.513935 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 7 09:22:53.513935 ignition[1033]: INFO : files: createFilesystemsFiles: 
createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 09:22:53.513935 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 09:22:53.520509 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 09:22:53.520509 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 09:22:53.520509 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 09:22:53.520509 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 09:22:53.520509 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 09:22:53.520509 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 09:22:53.520509 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 09:22:53.528841 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 7 09:22:54.218940 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 7 09:22:55.972121 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 09:22:55.972121 ignition[1033]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 7 09:22:55.977309 ignition[1033]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 09:22:55.985690 ignition[1033]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 09:22:55.985690 ignition[1033]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 7 09:22:55.985690 ignition[1033]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jul 7 09:22:55.989744 ignition[1033]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jul 7 09:22:55.989744 ignition[1033]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 7 09:22:55.989744 ignition[1033]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 7 09:22:55.989744 ignition[1033]: INFO : files: files passed Jul 7 09:22:55.989744 ignition[1033]: INFO : Ignition finished successfully Jul 7 09:22:55.993391 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 7 09:22:56.002527 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 7 09:22:56.004726 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
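The files-stage operations above are driven by the instance's Ignition config: SSH keys for core, the listed files, the kubernetes.raw sysext link, and the prepare-helm.service unit enabled via preset. A minimal sketch of a config that would produce operations of this shape, built as a Python dict and serialized to JSON; the spec version and the unit body are assumptions, only the paths and URLs echo the log:

    import json

    config = {
        "ignition": {"version": "3.4.0"},              # assumed spec version
        "storage": {
            "files": [{
                "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
            }],
            "links": [{
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw",
            }],
        },
        "systemd": {
            "units": [{
                "name": "prepare-helm.service",
                "enabled": True,                       # what "setting preset to enabled" reflects
                "contents": "[Unit]\nDescription=Unpack helm (illustrative)\n"
                            "[Service]\nType=oneshot\n"
                            "ExecStart=/usr/bin/tar -C /opt/bin -xzf /opt/helm-v3.13.2-linux-amd64.tar.gz\n"
                            "[Install]\nWantedBy=multi-user.target\n",
            }],
        },
    }

    print(json.dumps(config, indent=2))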
Jul 7 09:22:56.040498 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 7 09:22:56.040689 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 7 09:22:56.051770 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 09:22:56.054214 initrd-setup-root-after-ignition[1063]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 7 09:22:56.055772 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 09:22:56.056372 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 09:22:56.058782 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 7 09:22:56.061130 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 7 09:22:56.116472 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 7 09:22:56.116697 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 7 09:22:56.118652 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 7 09:22:56.119803 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 7 09:22:56.121531 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 09:22:56.124372 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 09:22:56.172027 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 09:22:56.175081 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 7 09:22:56.203492 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 7 09:22:56.205447 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 09:22:56.207235 systemd[1]: Stopped target timers.target - Timer Units. Jul 7 09:22:56.208653 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 09:22:56.208876 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 09:22:56.211369 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 7 09:22:56.212455 systemd[1]: Stopped target basic.target - Basic System. Jul 7 09:22:56.213692 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 7 09:22:56.214940 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 09:22:56.216665 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 09:22:56.218176 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 7 09:22:56.219705 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 09:22:56.221161 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 09:22:56.222704 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 09:22:56.224162 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 09:22:56.225517 systemd[1]: Stopped target swap.target - Swaps. Jul 7 09:22:56.226813 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 7 09:22:56.227198 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 09:22:56.228593 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jul 7 09:22:56.235062 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 09:22:56.236658 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 7 09:22:56.237066 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 09:22:56.238230 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 09:22:56.238567 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 09:22:56.240272 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 09:22:56.240469 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 09:22:56.242470 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 09:22:56.242738 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 09:22:56.246472 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 09:22:56.249494 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 09:22:56.252556 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 09:22:56.252854 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 09:22:56.255799 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 09:22:56.256846 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 09:22:56.267879 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 09:22:56.268049 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 09:22:56.315877 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 09:22:56.322578 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 09:22:56.322993 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 09:22:56.373251 ignition[1087]: INFO : Ignition 2.21.0 Jul 7 09:22:56.373251 ignition[1087]: INFO : Stage: umount Jul 7 09:22:56.373251 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 09:22:56.373251 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 09:22:56.378530 ignition[1087]: INFO : umount: umount passed Jul 7 09:22:56.378530 ignition[1087]: INFO : Ignition finished successfully Jul 7 09:22:56.378086 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 09:22:56.378335 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 09:22:56.379829 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 09:22:56.380015 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 09:22:56.381406 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 09:22:56.381507 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 09:22:56.383027 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 7 09:22:56.383116 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 7 09:22:56.384370 systemd[1]: Stopped target network.target - Network. Jul 7 09:22:56.385558 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 09:22:56.385695 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 09:22:56.386980 systemd[1]: Stopped target paths.target - Path Units. Jul 7 09:22:56.388391 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jul 7 09:22:56.390345 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 09:22:56.391343 systemd[1]: Stopped target slices.target - Slice Units. Jul 7 09:22:56.392631 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 09:22:56.393316 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 09:22:56.393455 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 09:22:56.394802 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 09:22:56.394873 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 09:22:56.396304 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 09:22:56.396394 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 09:22:56.397632 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 09:22:56.397752 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 09:22:56.399328 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 09:22:56.399409 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 09:22:56.402035 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 09:22:56.403821 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 09:22:56.408984 systemd-networkd[841]: eth0: DHCPv6 lease lost Jul 7 09:22:56.417695 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 09:22:56.417981 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 09:22:56.424425 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 7 09:22:56.424892 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 09:22:56.426087 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 09:22:56.428797 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 7 09:22:56.429844 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 7 09:22:56.430764 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 09:22:56.430890 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 09:22:56.433704 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 09:22:56.434844 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 09:22:56.434946 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 09:22:56.437411 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 09:22:56.437513 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 09:22:56.440756 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 09:22:56.440823 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 09:22:56.442618 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 09:22:56.442696 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 09:22:56.446437 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 09:22:56.452581 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 7 09:22:56.452707 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 7 09:22:56.457377 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jul 7 09:22:56.457722 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 09:22:56.458940 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 09:22:56.459019 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 09:22:56.459808 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 09:22:56.459876 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 09:22:56.461131 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 09:22:56.461222 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 09:22:56.464143 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 09:22:56.464295 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 09:22:56.465596 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 09:22:56.465667 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 09:22:56.470350 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 09:22:56.471930 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 7 09:22:56.472055 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 09:22:56.476373 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 09:22:56.476445 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 09:22:56.478338 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 7 09:22:56.478442 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 09:22:56.487311 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 09:22:56.487380 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 09:22:56.488866 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 09:22:56.488949 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 09:22:56.494820 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 7 09:22:56.494918 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jul 7 09:22:56.495064 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 7 09:22:56.495146 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 7 09:22:56.497969 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 09:22:56.498128 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 09:22:56.501731 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 09:22:56.501879 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 09:22:56.503777 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 09:22:56.507253 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 09:22:56.530170 systemd[1]: Switching root. Jul 7 09:22:56.559471 systemd-journald[229]: Journal stopped Jul 7 09:22:58.436942 systemd-journald[229]: Received SIGTERM from PID 1 (systemd). 
Jul 7 09:22:58.437069 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 09:22:58.437143 kernel: SELinux: policy capability open_perms=1 Jul 7 09:22:58.437171 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 09:22:58.439228 kernel: SELinux: policy capability always_check_network=0 Jul 7 09:22:58.439254 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 09:22:58.439306 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 09:22:58.439335 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 09:22:58.439353 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 09:22:58.439371 kernel: SELinux: policy capability userspace_initial_context=0 Jul 7 09:22:58.439395 kernel: audit: type=1403 audit(1751880176.998:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 09:22:58.439420 systemd[1]: Successfully loaded SELinux policy in 72.129ms. Jul 7 09:22:58.439452 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 24.222ms. Jul 7 09:22:58.439479 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 7 09:22:58.439500 systemd[1]: Detected virtualization kvm. Jul 7 09:22:58.439551 systemd[1]: Detected architecture x86-64. Jul 7 09:22:58.439574 systemd[1]: Detected first boot. Jul 7 09:22:58.439594 systemd[1]: Hostname set to . Jul 7 09:22:58.439613 systemd[1]: Initializing machine ID from VM UUID. Jul 7 09:22:58.439632 zram_generator::config[1130]: No configuration found. Jul 7 09:22:58.439653 kernel: Guest personality initialized and is inactive Jul 7 09:22:58.439671 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 7 09:22:58.439689 kernel: Initialized host personality Jul 7 09:22:58.439737 kernel: NET: Registered PF_VSOCK protocol family Jul 7 09:22:58.439760 systemd[1]: Populated /etc with preset unit settings. Jul 7 09:22:58.439781 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 7 09:22:58.439800 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 7 09:22:58.439832 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 7 09:22:58.439893 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 7 09:22:58.439963 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 09:22:58.439987 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 09:22:58.440006 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 09:22:58.440025 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 09:22:58.440045 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 7 09:22:58.440065 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 7 09:22:58.440084 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 09:22:58.440103 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 09:22:58.440149 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
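"Initializing machine ID from VM UUID" above means systemd seeded /etc/machine-id from the hypervisor-provided DMI UUID on this first boot instead of generating a random one. A rough illustration of where that UUID lives and how it maps onto the 32-hex-digit machine-id form; systemd's real logic consults more sources and applies precedence rules:

    def machine_id_from_dmi(path="/sys/class/dmi/id/product_uuid"):
        """Read the VM's DMI product UUID and normalize it to the
        32-hex-digit form used by /etc/machine-id (simplified)."""
        with open(path) as f:
            uuid = f.read().strip()
        mid = uuid.replace("-", "").lower()
        if len(mid) != 32:
            raise ValueError("unexpected DMI UUID: %r" % uuid)
        return mid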
Jul 7 09:22:58.440172 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 09:22:58.441232 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 09:22:58.441278 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 09:22:58.441301 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 09:22:58.441322 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 09:22:58.441372 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 7 09:22:58.441405 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 09:22:58.441427 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 09:22:58.441446 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 7 09:22:58.441466 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 7 09:22:58.441485 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 7 09:22:58.441504 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 09:22:58.441524 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 09:22:58.441544 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 09:22:58.441563 systemd[1]: Reached target slices.target - Slice Units. Jul 7 09:22:58.441610 systemd[1]: Reached target swap.target - Swaps. Jul 7 09:22:58.441633 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 09:22:58.441653 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 09:22:58.441673 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 7 09:22:58.441693 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 09:22:58.441712 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 09:22:58.441732 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 09:22:58.441753 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 09:22:58.441773 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 09:22:58.441829 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 09:22:58.441852 systemd[1]: Mounting media.mount - External Media Directory... Jul 7 09:22:58.441872 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 09:22:58.441891 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 09:22:58.441911 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 7 09:22:58.441930 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 09:22:58.441951 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 09:22:58.441971 systemd[1]: Reached target machines.target - Containers. Jul 7 09:22:58.442022 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jul 7 09:22:58.442046 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 09:22:58.442065 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 09:22:58.442085 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 7 09:22:58.442104 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 09:22:58.442123 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 09:22:58.442143 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 09:22:58.442163 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 09:22:58.445035 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 09:22:58.445107 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 7 09:22:58.445133 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 7 09:22:58.445154 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 7 09:22:58.445173 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 7 09:22:58.445218 systemd[1]: Stopped systemd-fsck-usr.service. Jul 7 09:22:58.445242 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 09:22:58.445263 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 09:22:58.445347 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 09:22:58.445372 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 09:22:58.445418 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 7 09:22:58.445442 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 7 09:22:58.445462 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 09:22:58.445482 systemd[1]: verity-setup.service: Deactivated successfully. Jul 7 09:22:58.445501 systemd[1]: Stopped verity-setup.service. Jul 7 09:22:58.445521 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 09:22:58.445541 kernel: ACPI: bus type drm_connector registered Jul 7 09:22:58.445560 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 09:22:58.445605 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 09:22:58.445629 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 09:22:58.445649 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 09:22:58.445681 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 09:22:58.445703 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 7 09:22:58.445724 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 09:22:58.445743 kernel: loop: module loaded Jul 7 09:22:58.445761 kernel: fuse: init (API version 7.41) Jul 7 09:22:58.445780 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jul 7 09:22:58.445854 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 7 09:22:58.445879 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 7 09:22:58.445899 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 09:22:58.445920 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 09:22:58.445939 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 09:22:58.445959 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 09:22:58.445978 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 09:22:58.445997 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 09:22:58.446042 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 7 09:22:58.446089 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 7 09:22:58.446138 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 09:22:58.446161 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 09:22:58.446197 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 09:22:58.446221 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 09:22:58.446269 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 09:22:58.446292 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 09:22:58.446312 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 09:22:58.446332 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 09:22:58.446379 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 7 09:22:58.446403 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 09:22:58.446424 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 09:22:58.446482 systemd-journald[1220]: Collecting audit messages is disabled. Jul 7 09:22:58.446527 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 09:22:58.446549 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 09:22:58.446569 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 09:22:58.446619 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 09:22:58.446643 systemd-journald[1220]: Journal started Jul 7 09:22:58.446676 systemd-journald[1220]: Runtime Journal (/run/log/journal/4f6ccc75e68f49e497631825eb26744d) is 4.7M, max 38.2M, 33.4M free. Jul 7 09:22:57.916073 systemd[1]: Queued start job for default target multi-user.target. Jul 7 09:22:57.931500 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 7 09:22:57.932455 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 7 09:22:58.460262 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 09:22:58.460323 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 09:22:58.468311 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Jul 7 09:22:58.490262 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 09:22:58.491992 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 09:22:58.496635 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 7 09:22:58.497746 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 09:22:58.498961 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 09:22:58.500931 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 7 09:22:58.529526 kernel: loop0: detected capacity change from 0 to 8 Jul 7 09:22:58.540132 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 09:22:58.545002 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 09:22:58.549368 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 09:22:58.555549 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 7 09:22:58.559897 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 09:22:58.561204 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 09:22:58.586200 kernel: loop1: detected capacity change from 0 to 221472 Jul 7 09:22:58.622522 systemd-journald[1220]: Time spent on flushing to /var/log/journal/4f6ccc75e68f49e497631825eb26744d is 32.345ms for 1179 entries. Jul 7 09:22:58.622522 systemd-journald[1220]: System Journal (/var/log/journal/4f6ccc75e68f49e497631825eb26744d) is 8M, max 584.8M, 576.8M free. Jul 7 09:22:58.674836 systemd-journald[1220]: Received client request to flush runtime journal. Jul 7 09:22:58.674894 kernel: loop2: detected capacity change from 0 to 113872 Jul 7 09:22:58.633285 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 7 09:22:58.655501 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Jul 7 09:22:58.655522 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Jul 7 09:22:58.688402 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 7 09:22:58.696056 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 09:22:58.700384 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 09:22:58.743218 kernel: loop3: detected capacity change from 0 to 146240 Jul 7 09:22:58.827212 kernel: loop4: detected capacity change from 0 to 8 Jul 7 09:22:58.824618 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 09:22:58.829432 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 09:22:58.912236 kernel: loop5: detected capacity change from 0 to 221472 Jul 7 09:22:58.930754 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 09:22:58.938250 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 09:22:58.948531 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Jul 7 09:22:58.948569 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Jul 7 09:22:58.955175 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 7 09:22:58.966084 kernel: loop6: detected capacity change from 0 to 113872 Jul 7 09:22:58.992219 kernel: loop7: detected capacity change from 0 to 146240 Jul 7 09:22:59.028560 (sd-merge)[1289]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jul 7 09:22:59.031937 (sd-merge)[1289]: Merged extensions into '/usr'. Jul 7 09:22:59.047400 systemd[1]: Reload requested from client PID 1249 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 09:22:59.047538 systemd[1]: Reloading... Jul 7 09:22:59.254334 zram_generator::config[1323]: No configuration found. Jul 7 09:22:59.480459 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 09:22:59.643215 ldconfig[1245]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 09:22:59.688744 systemd[1]: Reloading finished in 640 ms. Jul 7 09:22:59.708692 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 09:22:59.711844 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 09:22:59.730393 systemd[1]: Starting ensure-sysext.service... Jul 7 09:22:59.734418 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 09:22:59.784072 systemd[1]: Reload requested from client PID 1376 ('systemctl') (unit ensure-sysext.service)... Jul 7 09:22:59.784317 systemd[1]: Reloading... Jul 7 09:22:59.810936 systemd-tmpfiles[1377]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 7 09:22:59.810996 systemd-tmpfiles[1377]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 7 09:22:59.811891 systemd-tmpfiles[1377]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 09:22:59.812401 systemd-tmpfiles[1377]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 09:22:59.814276 systemd-tmpfiles[1377]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 09:22:59.814974 systemd-tmpfiles[1377]: ACLs are not supported, ignoring. Jul 7 09:22:59.815138 systemd-tmpfiles[1377]: ACLs are not supported, ignoring. Jul 7 09:22:59.823072 systemd-tmpfiles[1377]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 09:22:59.823091 systemd-tmpfiles[1377]: Skipping /boot Jul 7 09:22:59.855500 systemd-tmpfiles[1377]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 09:22:59.855522 systemd-tmpfiles[1377]: Skipping /boot Jul 7 09:22:59.977280 zram_generator::config[1404]: No configuration found. Jul 7 09:23:00.134244 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 09:23:00.259414 systemd[1]: Reloading finished in 474 ms. Jul 7 09:23:00.285161 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 09:23:00.310585 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 09:23:00.323521 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
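The (sd-merge) lines above show systemd-sysext discovering the containerd-flatcar, docker-flatcar, kubernetes and oem-openstack images and merging them over /usr, which is why systemd immediately reloads its units. A toy sketch of just the discovery half, scanning a subset of the directories sysext consults for *.raw images; the actual merge validates extension-release metadata and builds an overlayfs, which is omitted here:

    import os

    # A subset of the directories systemd-sysext scans for extension images.
    SYSEXT_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

    def list_extension_images():
        """Return {name: resolved_path} for *.raw images found in the usual dirs."""
        found = {}
        for d in SYSEXT_DIRS:
            if not os.path.isdir(d):
                continue
            for entry in sorted(os.listdir(d)):
                if entry.endswith(".raw"):
                    name = entry[:-len(".raw")]
                    found.setdefault(name, os.path.realpath(os.path.join(d, entry)))
        return found

    # e.g. the kubernetes.raw link written by Ignition earlier would resolve to
    # /opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw.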
Jul 7 09:23:00.330123 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 09:23:00.337556 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 09:23:00.343494 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 09:23:00.348236 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 09:23:00.354312 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 09:23:00.360960 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 09:23:00.362500 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 09:23:00.371809 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 09:23:00.377092 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 09:23:00.388302 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 09:23:00.389497 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 09:23:00.389686 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 09:23:00.389875 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 09:23:00.401860 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 09:23:00.403545 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 09:23:00.403789 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 09:23:00.404369 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 09:23:00.404530 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 09:23:00.419512 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 09:23:00.427066 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 09:23:00.427847 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 09:23:00.443533 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 09:23:00.445622 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 09:23:00.445794 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
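Several units above were skipped because their conditions were not met (ConditionVirtualization=xen, the hibernate EFI variable, and so on), and systemd-tmpfiles reported duplicate lines in a few tmpfiles.d fragments. Both are informational; a sketch for inspecting them after boot, assuming stock systemd tools:

    # Show whether a unit's condition check passed and when it was evaluated.
    systemctl show -p ConditionResult,ConditionTimestamp systemd-binfmt.service

    # Print the merged tmpfiles.d configuration to see which fragments
    # define the duplicated paths (e.g. /var/log/journal, /root).
    systemd-tmpfiles --cat-config | grep -n '/var/log/journal'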
Jul 7 09:23:00.446033 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 09:23:00.449156 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 09:23:00.450546 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 09:23:00.454672 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 09:23:00.454955 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 09:23:00.457344 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 09:23:00.457631 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 09:23:00.472656 systemd[1]: Finished ensure-sysext.service. Jul 7 09:23:00.475054 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 09:23:00.476984 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 09:23:00.487399 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 09:23:00.490172 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 09:23:00.490359 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 09:23:00.495152 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 7 09:23:00.511065 systemd-udevd[1466]: Using default interface naming scheme 'v255'. Jul 7 09:23:00.512289 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 09:23:00.514865 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 09:23:00.521288 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 09:23:00.522099 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 09:23:00.563315 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 09:23:00.566548 augenrules[1503]: No rules Jul 7 09:23:00.570540 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 09:23:00.570892 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 09:23:00.577718 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 09:23:00.583840 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 09:23:00.587498 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 09:23:00.876567 systemd-resolved[1465]: Positive Trust Anchors: Jul 7 09:23:00.876591 systemd-resolved[1465]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 09:23:00.876644 systemd-resolved[1465]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 09:23:00.906228 systemd-resolved[1465]: Using system hostname 'srv-et027.gb1.brightbox.com'. Jul 7 09:23:00.909355 systemd-networkd[1516]: lo: Link UP Jul 7 09:23:00.909367 systemd-networkd[1516]: lo: Gained carrier Jul 7 09:23:00.911014 systemd-networkd[1516]: Enumeration completed Jul 7 09:23:00.911160 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 09:23:00.925388 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 7 09:23:00.932539 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 09:23:00.935455 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 09:23:00.936485 systemd[1]: Reached target network.target - Network. Jul 7 09:23:00.937921 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 09:23:00.957627 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 7 09:23:00.959347 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 09:23:00.961687 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 09:23:00.963489 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 09:23:00.965274 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 7 09:23:00.966293 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 09:23:00.969349 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 09:23:00.969398 systemd[1]: Reached target paths.target - Path Units. Jul 7 09:23:00.970044 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 09:23:00.971078 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 09:23:00.972821 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 09:23:00.974291 systemd[1]: Reached target timers.target - Timer Units. Jul 7 09:23:00.980387 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 09:23:00.987573 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 09:23:00.998057 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 7 09:23:01.001261 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 7 09:23:01.004270 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 7 09:23:01.015738 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
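With systemd-resolved, systemd-networkd and systemd-timesyncd all up, the resolver, link and clock state can be checked directly. A sketch assuming the usual client tools shipped with systemd:

    networkctl list                  # links enumerated by systemd-networkd
    resolvectl status                # per-link DNS servers and DNSSEC setting
    resolvectl query flatcar.org     # exercise the local stub resolver
    timedatectl                      # NTP sync state from systemd-timesyncd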
Jul 7 09:23:01.018962 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 7 09:23:01.024802 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 7 09:23:01.026331 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 09:23:01.048460 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 7 09:23:01.048703 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 09:23:01.051294 systemd[1]: Reached target basic.target - Basic System. Jul 7 09:23:01.053333 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 09:23:01.053387 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 09:23:01.058307 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 09:23:01.066510 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 7 09:23:01.072594 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 09:23:01.079714 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 09:23:01.088013 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 09:23:01.092404 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 09:23:01.093308 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 09:23:01.101938 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 7 09:23:01.108698 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 09:23:01.115499 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 09:23:01.120333 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 09:23:01.125493 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 09:23:01.128208 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 7 09:23:01.137647 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 09:23:01.145022 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 09:23:01.145943 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 09:23:01.156283 jq[1558]: false Jul 7 09:23:01.157207 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 09:23:01.169686 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 09:23:01.176221 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 09:23:01.177741 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 09:23:01.179637 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Jul 7 09:23:01.189847 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Refreshing passwd entry cache Jul 7 09:23:01.185965 oslogin_cache_refresh[1560]: Refreshing passwd entry cache Jul 7 09:23:01.190608 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Failure getting users, quitting Jul 7 09:23:01.190608 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 7 09:23:01.190608 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Refreshing group entry cache Jul 7 09:23:01.190097 oslogin_cache_refresh[1560]: Failure getting users, quitting Jul 7 09:23:01.190133 oslogin_cache_refresh[1560]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 7 09:23:01.190264 oslogin_cache_refresh[1560]: Refreshing group entry cache Jul 7 09:23:01.195821 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Failure getting groups, quitting Jul 7 09:23:01.195821 google_oslogin_nss_cache[1560]: oslogin_cache_refresh[1560]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 09:23:01.195458 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 7 09:23:01.192316 oslogin_cache_refresh[1560]: Failure getting groups, quitting Jul 7 09:23:01.192330 oslogin_cache_refresh[1560]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 09:23:01.196307 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 7 09:23:01.205824 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 09:23:01.207878 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 09:23:01.243678 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 09:23:01.244441 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 09:23:01.263923 extend-filesystems[1559]: Found /dev/vda6 Jul 7 09:23:01.270387 jq[1572]: true Jul 7 09:23:01.267916 dbus-daemon[1556]: [system] SELinux support is enabled Jul 7 09:23:01.270980 tar[1575]: linux-amd64/helm Jul 7 09:23:01.271706 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 09:23:01.296029 extend-filesystems[1559]: Found /dev/vda9 Jul 7 09:23:01.295170 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 09:23:01.295233 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 09:23:01.297324 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 09:23:01.297361 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 09:23:01.339291 jq[1594]: true Jul 7 09:23:01.364971 extend-filesystems[1559]: Checking size of /dev/vda9 Jul 7 09:23:01.367808 (ntainerd)[1593]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 09:23:01.396312 update_engine[1568]: I20250707 09:23:01.395581 1568 main.cc:92] Flatcar Update Engine starting Jul 7 09:23:01.409487 systemd[1]: Started update-engine.service - Update Engine. 
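update-engine is now running (and locksmithd, the cluster reboot manager, starts shortly after). If the usual Flatcar clients are present on the image, their state can be queried as below; the tool names and flags are taken from Flatcar documentation and should be treated as an assumption here:

    update_engine_client -status     # current state of the OS update engine
    locksmithctl status              # reboot strategy and any held locks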
Jul 7 09:23:01.415324 update_engine[1568]: I20250707 09:23:01.411787 1568 update_check_scheduler.cc:74] Next update check in 3m3s Jul 7 09:23:01.423510 extend-filesystems[1559]: Resized partition /dev/vda9 Jul 7 09:23:01.427320 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 09:23:01.440315 extend-filesystems[1618]: resize2fs 1.47.2 (1-Jan-2025) Jul 7 09:23:01.456364 systemd-networkd[1516]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 09:23:01.466863 systemd-networkd[1516]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 09:23:01.469390 systemd-networkd[1516]: eth0: Link UP Jul 7 09:23:01.469830 systemd-networkd[1516]: eth0: Gained carrier Jul 7 09:23:01.469887 systemd-networkd[1516]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 09:23:01.478220 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jul 7 09:23:01.511334 systemd-networkd[1516]: eth0: DHCPv4 address 10.243.72.42/30, gateway 10.243.72.41 acquired from 10.243.72.41 Jul 7 09:23:01.515609 dbus-daemon[1556]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1516 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 7 09:23:01.519133 systemd-timesyncd[1487]: Network configuration changed, trying to establish connection. Jul 7 09:23:01.526481 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 7 09:23:02.352857 systemd-resolved[1465]: Clock change detected. Flushing caches. Jul 7 09:23:02.379181 systemd-timesyncd[1487]: Contacted time server 85.199.214.101:123 (0.flatcar.pool.ntp.org). Jul 7 09:23:02.379383 systemd-timesyncd[1487]: Initial clock synchronization to Mon 2025-07-07 09:23:02.352338 UTC. Jul 7 09:23:02.430433 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 09:23:02.440078 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 09:23:02.441804 bash[1619]: Updated "/home/core/.ssh/authorized_keys" Jul 7 09:23:02.455070 systemd[1]: Starting sshkeys.service... Jul 7 09:23:02.464752 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 7 09:23:02.481614 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 09:23:02.539539 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 09:23:02.560355 systemd-logind[1566]: New seat seat0. Jul 7 09:23:02.566921 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 7 09:23:02.576481 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 7 09:23:02.579900 systemd[1]: Started systemd-logind.service - User Login Management. 
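extend-filesystems is growing the root filesystem on /dev/vda9 to fill the enlarged partition; the completion ("resized filesystem to 15121403 blocks") is logged a little further down. A rough manual equivalent, assuming an ext4 root on /dev/vda9 as in this log:

    lsblk /dev/vda               # partition layout; vda9 holds the root filesystem
    df -h /                      # size before/after the online resize
    sudo resize2fs /dev/vda9     # online ext4 grow, what the service effectively runs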
Jul 7 09:23:02.648721 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 7 09:23:02.682534 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jul 7 09:23:02.721986 extend-filesystems[1618]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 7 09:23:02.721986 extend-filesystems[1618]: old_desc_blocks = 1, new_desc_blocks = 8 Jul 7 09:23:02.721986 extend-filesystems[1618]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jul 7 09:23:02.737481 extend-filesystems[1559]: Resized filesystem in /dev/vda9 Jul 7 09:23:02.725473 dbus-daemon[1556]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 7 09:23:02.725403 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 7 09:23:02.729241 dbus-daemon[1556]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1621 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 7 09:23:02.730666 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 09:23:02.731747 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 09:23:02.751287 systemd[1]: Starting polkit.service - Authorization Manager... Jul 7 09:23:02.898129 kernel: mousedev: PS/2 mouse device common for all mice Jul 7 09:23:02.917257 locksmithd[1612]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 09:23:02.950386 containerd[1593]: time="2025-07-07T09:23:02Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 7 09:23:02.955130 containerd[1593]: time="2025-07-07T09:23:02.954073297Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 7 09:23:02.989464 containerd[1593]: time="2025-07-07T09:23:02.987087522Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.82µs" Jul 7 09:23:02.989464 containerd[1593]: time="2025-07-07T09:23:02.989235048Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 7 09:23:02.989464 containerd[1593]: time="2025-07-07T09:23:02.989280650Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 7 09:23:02.992523 containerd[1593]: time="2025-07-07T09:23:02.991352618Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 7 09:23:02.992523 containerd[1593]: time="2025-07-07T09:23:02.991434681Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 7 09:23:02.992523 containerd[1593]: time="2025-07-07T09:23:02.991506279Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 09:23:02.992523 containerd[1593]: time="2025-07-07T09:23:02.991657138Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 09:23:02.992523 containerd[1593]: time="2025-07-07T09:23:02.991685914Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 09:23:02.992523 containerd[1593]: time="2025-07-07T09:23:02.992047484Z" level=info msg="skip loading plugin" 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 09:23:02.992523 containerd[1593]: time="2025-07-07T09:23:02.992115081Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 09:23:02.992523 containerd[1593]: time="2025-07-07T09:23:02.992153352Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 09:23:02.992523 containerd[1593]: time="2025-07-07T09:23:02.992173078Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 7 09:23:02.992523 containerd[1593]: time="2025-07-07T09:23:02.992367291Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 7 09:23:02.992906 containerd[1593]: time="2025-07-07T09:23:02.992853245Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 09:23:02.992975 containerd[1593]: time="2025-07-07T09:23:02.992901167Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 09:23:02.992975 containerd[1593]: time="2025-07-07T09:23:02.992938713Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 7 09:23:02.993034 containerd[1593]: time="2025-07-07T09:23:02.992984321Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 7 09:23:03.003641 containerd[1593]: time="2025-07-07T09:23:03.003596626Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 7 09:23:03.003780 containerd[1593]: time="2025-07-07T09:23:03.003754544Z" level=info msg="metadata content store policy set" policy=shared Jul 7 09:23:03.012433 containerd[1593]: time="2025-07-07T09:23:03.012392372Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 7 09:23:03.012550 containerd[1593]: time="2025-07-07T09:23:03.012510097Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 7 09:23:03.012550 containerd[1593]: time="2025-07-07T09:23:03.012544922Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 7 09:23:03.012627 containerd[1593]: time="2025-07-07T09:23:03.012567766Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 7 09:23:03.012627 containerd[1593]: time="2025-07-07T09:23:03.012597380Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 7 09:23:03.012627 containerd[1593]: time="2025-07-07T09:23:03.012614707Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 7 09:23:03.012759 containerd[1593]: time="2025-07-07T09:23:03.012678087Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 7 09:23:03.012759 containerd[1593]: time="2025-07-07T09:23:03.012702390Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 7 09:23:03.012759 containerd[1593]: time="2025-07-07T09:23:03.012719505Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 7 09:23:03.012759 containerd[1593]: time="2025-07-07T09:23:03.012735572Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 7 09:23:03.012759 containerd[1593]: time="2025-07-07T09:23:03.012750539Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 7 09:23:03.012899 containerd[1593]: time="2025-07-07T09:23:03.012770640Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 7 09:23:03.013174 containerd[1593]: time="2025-07-07T09:23:03.012981728Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 7 09:23:03.013174 containerd[1593]: time="2025-07-07T09:23:03.013019380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 7 09:23:03.013174 containerd[1593]: time="2025-07-07T09:23:03.013046206Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 7 09:23:03.013174 containerd[1593]: time="2025-07-07T09:23:03.013122368Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 7 09:23:03.013174 containerd[1593]: time="2025-07-07T09:23:03.013148039Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 7 09:23:03.013174 containerd[1593]: time="2025-07-07T09:23:03.013165735Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 7 09:23:03.013385 containerd[1593]: time="2025-07-07T09:23:03.013183319Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 7 09:23:03.013385 containerd[1593]: time="2025-07-07T09:23:03.013224998Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 7 09:23:03.013385 containerd[1593]: time="2025-07-07T09:23:03.013259736Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 7 09:23:03.013385 containerd[1593]: time="2025-07-07T09:23:03.013277747Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 7 09:23:03.013385 containerd[1593]: time="2025-07-07T09:23:03.013316806Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 7 09:23:03.013566 containerd[1593]: time="2025-07-07T09:23:03.013408846Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 7 09:23:03.015435 containerd[1593]: time="2025-07-07T09:23:03.015155430Z" level=info msg="Start snapshots syncer" Jul 7 09:23:03.016592 containerd[1593]: time="2025-07-07T09:23:03.015379820Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 7 09:23:03.021474 containerd[1593]: time="2025-07-07T09:23:03.021401496Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 7 09:23:03.022307 containerd[1593]: time="2025-07-07T09:23:03.021519663Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 7 09:23:03.022307 containerd[1593]: time="2025-07-07T09:23:03.021721141Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 7 09:23:03.022307 containerd[1593]: time="2025-07-07T09:23:03.021929628Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 7 09:23:03.022307 containerd[1593]: time="2025-07-07T09:23:03.022004867Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 7 09:23:03.022307 containerd[1593]: time="2025-07-07T09:23:03.022055207Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 7 09:23:03.022307 containerd[1593]: time="2025-07-07T09:23:03.022086932Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 7 09:23:03.022307 containerd[1593]: time="2025-07-07T09:23:03.022134547Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 7 09:23:03.022307 containerd[1593]: time="2025-07-07T09:23:03.022162382Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 7 09:23:03.022307 containerd[1593]: time="2025-07-07T09:23:03.022179547Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 7 09:23:03.022307 containerd[1593]: time="2025-07-07T09:23:03.022233005Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 7 09:23:03.022307 containerd[1593]: 
time="2025-07-07T09:23:03.022255242Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 7 09:23:03.023173 containerd[1593]: time="2025-07-07T09:23:03.022317071Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 7 09:23:03.023222 containerd[1593]: time="2025-07-07T09:23:03.023170020Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 09:23:03.023276 containerd[1593]: time="2025-07-07T09:23:03.023224661Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 09:23:03.023276 containerd[1593]: time="2025-07-07T09:23:03.023244858Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 09:23:03.023276 containerd[1593]: time="2025-07-07T09:23:03.023260093Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 09:23:03.023384 containerd[1593]: time="2025-07-07T09:23:03.023273892Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 7 09:23:03.023384 containerd[1593]: time="2025-07-07T09:23:03.023310254Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 7 09:23:03.023384 containerd[1593]: time="2025-07-07T09:23:03.023329427Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 7 09:23:03.025304 containerd[1593]: time="2025-07-07T09:23:03.023354661Z" level=info msg="runtime interface created" Jul 7 09:23:03.025304 containerd[1593]: time="2025-07-07T09:23:03.025174601Z" level=info msg="created NRI interface" Jul 7 09:23:03.028440 containerd[1593]: time="2025-07-07T09:23:03.028047828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 7 09:23:03.028440 containerd[1593]: time="2025-07-07T09:23:03.028120515Z" level=info msg="Connect containerd service" Jul 7 09:23:03.028440 containerd[1593]: time="2025-07-07T09:23:03.028184329Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 09:23:03.039517 containerd[1593]: time="2025-07-07T09:23:03.036378095Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 09:23:03.065044 polkitd[1637]: Started polkitd version 126 Jul 7 09:23:03.074298 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jul 7 09:23:03.109278 polkitd[1637]: Loading rules from directory /etc/polkit-1/rules.d Jul 7 09:23:03.112544 kernel: ACPI: button: Power Button [PWRF] Jul 7 09:23:03.114871 polkitd[1637]: Loading rules from directory /run/polkit-1/rules.d Jul 7 09:23:03.118549 polkitd[1637]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jul 7 09:23:03.124723 polkitd[1637]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jul 7 09:23:03.124782 polkitd[1637]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory 
(g-file-error-quark, 4) Jul 7 09:23:03.124901 polkitd[1637]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 7 09:23:03.135579 polkitd[1637]: Finished loading, compiling and executing 2 rules Jul 7 09:23:03.139560 systemd[1]: Started polkit.service - Authorization Manager. Jul 7 09:23:03.142438 dbus-daemon[1556]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 7 09:23:03.150549 polkitd[1637]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 7 09:23:03.254738 systemd-hostnamed[1621]: Hostname set to (static) Jul 7 09:23:03.298756 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 7 09:23:03.304786 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 7 09:23:03.334048 systemd-networkd[1516]: eth0: Gained IPv6LL Jul 7 09:23:03.385981 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 09:23:03.405542 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 09:23:03.421762 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 09:23:03.428819 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 09:23:03.442357 containerd[1593]: time="2025-07-07T09:23:03.441181053Z" level=info msg="Start subscribing containerd event" Jul 7 09:23:03.442357 containerd[1593]: time="2025-07-07T09:23:03.442068048Z" level=info msg="Start recovering state" Jul 7 09:23:03.443923 containerd[1593]: time="2025-07-07T09:23:03.443422533Z" level=info msg="Start event monitor" Jul 7 09:23:03.443923 containerd[1593]: time="2025-07-07T09:23:03.443780681Z" level=info msg="Start cni network conf syncer for default" Jul 7 09:23:03.443923 containerd[1593]: time="2025-07-07T09:23:03.443799823Z" level=info msg="Start streaming server" Jul 7 09:23:03.443923 containerd[1593]: time="2025-07-07T09:23:03.443877868Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 7 09:23:03.444364 containerd[1593]: time="2025-07-07T09:23:03.444335832Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 09:23:03.444598 containerd[1593]: time="2025-07-07T09:23:03.444574400Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 09:23:03.446768 containerd[1593]: time="2025-07-07T09:23:03.446358987Z" level=info msg="runtime interface starting up..." Jul 7 09:23:03.446768 containerd[1593]: time="2025-07-07T09:23:03.446390536Z" level=info msg="starting plugins..." Jul 7 09:23:03.446768 containerd[1593]: time="2025-07-07T09:23:03.446467634Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 7 09:23:03.447467 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 09:23:03.449041 containerd[1593]: time="2025-07-07T09:23:03.449016584Z" level=info msg="containerd successfully booted in 0.501927s" Jul 7 09:23:03.564178 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 09:23:03.899636 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 09:23:04.146967 systemd-logind[1566]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 7 09:23:04.232165 systemd-logind[1566]: Watching system buttons on /dev/input/event3 (Power Button) Jul 7 09:23:04.645459 tar[1575]: linux-amd64/LICENSE Jul 7 09:23:04.645459 tar[1575]: linux-amd64/README.md Jul 7 09:23:04.692660 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
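containerd reports that it booted successfully, and the earlier "no network config found in /etc/cni/net.d" error is expected on a node where no CNI plugin has installed a configuration yet. A quick check of both, assuming the ctr client from the merged containerd sysext:

    # The daemon should answer on its socket with client and server versions.
    sudo ctr --address /run/containerd/containerd.sock version

    # Empty until a CNI plugin (typically installed during cluster join)
    # drops a conflist here.
    ls -l /etc/cni/net.d/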
Jul 7 09:23:04.710261 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 09:23:04.847289 systemd-networkd[1516]: eth0: Ignoring DHCPv6 address 2a02:1348:17c:d20a:24:19ff:fef3:482a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17c:d20a:24:19ff:fef3:482a/64 assigned by NDisc. Jul 7 09:23:04.847304 systemd-networkd[1516]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jul 7 09:23:05.082763 sshd_keygen[1591]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 09:23:05.112469 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 09:23:05.135725 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 09:23:05.139558 systemd[1]: Started sshd@0-10.243.72.42:22-139.178.89.65:43382.service - OpenSSH per-connection server daemon (139.178.89.65:43382). Jul 7 09:23:05.213988 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 09:23:05.217044 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 09:23:05.224090 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 09:23:05.259776 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 09:23:05.264687 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 09:23:05.275407 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 7 09:23:05.276588 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 09:23:05.493323 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 09:23:05.507700 (kubelet)[1723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 09:23:05.656182 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 7 09:23:05.674719 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 7 09:23:06.117124 sshd[1709]: Accepted publickey for core from 139.178.89.65 port 43382 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:23:06.119709 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:23:06.135148 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 09:23:06.137999 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 09:23:06.168300 systemd-logind[1566]: New session 1 of user core. Jul 7 09:23:06.206743 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 09:23:06.218614 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 09:23:06.247195 (systemd)[1733]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 09:23:06.253815 systemd-logind[1566]: New session c1 of user core. Jul 7 09:23:06.313250 kubelet[1723]: E0707 09:23:06.313122 1723 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 09:23:06.317705 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 09:23:06.318329 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
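kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written by kubeadm during init or join, so the restart loop seen later in this log is expected until the node joins a cluster. Purely as an illustration of the file format (the values below are placeholders, not what kubeadm would generate):

    sudo mkdir -p /var/lib/kubelet
    sudo tee /var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # Matches the SystemdCgroup=true runc option in the containerd config above.
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    EOF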
Jul 7 09:23:06.319263 systemd[1]: kubelet.service: Consumed 1.519s CPU time, 266M memory peak. Jul 7 09:23:06.460008 systemd[1733]: Queued start job for default target default.target. Jul 7 09:23:06.467982 systemd[1733]: Created slice app.slice - User Application Slice. Jul 7 09:23:06.468029 systemd[1733]: Reached target paths.target - Paths. Jul 7 09:23:06.468185 systemd[1733]: Reached target timers.target - Timers. Jul 7 09:23:06.470729 systemd[1733]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 09:23:06.486804 systemd[1733]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 09:23:06.487006 systemd[1733]: Reached target sockets.target - Sockets. Jul 7 09:23:06.487115 systemd[1733]: Reached target basic.target - Basic System. Jul 7 09:23:06.487204 systemd[1733]: Reached target default.target - Main User Target. Jul 7 09:23:06.487273 systemd[1733]: Startup finished in 218ms. Jul 7 09:23:06.487285 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 09:23:06.502420 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 09:23:07.133681 systemd[1]: Started sshd@1-10.243.72.42:22-139.178.89.65:48534.service - OpenSSH per-connection server daemon (139.178.89.65:48534). Jul 7 09:23:07.693136 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 7 09:23:07.724137 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 7 09:23:08.053384 sshd[1745]: Accepted publickey for core from 139.178.89.65 port 48534 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:23:08.054931 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:23:08.062757 systemd-logind[1566]: New session 2 of user core. Jul 7 09:23:08.075538 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 09:23:08.671146 sshd[1749]: Connection closed by 139.178.89.65 port 48534 Jul 7 09:23:08.671605 sshd-session[1745]: pam_unix(sshd:session): session closed for user core Jul 7 09:23:08.676941 systemd[1]: sshd@1-10.243.72.42:22-139.178.89.65:48534.service: Deactivated successfully. Jul 7 09:23:08.679205 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 09:23:08.680428 systemd-logind[1566]: Session 2 logged out. Waiting for processes to exit. Jul 7 09:23:08.682289 systemd-logind[1566]: Removed session 2. Jul 7 09:23:08.832479 systemd[1]: Started sshd@2-10.243.72.42:22-139.178.89.65:48548.service - OpenSSH per-connection server daemon (139.178.89.65:48548). Jul 7 09:23:09.761761 sshd[1755]: Accepted publickey for core from 139.178.89.65 port 48548 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:23:09.762991 sshd-session[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:23:09.770872 systemd-logind[1566]: New session 3 of user core. Jul 7 09:23:09.779456 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 09:23:10.368811 login[1717]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 7 09:23:10.370051 login[1716]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 7 09:23:10.378404 systemd-logind[1566]: New session 4 of user core. Jul 7 09:23:10.385196 sshd[1757]: Connection closed by 139.178.89.65 port 48548 Jul 7 09:23:10.386197 sshd-session[1755]: pam_unix(sshd:session): session closed for user core Jul 7 09:23:10.386485 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jul 7 09:23:10.393784 systemd-logind[1566]: New session 5 of user core. Jul 7 09:23:10.398466 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 09:23:10.401790 systemd[1]: sshd@2-10.243.72.42:22-139.178.89.65:48548.service: Deactivated successfully. Jul 7 09:23:10.404514 systemd[1]: session-3.scope: Deactivated successfully. Jul 7 09:23:10.406940 systemd-logind[1566]: Session 3 logged out. Waiting for processes to exit. Jul 7 09:23:10.411965 systemd-logind[1566]: Removed session 3. Jul 7 09:23:11.743138 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 7 09:23:11.752131 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jul 7 09:23:11.761470 coreos-metadata[1632]: Jul 07 09:23:11.761 WARN failed to locate config-drive, using the metadata service API instead Jul 7 09:23:11.769142 coreos-metadata[1554]: Jul 07 09:23:11.768 WARN failed to locate config-drive, using the metadata service API instead Jul 7 09:23:11.787984 coreos-metadata[1632]: Jul 07 09:23:11.787 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jul 7 09:23:11.791333 coreos-metadata[1554]: Jul 07 09:23:11.791 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jul 7 09:23:11.799029 coreos-metadata[1554]: Jul 07 09:23:11.798 INFO Fetch failed with 404: resource not found Jul 7 09:23:11.799029 coreos-metadata[1554]: Jul 07 09:23:11.798 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 7 09:23:11.799822 coreos-metadata[1554]: Jul 07 09:23:11.799 INFO Fetch successful Jul 7 09:23:11.799822 coreos-metadata[1554]: Jul 07 09:23:11.799 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jul 7 09:23:11.818915 coreos-metadata[1554]: Jul 07 09:23:11.818 INFO Fetch successful Jul 7 09:23:11.819086 coreos-metadata[1554]: Jul 07 09:23:11.819 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jul 7 09:23:11.835680 coreos-metadata[1632]: Jul 07 09:23:11.835 INFO Fetch successful Jul 7 09:23:11.836048 coreos-metadata[1632]: Jul 07 09:23:11.836 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 7 09:23:11.847754 coreos-metadata[1554]: Jul 07 09:23:11.847 INFO Fetch successful Jul 7 09:23:11.847927 coreos-metadata[1554]: Jul 07 09:23:11.847 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jul 7 09:23:11.866040 coreos-metadata[1554]: Jul 07 09:23:11.865 INFO Fetch successful Jul 7 09:23:11.866229 coreos-metadata[1554]: Jul 07 09:23:11.866 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jul 7 09:23:11.871157 coreos-metadata[1632]: Jul 07 09:23:11.871 INFO Fetch successful Jul 7 09:23:11.873301 unknown[1632]: wrote ssh authorized keys file for user: core Jul 7 09:23:11.884685 coreos-metadata[1554]: Jul 07 09:23:11.884 INFO Fetch successful Jul 7 09:23:11.898320 update-ssh-keys[1792]: Updated "/home/core/.ssh/authorized_keys" Jul 7 09:23:11.901349 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 7 09:23:11.905923 systemd[1]: Finished sshkeys.service. Jul 7 09:23:11.923358 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 7 09:23:11.924929 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 09:23:11.925594 systemd[1]: Reached target multi-user.target - Multi-User System. 
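Both metadata agents fall back to the network metadata service because no config-drive is attached, which matches the repeated "/dev/disk/by-label/config-2: Can't lookup blockdev" kernel messages. A sketch for confirming that from the host, assuming curl is available on the image:

    lsblk -o NAME,LABEL,FSTYPE                 # a config-drive would show LABEL=config-2
    blkid -L config-2 || echo "no config-2 volume attached"
    curl -s http://169.254.169.254/latest/meta-data/hostname   # the fallback path used above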
Jul 7 09:23:11.926243 systemd[1]: Startup finished in 3.724s (kernel) + 18.331s (initrd) + 14.271s (userspace) = 36.327s. Jul 7 09:23:16.569692 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 09:23:16.574304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 09:23:16.924401 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 09:23:16.938811 (kubelet)[1809]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 09:23:17.018398 kubelet[1809]: E0707 09:23:17.018252 1809 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 09:23:17.023090 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 09:23:17.023358 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 09:23:17.024264 systemd[1]: kubelet.service: Consumed 383ms CPU time, 111.1M memory peak. Jul 7 09:23:20.547475 systemd[1]: Started sshd@3-10.243.72.42:22-139.178.89.65:41306.service - OpenSSH per-connection server daemon (139.178.89.65:41306). Jul 7 09:23:21.466044 sshd[1817]: Accepted publickey for core from 139.178.89.65 port 41306 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:23:21.468347 sshd-session[1817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:23:21.477050 systemd-logind[1566]: New session 6 of user core. Jul 7 09:23:21.488460 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 09:23:22.088937 sshd[1819]: Connection closed by 139.178.89.65 port 41306 Jul 7 09:23:22.088087 sshd-session[1817]: pam_unix(sshd:session): session closed for user core Jul 7 09:23:22.093277 systemd[1]: sshd@3-10.243.72.42:22-139.178.89.65:41306.service: Deactivated successfully. Jul 7 09:23:22.095413 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 09:23:22.096655 systemd-logind[1566]: Session 6 logged out. Waiting for processes to exit. Jul 7 09:23:22.098614 systemd-logind[1566]: Removed session 6. Jul 7 09:23:22.257085 systemd[1]: Started sshd@4-10.243.72.42:22-139.178.89.65:41318.service - OpenSSH per-connection server daemon (139.178.89.65:41318). Jul 7 09:23:23.170657 sshd[1825]: Accepted publickey for core from 139.178.89.65 port 41318 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:23:23.172542 sshd-session[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:23:23.180796 systemd-logind[1566]: New session 7 of user core. Jul 7 09:23:23.187314 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 09:23:23.790247 sshd[1827]: Connection closed by 139.178.89.65 port 41318 Jul 7 09:23:23.790023 sshd-session[1825]: pam_unix(sshd:session): session closed for user core Jul 7 09:23:23.796284 systemd[1]: sshd@4-10.243.72.42:22-139.178.89.65:41318.service: Deactivated successfully. Jul 7 09:23:23.798706 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 09:23:23.799933 systemd-logind[1566]: Session 7 logged out. Waiting for processes to exit. Jul 7 09:23:23.802197 systemd-logind[1566]: Removed session 7. 
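The "Startup finished in 3.724s (kernel) + 18.331s (initrd) + 14.271s (userspace)" line above can be broken down further once the system is up, for example to see how much of the userspace time the failing kubelet unit or the metadata fetch contributed. A sketch assuming stock systemd:

    systemd-analyze                                     # same kernel/initrd/userspace split
    systemd-analyze blame | head                        # slowest units first
    systemd-analyze critical-chain multi-user.target    # what gated the default target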
Jul 7 09:23:23.951472 systemd[1]: Started sshd@5-10.243.72.42:22-139.178.89.65:41320.service - OpenSSH per-connection server daemon (139.178.89.65:41320). Jul 7 09:23:24.886637 sshd[1833]: Accepted publickey for core from 139.178.89.65 port 41320 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:23:24.888960 sshd-session[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:23:24.896441 systemd-logind[1566]: New session 8 of user core. Jul 7 09:23:24.907444 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 09:23:25.513826 sshd[1835]: Connection closed by 139.178.89.65 port 41320 Jul 7 09:23:25.514938 sshd-session[1833]: pam_unix(sshd:session): session closed for user core Jul 7 09:23:25.520594 systemd[1]: sshd@5-10.243.72.42:22-139.178.89.65:41320.service: Deactivated successfully. Jul 7 09:23:25.523237 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 09:23:25.524413 systemd-logind[1566]: Session 8 logged out. Waiting for processes to exit. Jul 7 09:23:25.526645 systemd-logind[1566]: Removed session 8. Jul 7 09:23:25.671257 systemd[1]: Started sshd@6-10.243.72.42:22-139.178.89.65:41332.service - OpenSSH per-connection server daemon (139.178.89.65:41332). Jul 7 09:23:26.587263 sshd[1841]: Accepted publickey for core from 139.178.89.65 port 41332 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:23:26.589115 sshd-session[1841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:23:26.596189 systemd-logind[1566]: New session 9 of user core. Jul 7 09:23:26.607380 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 09:23:27.077433 sudo[1844]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 09:23:27.077917 sudo[1844]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 09:23:27.079496 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 7 09:23:27.083294 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 09:23:27.103641 sudo[1844]: pam_unix(sudo:session): session closed for user root Jul 7 09:23:27.248140 sshd[1843]: Connection closed by 139.178.89.65 port 41332 Jul 7 09:23:27.248334 sshd-session[1841]: pam_unix(sshd:session): session closed for user core Jul 7 09:23:27.255945 systemd[1]: sshd@6-10.243.72.42:22-139.178.89.65:41332.service: Deactivated successfully. Jul 7 09:23:27.256501 systemd-logind[1566]: Session 9 logged out. Waiting for processes to exit. Jul 7 09:23:27.259909 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 09:23:27.264436 systemd-logind[1566]: Removed session 9. Jul 7 09:23:27.402721 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 09:23:27.408128 systemd[1]: Started sshd@7-10.243.72.42:22-139.178.89.65:41334.service - OpenSSH per-connection server daemon (139.178.89.65:41334). 
Jul 7 09:23:27.412613 (kubelet)[1857]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 09:23:27.490984 kubelet[1857]: E0707 09:23:27.490510 1857 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 09:23:27.494267 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 09:23:27.494543 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 09:23:27.495447 systemd[1]: kubelet.service: Consumed 364ms CPU time, 110.5M memory peak. Jul 7 09:23:28.311259 sshd[1859]: Accepted publickey for core from 139.178.89.65 port 41334 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:23:28.313307 sshd-session[1859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:23:28.321520 systemd-logind[1566]: New session 10 of user core. Jul 7 09:23:28.327321 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 09:23:28.790581 sudo[1869]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 09:23:28.791804 sudo[1869]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 09:23:28.798586 sudo[1869]: pam_unix(sudo:session): session closed for user root Jul 7 09:23:28.806221 sudo[1868]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 7 09:23:28.806622 sudo[1868]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 09:23:28.820718 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 09:23:28.881401 augenrules[1891]: No rules Jul 7 09:23:28.883013 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 09:23:28.883595 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 09:23:28.885366 sudo[1868]: pam_unix(sudo:session): session closed for user root Jul 7 09:23:29.029048 sshd[1867]: Connection closed by 139.178.89.65 port 41334 Jul 7 09:23:29.029940 sshd-session[1859]: pam_unix(sshd:session): session closed for user core Jul 7 09:23:29.034602 systemd[1]: sshd@7-10.243.72.42:22-139.178.89.65:41334.service: Deactivated successfully. Jul 7 09:23:29.036945 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 09:23:29.038978 systemd-logind[1566]: Session 10 logged out. Waiting for processes to exit. Jul 7 09:23:29.041619 systemd-logind[1566]: Removed session 10. Jul 7 09:23:29.189870 systemd[1]: Started sshd@8-10.243.72.42:22-139.178.89.65:41346.service - OpenSSH per-connection server daemon (139.178.89.65:41346). Jul 7 09:23:30.117423 sshd[1900]: Accepted publickey for core from 139.178.89.65 port 41346 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:23:30.119295 sshd-session[1900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:23:30.127162 systemd-logind[1566]: New session 11 of user core. Jul 7 09:23:30.134323 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jul 7 09:23:30.597260 sudo[1903]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 09:23:30.597729 sudo[1903]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 09:23:31.247620 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 09:23:31.263157 (dockerd)[1920]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 09:23:31.867174 dockerd[1920]: time="2025-07-07T09:23:31.866060687Z" level=info msg="Starting up" Jul 7 09:23:31.871958 dockerd[1920]: time="2025-07-07T09:23:31.871894146Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 7 09:23:31.935226 systemd[1]: var-lib-docker-metacopy\x2dcheck1847666066-merged.mount: Deactivated successfully. Jul 7 09:23:31.964052 dockerd[1920]: time="2025-07-07T09:23:31.963583361Z" level=info msg="Loading containers: start." Jul 7 09:23:31.996123 kernel: Initializing XFRM netlink socket Jul 7 09:23:32.356051 systemd-networkd[1516]: docker0: Link UP Jul 7 09:23:32.362405 dockerd[1920]: time="2025-07-07T09:23:32.362172600Z" level=info msg="Loading containers: done." Jul 7 09:23:32.392256 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck79201368-merged.mount: Deactivated successfully. Jul 7 09:23:32.396908 dockerd[1920]: time="2025-07-07T09:23:32.396801880Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 09:23:32.397043 dockerd[1920]: time="2025-07-07T09:23:32.396990918Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 7 09:23:32.397475 dockerd[1920]: time="2025-07-07T09:23:32.397404469Z" level=info msg="Initializing buildkit" Jul 7 09:23:32.426668 dockerd[1920]: time="2025-07-07T09:23:32.426611517Z" level=info msg="Completed buildkit initialization" Jul 7 09:23:32.437251 dockerd[1920]: time="2025-07-07T09:23:32.437163003Z" level=info msg="Daemon has completed initialization" Jul 7 09:23:32.437982 dockerd[1920]: time="2025-07-07T09:23:32.437442277Z" level=info msg="API listen on /run/docker.sock" Jul 7 09:23:32.437576 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 09:23:33.315371 containerd[1593]: time="2025-07-07T09:23:33.314981855Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" Jul 7 09:23:34.426347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1852487097.mount: Deactivated successfully. Jul 7 09:23:34.888215 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jul 7 09:23:36.831707 containerd[1593]: time="2025-07-07T09:23:36.831577278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 09:23:36.833871 containerd[1593]: time="2025-07-07T09:23:36.833813338Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960995" Jul 7 09:23:36.835410 containerd[1593]: time="2025-07-07T09:23:36.835329955Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 09:23:36.842267 containerd[1593]: time="2025-07-07T09:23:36.839552570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 09:23:36.842267 containerd[1593]: time="2025-07-07T09:23:36.841052856Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 3.524812449s" Jul 7 09:23:36.842267 containerd[1593]: time="2025-07-07T09:23:36.841200554Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" Jul 7 09:23:36.842944 containerd[1593]: time="2025-07-07T09:23:36.842889991Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" Jul 7 09:23:37.592283 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 7 09:23:37.596798 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 09:23:37.864744 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 09:23:37.879593 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 09:23:37.945713 kubelet[2192]: E0707 09:23:37.945586 2192 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 09:23:37.948685 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 09:23:37.948935 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 09:23:37.949684 systemd[1]: kubelet.service: Consumed 249ms CPU time, 108.5M memory peak. 
Jul 7 09:23:39.588947 containerd[1593]: time="2025-07-07T09:23:39.588812139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 09:23:39.591741 containerd[1593]: time="2025-07-07T09:23:39.591686052Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713784" Jul 7 09:23:39.593232 containerd[1593]: time="2025-07-07T09:23:39.592091940Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 09:23:39.595466 containerd[1593]: time="2025-07-07T09:23:39.595422730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 09:23:39.597528 containerd[1593]: time="2025-07-07T09:23:39.596903006Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 2.753959979s" Jul 7 09:23:39.597528 containerd[1593]: time="2025-07-07T09:23:39.596987445Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" Jul 7 09:23:39.598388 containerd[1593]: time="2025-07-07T09:23:39.598344772Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" Jul 7 09:23:41.729134 containerd[1593]: time="2025-07-07T09:23:41.728871619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 09:23:41.731526 containerd[1593]: time="2025-07-07T09:23:41.731488291Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780394" Jul 7 09:23:41.731937 containerd[1593]: time="2025-07-07T09:23:41.731879790Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 09:23:41.744670 containerd[1593]: time="2025-07-07T09:23:41.743897895Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 2.145510952s" Jul 7 09:23:41.744670 containerd[1593]: time="2025-07-07T09:23:41.743971832Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" Jul 7 09:23:41.744670 containerd[1593]: time="2025-07-07T09:23:41.744161549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 09:23:41.747709 containerd[1593]: 
time="2025-07-07T09:23:41.745734473Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" Jul 7 09:23:44.107546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2350280959.mount: Deactivated successfully. Jul 7 09:23:44.998848 containerd[1593]: time="2025-07-07T09:23:44.998765857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 09:23:45.000397 containerd[1593]: time="2025-07-07T09:23:45.000336062Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354633" Jul 7 09:23:45.001825 containerd[1593]: time="2025-07-07T09:23:45.001746260Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 09:23:45.004453 containerd[1593]: time="2025-07-07T09:23:45.004341953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 09:23:45.005565 containerd[1593]: time="2025-07-07T09:23:45.005044952Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 3.259258543s" Jul 7 09:23:45.005565 containerd[1593]: time="2025-07-07T09:23:45.005093398Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" Jul 7 09:23:45.006032 containerd[1593]: time="2025-07-07T09:23:45.005860636Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 09:23:46.067863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3968454686.mount: Deactivated successfully. Jul 7 09:23:47.602006 update_engine[1568]: I20250707 09:23:47.601770 1568 update_attempter.cc:509] Updating boot flags... Jul 7 09:23:47.958059 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 7 09:23:47.969915 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 7 09:23:48.094630 containerd[1593]: time="2025-07-07T09:23:48.094503168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 09:23:48.096196 containerd[1593]: time="2025-07-07T09:23:48.096136473Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jul 7 09:23:48.106621 containerd[1593]: time="2025-07-07T09:23:48.100549731Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 09:23:48.113122 containerd[1593]: time="2025-07-07T09:23:48.112504453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 09:23:48.115128 containerd[1593]: time="2025-07-07T09:23:48.115082838Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.109167443s" Jul 7 09:23:48.115329 containerd[1593]: time="2025-07-07T09:23:48.115297099Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 7 09:23:48.117198 containerd[1593]: time="2025-07-07T09:23:48.116904818Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 09:23:48.340425 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 09:23:48.348867 (kubelet)[2291]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 09:23:48.442534 kubelet[2291]: E0707 09:23:48.442422 2291 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 09:23:48.445916 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 09:23:48.446248 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 09:23:48.447107 systemd[1]: kubelet.service: Consumed 359ms CPU time, 110.4M memory peak. Jul 7 09:23:49.124992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount565397367.mount: Deactivated successfully. 
Jul 7 09:23:49.142502 containerd[1593]: time="2025-07-07T09:23:49.142369443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 09:23:49.143955 containerd[1593]: time="2025-07-07T09:23:49.143903046Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jul 7 09:23:49.145394 containerd[1593]: time="2025-07-07T09:23:49.145327410Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 09:23:49.149338 containerd[1593]: time="2025-07-07T09:23:49.149231863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 09:23:49.150735 containerd[1593]: time="2025-07-07T09:23:49.150128557Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.032642325s" Jul 7 09:23:49.150735 containerd[1593]: time="2025-07-07T09:23:49.150174171Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 7 09:23:49.151620 containerd[1593]: time="2025-07-07T09:23:49.151551773Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 7 09:23:50.641632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1485779798.mount: Deactivated successfully. 
Jul 7 09:23:55.193746 containerd[1593]: time="2025-07-07T09:23:55.193680852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 09:23:55.195170 containerd[1593]: time="2025-07-07T09:23:55.195120244Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021" Jul 7 09:23:55.196074 containerd[1593]: time="2025-07-07T09:23:55.195981910Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 09:23:55.200135 containerd[1593]: time="2025-07-07T09:23:55.199703693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 09:23:55.201266 containerd[1593]: time="2025-07-07T09:23:55.201061982Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 6.049453994s" Jul 7 09:23:55.201266 containerd[1593]: time="2025-07-07T09:23:55.201120620Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 7 09:23:58.591972 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 7 09:23:58.596300 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 09:23:58.766156 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 09:23:58.766335 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 09:23:58.766894 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 09:23:58.775323 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 09:23:58.808642 systemd[1]: Reload requested from client PID 2387 ('systemctl') (unit session-11.scope)... Jul 7 09:23:58.808701 systemd[1]: Reloading... Jul 7 09:23:58.996756 zram_generator::config[2428]: No configuration found. Jul 7 09:23:59.149763 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 09:23:59.325423 systemd[1]: Reloading finished in 516 ms. Jul 7 09:23:59.393778 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 09:23:59.393927 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 09:23:59.394516 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 09:23:59.394603 systemd[1]: kubelet.service: Consumed 134ms CPU time, 97.2M memory peak. Jul 7 09:23:59.397357 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 09:23:59.554695 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 09:23:59.571968 (kubelet)[2499]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 09:23:59.657355 kubelet[2499]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 09:23:59.657355 kubelet[2499]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 09:23:59.657355 kubelet[2499]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 09:23:59.658071 kubelet[2499]: I0707 09:23:59.657466 2499 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 09:24:00.040890 kubelet[2499]: I0707 09:24:00.040813 2499 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 09:24:00.040890 kubelet[2499]: I0707 09:24:00.040866 2499 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 09:24:00.041348 kubelet[2499]: I0707 09:24:00.041311 2499 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 09:24:00.097692 kubelet[2499]: I0707 09:24:00.097482 2499 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 09:24:00.098698 kubelet[2499]: E0707 09:24:00.097988 2499 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.243.72.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.243.72.42:6443: connect: connection refused" logger="UnhandledError" Jul 7 09:24:00.110850 kubelet[2499]: I0707 09:24:00.110788 2499 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 09:24:00.120119 kubelet[2499]: I0707 09:24:00.119770 2499 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 09:24:00.124048 kubelet[2499]: I0707 09:24:00.124022 2499 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 09:24:00.124549 kubelet[2499]: I0707 09:24:00.124482 2499 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 09:24:00.124950 kubelet[2499]: I0707 09:24:00.124633 2499 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-et027.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 09:24:00.125783 kubelet[2499]: I0707 09:24:00.125496 2499 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 09:24:00.125783 kubelet[2499]: I0707 09:24:00.125522 2499 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 09:24:00.126364 kubelet[2499]: I0707 09:24:00.126341 2499 state_mem.go:36] "Initialized new in-memory state store" Jul 7 09:24:00.129639 kubelet[2499]: I0707 09:24:00.129612 2499 kubelet.go:408] "Attempting to sync node with API server" Jul 7 09:24:00.129794 kubelet[2499]: I0707 09:24:00.129773 2499 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 09:24:00.131168 kubelet[2499]: I0707 09:24:00.131146 2499 kubelet.go:314] "Adding apiserver pod source" Jul 7 09:24:00.131330 kubelet[2499]: I0707 09:24:00.131310 2499 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 09:24:00.134576 kubelet[2499]: W0707 09:24:00.134456 2499 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.243.72.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-et027.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.72.42:6443: connect: connection refused Jul 7 09:24:00.135030 kubelet[2499]: E0707 09:24:00.134611 2499 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.243.72.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-et027.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.243.72.42:6443: connect: connection refused" logger="UnhandledError" Jul 7 09:24:00.138887 kubelet[2499]: W0707 09:24:00.138815 2499 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.243.72.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.243.72.42:6443: connect: connection refused Jul 7 09:24:00.140743 kubelet[2499]: E0707 09:24:00.138892 2499 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.243.72.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.243.72.42:6443: connect: connection refused" logger="UnhandledError" Jul 7 09:24:00.140743 kubelet[2499]: I0707 09:24:00.139067 2499 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 09:24:00.142583 kubelet[2499]: I0707 09:24:00.142546 2499 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 09:24:00.143853 kubelet[2499]: W0707 09:24:00.143488 2499 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 09:24:00.145405 kubelet[2499]: I0707 09:24:00.145377 2499 server.go:1274] "Started kubelet" Jul 7 09:24:00.148524 kubelet[2499]: I0707 09:24:00.148500 2499 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 09:24:00.155699 kubelet[2499]: E0707 09:24:00.151250 2499 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.243.72.42:6443/api/v1/namespaces/default/events\": dial tcp 10.243.72.42:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-et027.gb1.brightbox.com.184fedcbeae35203 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-et027.gb1.brightbox.com,UID:srv-et027.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-et027.gb1.brightbox.com,},FirstTimestamp:2025-07-07 09:24:00.145322499 +0000 UTC m=+0.566054093,LastTimestamp:2025-07-07 09:24:00.145322499 +0000 UTC m=+0.566054093,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-et027.gb1.brightbox.com,}" Jul 7 09:24:00.166226 kubelet[2499]: I0707 09:24:00.165327 2499 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 09:24:00.166226 kubelet[2499]: I0707 09:24:00.165455 2499 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 09:24:00.166226 kubelet[2499]: E0707 09:24:00.165770 2499 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-et027.gb1.brightbox.com\" not found" Jul 7 09:24:00.166481 kubelet[2499]: I0707 09:24:00.166363 2499 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 09:24:00.166548 kubelet[2499]: I0707 09:24:00.166483 2499 reconciler.go:26] "Reconciler: start to sync state" Jul 7 09:24:00.172273 kubelet[2499]: I0707 09:24:00.171549 2499 factory.go:221] Registration of the systemd container factory successfully Jul 7 09:24:00.172273 kubelet[2499]: I0707 
09:24:00.171759 2499 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 09:24:00.172514 kubelet[2499]: W0707 09:24:00.172439 2499 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.243.72.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.243.72.42:6443: connect: connection refused Jul 7 09:24:00.172583 kubelet[2499]: E0707 09:24:00.172552 2499 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.243.72.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.243.72.42:6443: connect: connection refused" logger="UnhandledError" Jul 7 09:24:00.172726 kubelet[2499]: E0707 09:24:00.172669 2499 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.72.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-et027.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.72.42:6443: connect: connection refused" interval="200ms" Jul 7 09:24:00.175880 kubelet[2499]: I0707 09:24:00.175767 2499 server.go:449] "Adding debug handlers to kubelet server" Jul 7 09:24:00.177713 kubelet[2499]: I0707 09:24:00.177616 2499 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 09:24:00.178077 kubelet[2499]: I0707 09:24:00.178048 2499 factory.go:221] Registration of the containerd container factory successfully Jul 7 09:24:00.179054 kubelet[2499]: I0707 09:24:00.179003 2499 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 09:24:00.194753 kubelet[2499]: E0707 09:24:00.194700 2499 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 09:24:00.197019 kubelet[2499]: I0707 09:24:00.196612 2499 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 09:24:00.228154 kubelet[2499]: I0707 09:24:00.228022 2499 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 09:24:00.240316 kubelet[2499]: I0707 09:24:00.240242 2499 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 7 09:24:00.243015 kubelet[2499]: I0707 09:24:00.242292 2499 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 09:24:00.243015 kubelet[2499]: I0707 09:24:00.242524 2499 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 09:24:00.243015 kubelet[2499]: I0707 09:24:00.242584 2499 state_mem.go:36] "Initialized new in-memory state store" Jul 7 09:24:00.243015 kubelet[2499]: I0707 09:24:00.242595 2499 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 09:24:00.243015 kubelet[2499]: I0707 09:24:00.242676 2499 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 09:24:00.243015 kubelet[2499]: E0707 09:24:00.242822 2499 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 09:24:00.249437 kubelet[2499]: W0707 09:24:00.249377 2499 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.243.72.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.243.72.42:6443: connect: connection refused Jul 7 09:24:00.249847 kubelet[2499]: E0707 09:24:00.249813 2499 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.243.72.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.243.72.42:6443: connect: connection refused" logger="UnhandledError" Jul 7 09:24:00.266143 kubelet[2499]: E0707 09:24:00.266031 2499 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-et027.gb1.brightbox.com\" not found" Jul 7 09:24:00.286017 kubelet[2499]: I0707 09:24:00.285921 2499 policy_none.go:49] "None policy: Start" Jul 7 09:24:00.287653 kubelet[2499]: I0707 09:24:00.287082 2499 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 09:24:00.287653 kubelet[2499]: I0707 09:24:00.287143 2499 state_mem.go:35] "Initializing new in-memory state store" Jul 7 09:24:00.299953 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 09:24:00.324998 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 09:24:00.331323 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 7 09:24:00.343224 kubelet[2499]: E0707 09:24:00.343166 2499 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 09:24:00.347742 kubelet[2499]: I0707 09:24:00.347708 2499 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 09:24:00.348067 kubelet[2499]: I0707 09:24:00.348046 2499 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 09:24:00.348192 kubelet[2499]: I0707 09:24:00.348085 2499 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 09:24:00.349456 kubelet[2499]: I0707 09:24:00.348740 2499 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 09:24:00.353231 kubelet[2499]: E0707 09:24:00.353201 2499 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-et027.gb1.brightbox.com\" not found" Jul 7 09:24:00.374418 kubelet[2499]: E0707 09:24:00.374331 2499 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.72.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-et027.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.72.42:6443: connect: connection refused" interval="400ms" Jul 7 09:24:00.451131 kubelet[2499]: I0707 09:24:00.451051 2499 kubelet_node_status.go:72] "Attempting to register node" node="srv-et027.gb1.brightbox.com" Jul 7 09:24:00.452045 kubelet[2499]: E0707 09:24:00.452005 2499 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.243.72.42:6443/api/v1/nodes\": dial tcp 10.243.72.42:6443: connect: connection refused" node="srv-et027.gb1.brightbox.com" Jul 7 09:24:00.560423 systemd[1]: Created slice kubepods-burstable-podcbaee94cc45e5d85aa3db2b7f3dadaf8.slice - libcontainer container kubepods-burstable-podcbaee94cc45e5d85aa3db2b7f3dadaf8.slice. 
Jul 7 09:24:00.568855 kubelet[2499]: I0707 09:24:00.568799 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/54943cd7f88808cf7bfa107f3fdfafe7-ca-certs\") pod \"kube-controller-manager-srv-et027.gb1.brightbox.com\" (UID: \"54943cd7f88808cf7bfa107f3fdfafe7\") " pod="kube-system/kube-controller-manager-srv-et027.gb1.brightbox.com" Jul 7 09:24:00.568855 kubelet[2499]: I0707 09:24:00.568850 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/54943cd7f88808cf7bfa107f3fdfafe7-flexvolume-dir\") pod \"kube-controller-manager-srv-et027.gb1.brightbox.com\" (UID: \"54943cd7f88808cf7bfa107f3fdfafe7\") " pod="kube-system/kube-controller-manager-srv-et027.gb1.brightbox.com" Jul 7 09:24:00.569051 kubelet[2499]: I0707 09:24:00.568893 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/54943cd7f88808cf7bfa107f3fdfafe7-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-et027.gb1.brightbox.com\" (UID: \"54943cd7f88808cf7bfa107f3fdfafe7\") " pod="kube-system/kube-controller-manager-srv-et027.gb1.brightbox.com" Jul 7 09:24:00.569051 kubelet[2499]: I0707 09:24:00.568923 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e3af9306e706f2eff209f5f7517a23cd-kubeconfig\") pod \"kube-scheduler-srv-et027.gb1.brightbox.com\" (UID: \"e3af9306e706f2eff209f5f7517a23cd\") " pod="kube-system/kube-scheduler-srv-et027.gb1.brightbox.com" Jul 7 09:24:00.569051 kubelet[2499]: I0707 09:24:00.568965 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbaee94cc45e5d85aa3db2b7f3dadaf8-ca-certs\") pod \"kube-apiserver-srv-et027.gb1.brightbox.com\" (UID: \"cbaee94cc45e5d85aa3db2b7f3dadaf8\") " pod="kube-system/kube-apiserver-srv-et027.gb1.brightbox.com" Jul 7 09:24:00.569051 kubelet[2499]: I0707 09:24:00.568993 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbaee94cc45e5d85aa3db2b7f3dadaf8-usr-share-ca-certificates\") pod \"kube-apiserver-srv-et027.gb1.brightbox.com\" (UID: \"cbaee94cc45e5d85aa3db2b7f3dadaf8\") " pod="kube-system/kube-apiserver-srv-et027.gb1.brightbox.com" Jul 7 09:24:00.569051 kubelet[2499]: I0707 09:24:00.569024 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/54943cd7f88808cf7bfa107f3fdfafe7-kubeconfig\") pod \"kube-controller-manager-srv-et027.gb1.brightbox.com\" (UID: \"54943cd7f88808cf7bfa107f3fdfafe7\") " pod="kube-system/kube-controller-manager-srv-et027.gb1.brightbox.com" Jul 7 09:24:00.569365 kubelet[2499]: I0707 09:24:00.569050 2499 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbaee94cc45e5d85aa3db2b7f3dadaf8-k8s-certs\") pod \"kube-apiserver-srv-et027.gb1.brightbox.com\" (UID: \"cbaee94cc45e5d85aa3db2b7f3dadaf8\") " pod="kube-system/kube-apiserver-srv-et027.gb1.brightbox.com" Jul 7 09:24:00.569365 kubelet[2499]: I0707 09:24:00.569079 2499 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/54943cd7f88808cf7bfa107f3fdfafe7-k8s-certs\") pod \"kube-controller-manager-srv-et027.gb1.brightbox.com\" (UID: \"54943cd7f88808cf7bfa107f3fdfafe7\") " pod="kube-system/kube-controller-manager-srv-et027.gb1.brightbox.com" Jul 7 09:24:00.575329 systemd[1]: Created slice kubepods-burstable-pod54943cd7f88808cf7bfa107f3fdfafe7.slice - libcontainer container kubepods-burstable-pod54943cd7f88808cf7bfa107f3fdfafe7.slice. Jul 7 09:24:00.590740 systemd[1]: Created slice kubepods-burstable-pode3af9306e706f2eff209f5f7517a23cd.slice - libcontainer container kubepods-burstable-pode3af9306e706f2eff209f5f7517a23cd.slice. Jul 7 09:24:00.654806 kubelet[2499]: I0707 09:24:00.654758 2499 kubelet_node_status.go:72] "Attempting to register node" node="srv-et027.gb1.brightbox.com" Jul 7 09:24:00.655270 kubelet[2499]: E0707 09:24:00.655233 2499 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.243.72.42:6443/api/v1/nodes\": dial tcp 10.243.72.42:6443: connect: connection refused" node="srv-et027.gb1.brightbox.com" Jul 7 09:24:00.775637 kubelet[2499]: E0707 09:24:00.775553 2499 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.72.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-et027.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.72.42:6443: connect: connection refused" interval="800ms" Jul 7 09:24:00.873900 containerd[1593]: time="2025-07-07T09:24:00.873324686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-et027.gb1.brightbox.com,Uid:cbaee94cc45e5d85aa3db2b7f3dadaf8,Namespace:kube-system,Attempt:0,}" Jul 7 09:24:00.897755 containerd[1593]: time="2025-07-07T09:24:00.897445839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-et027.gb1.brightbox.com,Uid:54943cd7f88808cf7bfa107f3fdfafe7,Namespace:kube-system,Attempt:0,}" Jul 7 09:24:00.897755 containerd[1593]: time="2025-07-07T09:24:00.897747558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-et027.gb1.brightbox.com,Uid:e3af9306e706f2eff209f5f7517a23cd,Namespace:kube-system,Attempt:0,}" Jul 7 09:24:00.980156 kubelet[2499]: W0707 09:24:00.976785 2499 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.243.72.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-et027.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.72.42:6443: connect: connection refused Jul 7 09:24:00.980156 kubelet[2499]: E0707 09:24:00.976897 2499 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.243.72.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-et027.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.243.72.42:6443: connect: connection refused" logger="UnhandledError" Jul 7 09:24:01.063125 kubelet[2499]: I0707 09:24:01.061492 2499 kubelet_node_status.go:72] "Attempting to register node" node="srv-et027.gb1.brightbox.com" Jul 7 09:24:01.063125 kubelet[2499]: E0707 09:24:01.062026 2499 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.243.72.42:6443/api/v1/nodes\": dial tcp 10.243.72.42:6443: connect: connection refused" node="srv-et027.gb1.brightbox.com" Jul 7 09:24:01.122454 containerd[1593]: 
time="2025-07-07T09:24:01.122026074Z" level=info msg="connecting to shim 3656b9ec467898be1770ac585ea042a9e31f09268dd5106ce45bdb01fdf8dbbe" address="unix:///run/containerd/s/cbae856e543fbffff6cf027c3db5d43671c56b57571465b82ac0ff96e757549e" namespace=k8s.io protocol=ttrpc version=3 Jul 7 09:24:01.123482 containerd[1593]: time="2025-07-07T09:24:01.123417925Z" level=info msg="connecting to shim 5cff8542311b0669b722ee8e853da2ad86973d1e6afcf3f23b29a92f7338ede9" address="unix:///run/containerd/s/761cd67ce2ce273824b60d75e48748599ea83348280b53a10e6b6b78f6e93a8a" namespace=k8s.io protocol=ttrpc version=3 Jul 7 09:24:01.128287 containerd[1593]: time="2025-07-07T09:24:01.127685589Z" level=info msg="connecting to shim 84bf147133485a87dd310f518471ea8f52e99739347cf6525807292907334835" address="unix:///run/containerd/s/ddbfa1aa5cbd679fae50657cd1c44549bec86fa7ab8139d3077b49b1295d1976" namespace=k8s.io protocol=ttrpc version=3 Jul 7 09:24:01.255328 systemd[1]: Started cri-containerd-3656b9ec467898be1770ac585ea042a9e31f09268dd5106ce45bdb01fdf8dbbe.scope - libcontainer container 3656b9ec467898be1770ac585ea042a9e31f09268dd5106ce45bdb01fdf8dbbe. Jul 7 09:24:01.271355 systemd[1]: Started cri-containerd-5cff8542311b0669b722ee8e853da2ad86973d1e6afcf3f23b29a92f7338ede9.scope - libcontainer container 5cff8542311b0669b722ee8e853da2ad86973d1e6afcf3f23b29a92f7338ede9. Jul 7 09:24:01.283516 systemd[1]: Started cri-containerd-84bf147133485a87dd310f518471ea8f52e99739347cf6525807292907334835.scope - libcontainer container 84bf147133485a87dd310f518471ea8f52e99739347cf6525807292907334835. Jul 7 09:24:01.388219 containerd[1593]: time="2025-07-07T09:24:01.387708700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-et027.gb1.brightbox.com,Uid:e3af9306e706f2eff209f5f7517a23cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"3656b9ec467898be1770ac585ea042a9e31f09268dd5106ce45bdb01fdf8dbbe\"" Jul 7 09:24:01.395678 containerd[1593]: time="2025-07-07T09:24:01.395637751Z" level=info msg="CreateContainer within sandbox \"3656b9ec467898be1770ac585ea042a9e31f09268dd5106ce45bdb01fdf8dbbe\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 09:24:01.415420 kubelet[2499]: W0707 09:24:01.415298 2499 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.243.72.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.243.72.42:6443: connect: connection refused Jul 7 09:24:01.416537 kubelet[2499]: E0707 09:24:01.415655 2499 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.243.72.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.243.72.42:6443: connect: connection refused" logger="UnhandledError" Jul 7 09:24:01.423092 containerd[1593]: time="2025-07-07T09:24:01.423032657Z" level=info msg="Container 1c990664c6eb5801907f627c0a22ef1e5b00a396c4a6b207514c86e044f73ef4: CDI devices from CRI Config.CDIDevices: []" Jul 7 09:24:01.423763 containerd[1593]: time="2025-07-07T09:24:01.423726019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-et027.gb1.brightbox.com,Uid:cbaee94cc45e5d85aa3db2b7f3dadaf8,Namespace:kube-system,Attempt:0,} returns sandbox id \"84bf147133485a87dd310f518471ea8f52e99739347cf6525807292907334835\"" Jul 7 09:24:01.426691 containerd[1593]: time="2025-07-07T09:24:01.426658916Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-et027.gb1.brightbox.com,Uid:54943cd7f88808cf7bfa107f3fdfafe7,Namespace:kube-system,Attempt:0,} returns sandbox id \"5cff8542311b0669b722ee8e853da2ad86973d1e6afcf3f23b29a92f7338ede9\"" Jul 7 09:24:01.430589 containerd[1593]: time="2025-07-07T09:24:01.430556930Z" level=info msg="CreateContainer within sandbox \"84bf147133485a87dd310f518471ea8f52e99739347cf6525807292907334835\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 09:24:01.440860 containerd[1593]: time="2025-07-07T09:24:01.440811150Z" level=info msg="CreateContainer within sandbox \"3656b9ec467898be1770ac585ea042a9e31f09268dd5106ce45bdb01fdf8dbbe\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1c990664c6eb5801907f627c0a22ef1e5b00a396c4a6b207514c86e044f73ef4\"" Jul 7 09:24:01.442620 containerd[1593]: time="2025-07-07T09:24:01.442588801Z" level=info msg="StartContainer for \"1c990664c6eb5801907f627c0a22ef1e5b00a396c4a6b207514c86e044f73ef4\"" Jul 7 09:24:01.445367 containerd[1593]: time="2025-07-07T09:24:01.445144219Z" level=info msg="connecting to shim 1c990664c6eb5801907f627c0a22ef1e5b00a396c4a6b207514c86e044f73ef4" address="unix:///run/containerd/s/cbae856e543fbffff6cf027c3db5d43671c56b57571465b82ac0ff96e757549e" protocol=ttrpc version=3 Jul 7 09:24:01.446037 containerd[1593]: time="2025-07-07T09:24:01.446007132Z" level=info msg="Container 08d6645efc04d1c46ff7970dcd988b761a4e7c1cfc3bab9c4a85fe3bc18cbd67: CDI devices from CRI Config.CDIDevices: []" Jul 7 09:24:01.448430 containerd[1593]: time="2025-07-07T09:24:01.448254481Z" level=info msg="CreateContainer within sandbox \"5cff8542311b0669b722ee8e853da2ad86973d1e6afcf3f23b29a92f7338ede9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 09:24:01.468156 containerd[1593]: time="2025-07-07T09:24:01.468086950Z" level=info msg="Container d8e371a6678bee457ca937a1e8f89a16f5d84b709dd0f550e391ed98d32ec047: CDI devices from CRI Config.CDIDevices: []" Jul 7 09:24:01.471569 containerd[1593]: time="2025-07-07T09:24:01.471493967Z" level=info msg="CreateContainer within sandbox \"84bf147133485a87dd310f518471ea8f52e99739347cf6525807292907334835\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"08d6645efc04d1c46ff7970dcd988b761a4e7c1cfc3bab9c4a85fe3bc18cbd67\"" Jul 7 09:24:01.472528 containerd[1593]: time="2025-07-07T09:24:01.472494948Z" level=info msg="StartContainer for \"08d6645efc04d1c46ff7970dcd988b761a4e7c1cfc3bab9c4a85fe3bc18cbd67\"" Jul 7 09:24:01.474155 containerd[1593]: time="2025-07-07T09:24:01.473959728Z" level=info msg="connecting to shim 08d6645efc04d1c46ff7970dcd988b761a4e7c1cfc3bab9c4a85fe3bc18cbd67" address="unix:///run/containerd/s/ddbfa1aa5cbd679fae50657cd1c44549bec86fa7ab8139d3077b49b1295d1976" protocol=ttrpc version=3 Jul 7 09:24:01.475388 systemd[1]: Started cri-containerd-1c990664c6eb5801907f627c0a22ef1e5b00a396c4a6b207514c86e044f73ef4.scope - libcontainer container 1c990664c6eb5801907f627c0a22ef1e5b00a396c4a6b207514c86e044f73ef4. 
Jul 7 09:24:01.486204 containerd[1593]: time="2025-07-07T09:24:01.485964082Z" level=info msg="CreateContainer within sandbox \"5cff8542311b0669b722ee8e853da2ad86973d1e6afcf3f23b29a92f7338ede9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d8e371a6678bee457ca937a1e8f89a16f5d84b709dd0f550e391ed98d32ec047\"" Jul 7 09:24:01.487491 containerd[1593]: time="2025-07-07T09:24:01.487092637Z" level=info msg="StartContainer for \"d8e371a6678bee457ca937a1e8f89a16f5d84b709dd0f550e391ed98d32ec047\"" Jul 7 09:24:01.490119 containerd[1593]: time="2025-07-07T09:24:01.490073242Z" level=info msg="connecting to shim d8e371a6678bee457ca937a1e8f89a16f5d84b709dd0f550e391ed98d32ec047" address="unix:///run/containerd/s/761cd67ce2ce273824b60d75e48748599ea83348280b53a10e6b6b78f6e93a8a" protocol=ttrpc version=3 Jul 7 09:24:01.516423 systemd[1]: Started cri-containerd-08d6645efc04d1c46ff7970dcd988b761a4e7c1cfc3bab9c4a85fe3bc18cbd67.scope - libcontainer container 08d6645efc04d1c46ff7970dcd988b761a4e7c1cfc3bab9c4a85fe3bc18cbd67. Jul 7 09:24:01.548385 systemd[1]: Started cri-containerd-d8e371a6678bee457ca937a1e8f89a16f5d84b709dd0f550e391ed98d32ec047.scope - libcontainer container d8e371a6678bee457ca937a1e8f89a16f5d84b709dd0f550e391ed98d32ec047. Jul 7 09:24:01.578149 kubelet[2499]: E0707 09:24:01.577983 2499 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.72.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-et027.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.72.42:6443: connect: connection refused" interval="1.6s" Jul 7 09:24:01.606399 containerd[1593]: time="2025-07-07T09:24:01.606337540Z" level=info msg="StartContainer for \"1c990664c6eb5801907f627c0a22ef1e5b00a396c4a6b207514c86e044f73ef4\" returns successfully" Jul 7 09:24:01.643498 kubelet[2499]: W0707 09:24:01.643308 2499 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.243.72.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.243.72.42:6443: connect: connection refused Jul 7 09:24:01.643498 kubelet[2499]: E0707 09:24:01.643415 2499 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.243.72.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.243.72.42:6443: connect: connection refused" logger="UnhandledError" Jul 7 09:24:01.670159 containerd[1593]: time="2025-07-07T09:24:01.670083992Z" level=info msg="StartContainer for \"08d6645efc04d1c46ff7970dcd988b761a4e7c1cfc3bab9c4a85fe3bc18cbd67\" returns successfully" Jul 7 09:24:01.691638 containerd[1593]: time="2025-07-07T09:24:01.691520003Z" level=info msg="StartContainer for \"d8e371a6678bee457ca937a1e8f89a16f5d84b709dd0f550e391ed98d32ec047\" returns successfully" Jul 7 09:24:01.775875 kubelet[2499]: W0707 09:24:01.775789 2499 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.243.72.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.243.72.42:6443: connect: connection refused Jul 7 09:24:01.775875 kubelet[2499]: E0707 09:24:01.775879 2499 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.243.72.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 
10.243.72.42:6443: connect: connection refused" logger="UnhandledError" Jul 7 09:24:01.865538 kubelet[2499]: I0707 09:24:01.865490 2499 kubelet_node_status.go:72] "Attempting to register node" node="srv-et027.gb1.brightbox.com" Jul 7 09:24:01.867304 kubelet[2499]: E0707 09:24:01.867262 2499 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.243.72.42:6443/api/v1/nodes\": dial tcp 10.243.72.42:6443: connect: connection refused" node="srv-et027.gb1.brightbox.com" Jul 7 09:24:03.472401 kubelet[2499]: I0707 09:24:03.472330 2499 kubelet_node_status.go:72] "Attempting to register node" node="srv-et027.gb1.brightbox.com" Jul 7 09:24:04.137223 kubelet[2499]: I0707 09:24:04.137168 2499 apiserver.go:52] "Watching apiserver" Jul 7 09:24:04.225964 kubelet[2499]: E0707 09:24:04.225909 2499 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-et027.gb1.brightbox.com\" not found" node="srv-et027.gb1.brightbox.com" Jul 7 09:24:04.268138 kubelet[2499]: I0707 09:24:04.267292 2499 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 09:24:04.308129 kubelet[2499]: I0707 09:24:04.306730 2499 kubelet_node_status.go:75] "Successfully registered node" node="srv-et027.gb1.brightbox.com" Jul 7 09:24:04.844213 kubelet[2499]: E0707 09:24:04.843853 2499 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-et027.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-et027.gb1.brightbox.com" Jul 7 09:24:06.630981 systemd[1]: Reload requested from client PID 2770 ('systemctl') (unit session-11.scope)... Jul 7 09:24:06.631614 systemd[1]: Reloading... Jul 7 09:24:06.791212 zram_generator::config[2816]: No configuration found. Jul 7 09:24:06.976135 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 09:24:07.174032 systemd[1]: Reloading finished in 541 ms. Jul 7 09:24:07.210617 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 09:24:07.228134 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 09:24:07.228761 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 09:24:07.228897 systemd[1]: kubelet.service: Consumed 1.119s CPU time, 126.9M memory peak. Jul 7 09:24:07.233452 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 09:24:07.565314 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 09:24:07.579796 (kubelet)[2879]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 09:24:07.665384 kubelet[2879]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 09:24:07.665384 kubelet[2879]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 09:24:07.665384 kubelet[2879]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 09:24:07.666252 kubelet[2879]: I0707 09:24:07.665439 2879 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 09:24:07.675644 kubelet[2879]: I0707 09:24:07.675562 2879 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 09:24:07.675644 kubelet[2879]: I0707 09:24:07.675618 2879 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 09:24:07.675983 kubelet[2879]: I0707 09:24:07.675959 2879 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 09:24:07.678064 kubelet[2879]: I0707 09:24:07.678025 2879 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 7 09:24:07.685659 kubelet[2879]: I0707 09:24:07.685174 2879 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 09:24:07.694281 kubelet[2879]: I0707 09:24:07.693134 2879 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 09:24:07.702664 kubelet[2879]: I0707 09:24:07.702616 2879 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 7 09:24:07.703155 kubelet[2879]: I0707 09:24:07.703134 2879 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 09:24:07.703516 kubelet[2879]: I0707 09:24:07.703475 2879 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 09:24:07.703883 kubelet[2879]: I0707 09:24:07.703609 2879 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-et027.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 09:24:07.704148 kubelet[2879]: I0707 09:24:07.704127 2879 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 09:24:07.704280 
kubelet[2879]: I0707 09:24:07.704263 2879 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 09:24:07.704440 kubelet[2879]: I0707 09:24:07.704420 2879 state_mem.go:36] "Initialized new in-memory state store" Jul 7 09:24:07.707994 kubelet[2879]: I0707 09:24:07.707921 2879 kubelet.go:408] "Attempting to sync node with API server" Jul 7 09:24:07.709237 kubelet[2879]: I0707 09:24:07.708187 2879 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 09:24:07.709237 kubelet[2879]: I0707 09:24:07.708246 2879 kubelet.go:314] "Adding apiserver pod source" Jul 7 09:24:07.709237 kubelet[2879]: I0707 09:24:07.708265 2879 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 09:24:07.714072 kubelet[2879]: I0707 09:24:07.714017 2879 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 09:24:07.715715 kubelet[2879]: I0707 09:24:07.715690 2879 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 09:24:07.718747 kubelet[2879]: I0707 09:24:07.718716 2879 server.go:1274] "Started kubelet" Jul 7 09:24:07.721529 kubelet[2879]: I0707 09:24:07.721460 2879 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 09:24:07.722081 kubelet[2879]: I0707 09:24:07.722042 2879 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 09:24:07.722623 kubelet[2879]: I0707 09:24:07.722601 2879 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 09:24:07.726293 kubelet[2879]: I0707 09:24:07.726258 2879 server.go:449] "Adding debug handlers to kubelet server" Jul 7 09:24:07.728801 kubelet[2879]: I0707 09:24:07.728773 2879 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 09:24:07.744641 kubelet[2879]: I0707 09:24:07.743861 2879 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 09:24:07.748734 kubelet[2879]: I0707 09:24:07.747778 2879 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 09:24:07.749800 kubelet[2879]: E0707 09:24:07.749434 2879 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-et027.gb1.brightbox.com\" not found" Jul 7 09:24:07.754223 kubelet[2879]: I0707 09:24:07.754185 2879 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 09:24:07.754724 kubelet[2879]: I0707 09:24:07.754705 2879 reconciler.go:26] "Reconciler: start to sync state" Jul 7 09:24:07.779460 kubelet[2879]: E0707 09:24:07.779384 2879 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 09:24:07.797482 kubelet[2879]: I0707 09:24:07.797305 2879 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 09:24:07.817303 kubelet[2879]: I0707 09:24:07.816680 2879 factory.go:221] Registration of the containerd container factory successfully Jul 7 09:24:07.817303 kubelet[2879]: I0707 09:24:07.816713 2879 factory.go:221] Registration of the systemd container factory successfully Jul 7 09:24:07.821215 kubelet[2879]: I0707 09:24:07.821172 2879 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 09:24:07.823462 kubelet[2879]: I0707 09:24:07.823435 2879 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 09:24:07.824061 kubelet[2879]: I0707 09:24:07.823590 2879 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 09:24:07.824061 kubelet[2879]: I0707 09:24:07.823640 2879 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 09:24:07.824061 kubelet[2879]: E0707 09:24:07.823726 2879 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 09:24:07.863037 sudo[2907]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 7 09:24:07.863628 sudo[2907]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 7 09:24:07.924196 kubelet[2879]: E0707 09:24:07.923960 2879 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 09:24:07.958143 kubelet[2879]: I0707 09:24:07.957642 2879 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 09:24:07.958143 kubelet[2879]: I0707 09:24:07.957675 2879 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 09:24:07.958143 kubelet[2879]: I0707 09:24:07.957704 2879 state_mem.go:36] "Initialized new in-memory state store" Jul 7 09:24:07.958143 kubelet[2879]: I0707 09:24:07.957965 2879 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 09:24:07.958143 kubelet[2879]: I0707 09:24:07.957984 2879 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 09:24:07.958143 kubelet[2879]: I0707 09:24:07.958016 2879 policy_none.go:49] "None policy: Start" Jul 7 09:24:07.962193 kubelet[2879]: I0707 09:24:07.960419 2879 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 09:24:07.962193 kubelet[2879]: I0707 09:24:07.960450 2879 state_mem.go:35] "Initializing new in-memory state store" Jul 7 09:24:07.962193 kubelet[2879]: I0707 09:24:07.960736 2879 state_mem.go:75] "Updated machine memory state" Jul 7 09:24:07.971690 kubelet[2879]: I0707 09:24:07.971654 2879 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 09:24:07.972274 kubelet[2879]: I0707 09:24:07.972201 2879 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 09:24:07.972439 kubelet[2879]: I0707 09:24:07.972379 2879 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 09:24:07.973978 kubelet[2879]: I0707 09:24:07.973957 2879 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 09:24:08.107431 kubelet[2879]: I0707 09:24:08.107198 2879 
kubelet_node_status.go:72] "Attempting to register node" node="srv-et027.gb1.brightbox.com" Jul 7 09:24:08.118971 kubelet[2879]: I0707 09:24:08.118887 2879 kubelet_node_status.go:111] "Node was previously registered" node="srv-et027.gb1.brightbox.com" Jul 7 09:24:08.119626 kubelet[2879]: I0707 09:24:08.119428 2879 kubelet_node_status.go:75] "Successfully registered node" node="srv-et027.gb1.brightbox.com" Jul 7 09:24:08.137424 kubelet[2879]: W0707 09:24:08.136472 2879 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 09:24:08.143121 kubelet[2879]: W0707 09:24:08.142768 2879 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 09:24:08.143892 kubelet[2879]: W0707 09:24:08.143438 2879 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 09:24:08.156378 kubelet[2879]: I0707 09:24:08.156179 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/54943cd7f88808cf7bfa107f3fdfafe7-flexvolume-dir\") pod \"kube-controller-manager-srv-et027.gb1.brightbox.com\" (UID: \"54943cd7f88808cf7bfa107f3fdfafe7\") " pod="kube-system/kube-controller-manager-srv-et027.gb1.brightbox.com" Jul 7 09:24:08.157066 kubelet[2879]: I0707 09:24:08.156942 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/54943cd7f88808cf7bfa107f3fdfafe7-k8s-certs\") pod \"kube-controller-manager-srv-et027.gb1.brightbox.com\" (UID: \"54943cd7f88808cf7bfa107f3fdfafe7\") " pod="kube-system/kube-controller-manager-srv-et027.gb1.brightbox.com" Jul 7 09:24:08.157066 kubelet[2879]: I0707 09:24:08.157006 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/54943cd7f88808cf7bfa107f3fdfafe7-ca-certs\") pod \"kube-controller-manager-srv-et027.gb1.brightbox.com\" (UID: \"54943cd7f88808cf7bfa107f3fdfafe7\") " pod="kube-system/kube-controller-manager-srv-et027.gb1.brightbox.com" Jul 7 09:24:08.257918 kubelet[2879]: I0707 09:24:08.257847 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/54943cd7f88808cf7bfa107f3fdfafe7-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-et027.gb1.brightbox.com\" (UID: \"54943cd7f88808cf7bfa107f3fdfafe7\") " pod="kube-system/kube-controller-manager-srv-et027.gb1.brightbox.com" Jul 7 09:24:08.258450 kubelet[2879]: I0707 09:24:08.258367 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e3af9306e706f2eff209f5f7517a23cd-kubeconfig\") pod \"kube-scheduler-srv-et027.gb1.brightbox.com\" (UID: \"e3af9306e706f2eff209f5f7517a23cd\") " pod="kube-system/kube-scheduler-srv-et027.gb1.brightbox.com" Jul 7 09:24:08.258858 kubelet[2879]: I0707 09:24:08.258781 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbaee94cc45e5d85aa3db2b7f3dadaf8-ca-certs\") pod 
\"kube-apiserver-srv-et027.gb1.brightbox.com\" (UID: \"cbaee94cc45e5d85aa3db2b7f3dadaf8\") " pod="kube-system/kube-apiserver-srv-et027.gb1.brightbox.com" Jul 7 09:24:08.259230 kubelet[2879]: I0707 09:24:08.259184 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbaee94cc45e5d85aa3db2b7f3dadaf8-k8s-certs\") pod \"kube-apiserver-srv-et027.gb1.brightbox.com\" (UID: \"cbaee94cc45e5d85aa3db2b7f3dadaf8\") " pod="kube-system/kube-apiserver-srv-et027.gb1.brightbox.com" Jul 7 09:24:08.260087 kubelet[2879]: I0707 09:24:08.259986 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbaee94cc45e5d85aa3db2b7f3dadaf8-usr-share-ca-certificates\") pod \"kube-apiserver-srv-et027.gb1.brightbox.com\" (UID: \"cbaee94cc45e5d85aa3db2b7f3dadaf8\") " pod="kube-system/kube-apiserver-srv-et027.gb1.brightbox.com" Jul 7 09:24:08.260337 kubelet[2879]: I0707 09:24:08.260313 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/54943cd7f88808cf7bfa107f3fdfafe7-kubeconfig\") pod \"kube-controller-manager-srv-et027.gb1.brightbox.com\" (UID: \"54943cd7f88808cf7bfa107f3fdfafe7\") " pod="kube-system/kube-controller-manager-srv-et027.gb1.brightbox.com" Jul 7 09:24:08.714131 kubelet[2879]: I0707 09:24:08.713453 2879 apiserver.go:52] "Watching apiserver" Jul 7 09:24:08.737419 sudo[2907]: pam_unix(sudo:session): session closed for user root Jul 7 09:24:08.754888 kubelet[2879]: I0707 09:24:08.754777 2879 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 09:24:08.761753 kubelet[2879]: I0707 09:24:08.761637 2879 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-et027.gb1.brightbox.com" podStartSLOduration=0.761549405 podStartE2EDuration="761.549405ms" podCreationTimestamp="2025-07-07 09:24:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 09:24:08.760539108 +0000 UTC m=+1.170647514" watchObservedRunningTime="2025-07-07 09:24:08.761549405 +0000 UTC m=+1.171657804" Jul 7 09:24:08.775735 kubelet[2879]: I0707 09:24:08.775502 2879 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-et027.gb1.brightbox.com" podStartSLOduration=0.775474551 podStartE2EDuration="775.474551ms" podCreationTimestamp="2025-07-07 09:24:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 09:24:08.775374912 +0000 UTC m=+1.185483318" watchObservedRunningTime="2025-07-07 09:24:08.775474551 +0000 UTC m=+1.185582943" Jul 7 09:24:08.793501 kubelet[2879]: I0707 09:24:08.793340 2879 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-et027.gb1.brightbox.com" podStartSLOduration=0.793310497 podStartE2EDuration="793.310497ms" podCreationTimestamp="2025-07-07 09:24:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 09:24:08.788491949 +0000 UTC m=+1.198600360" watchObservedRunningTime="2025-07-07 09:24:08.793310497 +0000 UTC m=+1.203418901" Jul 7 
09:24:08.964323 kubelet[2879]: W0707 09:24:08.964130 2879 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 09:24:08.964323 kubelet[2879]: E0707 09:24:08.964247 2879 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-srv-et027.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-et027.gb1.brightbox.com" Jul 7 09:24:10.736152 sudo[1903]: pam_unix(sudo:session): session closed for user root Jul 7 09:24:10.882177 sshd[1902]: Connection closed by 139.178.89.65 port 41346 Jul 7 09:24:10.884288 sshd-session[1900]: pam_unix(sshd:session): session closed for user core Jul 7 09:24:10.892731 systemd[1]: sshd@8-10.243.72.42:22-139.178.89.65:41346.service: Deactivated successfully. Jul 7 09:24:10.898453 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 09:24:10.898943 systemd[1]: session-11.scope: Consumed 6.628s CPU time, 216.8M memory peak. Jul 7 09:24:10.903805 systemd-logind[1566]: Session 11 logged out. Waiting for processes to exit. Jul 7 09:24:10.912067 systemd-logind[1566]: Removed session 11. Jul 7 09:24:11.019644 kubelet[2879]: I0707 09:24:11.019473 2879 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 09:24:11.022548 kubelet[2879]: I0707 09:24:11.021870 2879 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 09:24:11.022849 containerd[1593]: time="2025-07-07T09:24:11.021516286Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 7 09:24:12.052471 systemd[1]: Created slice kubepods-besteffort-pod9c920051_d610_4379_bc09_43addb2b185b.slice - libcontainer container kubepods-besteffort-pod9c920051_d610_4379_bc09_43addb2b185b.slice. 
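Earlier in this stretch the kubelet keeps retrying its node Lease in kube-node-lease ("Failed to ensure lease exists, will retry") and its node registration until the apiserver answers; once registered, it later receives a pod CIDR (192.168.0.0/24 just above). A minimal client-go sketch for inspecting those same two objects from outside the node — the kubeconfig path is an assumption, the node name comes from the log:

```go
// Read the node's Lease and its spec.podCIDR, the two things the kubelet is
// negotiating with the apiserver in the entries above. Outside-observer code,
// not kubelet's own code path; kubeconfig path is an assumption.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumption
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	nodeName := "srv-et027.gb1.brightbox.com" // from the log above

	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("lease last renewed:", lease.Spec.RenewTime)

	node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("podCIDR:", node.Spec.PodCIDR)
}
```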
Jul 7 09:24:12.057158 kubelet[2879]: W0707 09:24:12.056865 2879 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:srv-et027.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-et027.gb1.brightbox.com' and this object Jul 7 09:24:12.057158 kubelet[2879]: E0707 09:24:12.056956 2879 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:srv-et027.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-et027.gb1.brightbox.com' and this object" logger="UnhandledError" Jul 7 09:24:12.057158 kubelet[2879]: W0707 09:24:12.057072 2879 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:srv-et027.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-et027.gb1.brightbox.com' and this object Jul 7 09:24:12.057158 kubelet[2879]: E0707 09:24:12.057119 2879 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:srv-et027.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-et027.gb1.brightbox.com' and this object" logger="UnhandledError" Jul 7 09:24:12.061413 kubelet[2879]: W0707 09:24:12.061215 2879 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:srv-et027.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'srv-et027.gb1.brightbox.com' and this object Jul 7 09:24:12.061413 kubelet[2879]: E0707 09:24:12.061276 2879 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:srv-et027.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-et027.gb1.brightbox.com' and this object" logger="UnhandledError" Jul 7 09:24:12.061413 kubelet[2879]: W0707 09:24:12.061356 2879 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:srv-et027.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-et027.gb1.brightbox.com' and this object Jul 7 09:24:12.061413 kubelet[2879]: E0707 09:24:12.061379 2879 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:srv-et027.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-et027.gb1.brightbox.com' and this object" 
logger="UnhandledError" Jul 7 09:24:12.063275 kubelet[2879]: W0707 09:24:12.063233 2879 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-et027.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-et027.gb1.brightbox.com' and this object Jul 7 09:24:12.063639 kubelet[2879]: E0707 09:24:12.063307 2879 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:srv-et027.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-et027.gb1.brightbox.com' and this object" logger="UnhandledError" Jul 7 09:24:12.069534 systemd[1]: Created slice kubepods-burstable-pod90fea754_6b34_4c24_aafd_77026c66f4fe.slice - libcontainer container kubepods-burstable-pod90fea754_6b34_4c24_aafd_77026c66f4fe.slice. Jul 7 09:24:12.092553 kubelet[2879]: I0707 09:24:12.092465 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-host-proc-sys-kernel\") pod \"cilium-bffsr\" (UID: \"90fea754-6b34-4c24-aafd-77026c66f4fe\") " pod="kube-system/cilium-bffsr" Jul 7 09:24:12.092553 kubelet[2879]: I0707 09:24:12.092542 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c920051-d610-4379-bc09-43addb2b185b-xtables-lock\") pod \"kube-proxy-p74d6\" (UID: \"9c920051-d610-4379-bc09-43addb2b185b\") " pod="kube-system/kube-proxy-p74d6" Jul 7 09:24:12.092818 kubelet[2879]: I0707 09:24:12.092572 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-cilium-cgroup\") pod \"cilium-bffsr\" (UID: \"90fea754-6b34-4c24-aafd-77026c66f4fe\") " pod="kube-system/cilium-bffsr" Jul 7 09:24:12.092818 kubelet[2879]: I0707 09:24:12.092598 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c920051-d610-4379-bc09-43addb2b185b-lib-modules\") pod \"kube-proxy-p74d6\" (UID: \"9c920051-d610-4379-bc09-43addb2b185b\") " pod="kube-system/kube-proxy-p74d6" Jul 7 09:24:12.092818 kubelet[2879]: I0707 09:24:12.092624 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6h9b\" (UniqueName: \"kubernetes.io/projected/9c920051-d610-4379-bc09-43addb2b185b-kube-api-access-f6h9b\") pod \"kube-proxy-p74d6\" (UID: \"9c920051-d610-4379-bc09-43addb2b185b\") " pod="kube-system/kube-proxy-p74d6" Jul 7 09:24:12.092818 kubelet[2879]: I0707 09:24:12.092756 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-cilium-run\") pod \"cilium-bffsr\" (UID: \"90fea754-6b34-4c24-aafd-77026c66f4fe\") " pod="kube-system/cilium-bffsr" Jul 7 09:24:12.092818 kubelet[2879]: I0707 09:24:12.092792 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-etc-cni-netd\") pod \"cilium-bffsr\" (UID: \"90fea754-6b34-4c24-aafd-77026c66f4fe\") " pod="kube-system/cilium-bffsr" Jul 7 09:24:12.093046 kubelet[2879]: I0707 09:24:12.092818 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90fea754-6b34-4c24-aafd-77026c66f4fe-cilium-config-path\") pod \"cilium-bffsr\" (UID: \"90fea754-6b34-4c24-aafd-77026c66f4fe\") " pod="kube-system/cilium-bffsr" Jul 7 09:24:12.093046 kubelet[2879]: I0707 09:24:12.092844 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrbk9\" (UniqueName: \"kubernetes.io/projected/90fea754-6b34-4c24-aafd-77026c66f4fe-kube-api-access-wrbk9\") pod \"cilium-bffsr\" (UID: \"90fea754-6b34-4c24-aafd-77026c66f4fe\") " pod="kube-system/cilium-bffsr" Jul 7 09:24:12.093046 kubelet[2879]: I0707 09:24:12.092867 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9c920051-d610-4379-bc09-43addb2b185b-kube-proxy\") pod \"kube-proxy-p74d6\" (UID: \"9c920051-d610-4379-bc09-43addb2b185b\") " pod="kube-system/kube-proxy-p74d6" Jul 7 09:24:12.093046 kubelet[2879]: I0707 09:24:12.092938 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-hostproc\") pod \"cilium-bffsr\" (UID: \"90fea754-6b34-4c24-aafd-77026c66f4fe\") " pod="kube-system/cilium-bffsr" Jul 7 09:24:12.093046 kubelet[2879]: I0707 09:24:12.092968 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-cni-path\") pod \"cilium-bffsr\" (UID: \"90fea754-6b34-4c24-aafd-77026c66f4fe\") " pod="kube-system/cilium-bffsr" Jul 7 09:24:12.093046 kubelet[2879]: I0707 09:24:12.092996 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-xtables-lock\") pod \"cilium-bffsr\" (UID: \"90fea754-6b34-4c24-aafd-77026c66f4fe\") " pod="kube-system/cilium-bffsr" Jul 7 09:24:12.102121 kubelet[2879]: I0707 09:24:12.101295 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/90fea754-6b34-4c24-aafd-77026c66f4fe-hubble-tls\") pod \"cilium-bffsr\" (UID: \"90fea754-6b34-4c24-aafd-77026c66f4fe\") " pod="kube-system/cilium-bffsr" Jul 7 09:24:12.102121 kubelet[2879]: I0707 09:24:12.101408 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-bpf-maps\") pod \"cilium-bffsr\" (UID: \"90fea754-6b34-4c24-aafd-77026c66f4fe\") " pod="kube-system/cilium-bffsr" Jul 7 09:24:12.102121 kubelet[2879]: I0707 09:24:12.101444 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-lib-modules\") pod \"cilium-bffsr\" (UID: \"90fea754-6b34-4c24-aafd-77026c66f4fe\") " 
pod="kube-system/cilium-bffsr" Jul 7 09:24:12.102121 kubelet[2879]: I0707 09:24:12.101490 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/90fea754-6b34-4c24-aafd-77026c66f4fe-clustermesh-secrets\") pod \"cilium-bffsr\" (UID: \"90fea754-6b34-4c24-aafd-77026c66f4fe\") " pod="kube-system/cilium-bffsr" Jul 7 09:24:12.102121 kubelet[2879]: I0707 09:24:12.101537 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-host-proc-sys-net\") pod \"cilium-bffsr\" (UID: \"90fea754-6b34-4c24-aafd-77026c66f4fe\") " pod="kube-system/cilium-bffsr" Jul 7 09:24:12.273199 systemd[1]: Created slice kubepods-besteffort-pod7fca043e_2b76_4ab2_9847_908371bef67c.slice - libcontainer container kubepods-besteffort-pod7fca043e_2b76_4ab2_9847_908371bef67c.slice. Jul 7 09:24:12.304703 kubelet[2879]: I0707 09:24:12.304646 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxkcp\" (UniqueName: \"kubernetes.io/projected/7fca043e-2b76-4ab2-9847-908371bef67c-kube-api-access-pxkcp\") pod \"cilium-operator-5d85765b45-rrk55\" (UID: \"7fca043e-2b76-4ab2-9847-908371bef67c\") " pod="kube-system/cilium-operator-5d85765b45-rrk55" Jul 7 09:24:12.305172 kubelet[2879]: I0707 09:24:12.305055 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7fca043e-2b76-4ab2-9847-908371bef67c-cilium-config-path\") pod \"cilium-operator-5d85765b45-rrk55\" (UID: \"7fca043e-2b76-4ab2-9847-908371bef67c\") " pod="kube-system/cilium-operator-5d85765b45-rrk55" Jul 7 09:24:13.207890 kubelet[2879]: E0707 09:24:13.206338 2879 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Jul 7 09:24:13.207890 kubelet[2879]: E0707 09:24:13.206522 2879 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9c920051-d610-4379-bc09-43addb2b185b-kube-proxy podName:9c920051-d610-4379-bc09-43addb2b185b nodeName:}" failed. No retries permitted until 2025-07-07 09:24:13.706480644 +0000 UTC m=+6.116589048 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/9c920051-d610-4379-bc09-43addb2b185b-kube-proxy") pod "kube-proxy-p74d6" (UID: "9c920051-d610-4379-bc09-43addb2b185b") : failed to sync configmap cache: timed out waiting for the condition Jul 7 09:24:13.207890 kubelet[2879]: E0707 09:24:13.206997 2879 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jul 7 09:24:13.207890 kubelet[2879]: E0707 09:24:13.207054 2879 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/90fea754-6b34-4c24-aafd-77026c66f4fe-cilium-config-path podName:90fea754-6b34-4c24-aafd-77026c66f4fe nodeName:}" failed. No retries permitted until 2025-07-07 09:24:13.707040253 +0000 UTC m=+6.117148650 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/90fea754-6b34-4c24-aafd-77026c66f4fe-cilium-config-path") pod "cilium-bffsr" (UID: "90fea754-6b34-4c24-aafd-77026c66f4fe") : failed to sync configmap cache: timed out waiting for the condition Jul 7 09:24:13.407820 kubelet[2879]: E0707 09:24:13.407236 2879 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jul 7 09:24:13.407820 kubelet[2879]: E0707 09:24:13.407443 2879 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7fca043e-2b76-4ab2-9847-908371bef67c-cilium-config-path podName:7fca043e-2b76-4ab2-9847-908371bef67c nodeName:}" failed. No retries permitted until 2025-07-07 09:24:13.907368743 +0000 UTC m=+6.317477127 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/7fca043e-2b76-4ab2-9847-908371bef67c-cilium-config-path") pod "cilium-operator-5d85765b45-rrk55" (UID: "7fca043e-2b76-4ab2-9847-908371bef67c") : failed to sync configmap cache: timed out waiting for the condition Jul 7 09:24:13.865652 containerd[1593]: time="2025-07-07T09:24:13.865550593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p74d6,Uid:9c920051-d610-4379-bc09-43addb2b185b,Namespace:kube-system,Attempt:0,}" Jul 7 09:24:13.878052 containerd[1593]: time="2025-07-07T09:24:13.877746372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bffsr,Uid:90fea754-6b34-4c24-aafd-77026c66f4fe,Namespace:kube-system,Attempt:0,}" Jul 7 09:24:13.912577 containerd[1593]: time="2025-07-07T09:24:13.912503190Z" level=info msg="connecting to shim 3214922e6f8c3612b93aee0e195367e8548411486b35c0ff2b00042494673cdc" address="unix:///run/containerd/s/871742baa4ba27ea7dedba31f74ff148a69ef897d074b1dd114c42179ccee716" namespace=k8s.io protocol=ttrpc version=3 Jul 7 09:24:13.927337 containerd[1593]: time="2025-07-07T09:24:13.927199367Z" level=info msg="connecting to shim 4859c35264268a46d897e55004ad897e7e7e70432bb31f209bb967d3f399ce37" address="unix:///run/containerd/s/286a551dcebcee98838c12016ee15db42d04defcf2da36dc20549416aba9960b" namespace=k8s.io protocol=ttrpc version=3 Jul 7 09:24:13.983428 systemd[1]: Started cri-containerd-3214922e6f8c3612b93aee0e195367e8548411486b35c0ff2b00042494673cdc.scope - libcontainer container 3214922e6f8c3612b93aee0e195367e8548411486b35c0ff2b00042494673cdc. Jul 7 09:24:13.986135 systemd[1]: Started cri-containerd-4859c35264268a46d897e55004ad897e7e7e70432bb31f209bb967d3f399ce37.scope - libcontainer container 4859c35264268a46d897e55004ad897e7e7e70432bb31f209bb967d3f399ce37. 
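The reflector.go / informers/factory.go warnings above are kubelet's shared informers being denied by the node authorizer until the pods that reference those ConfigMaps and Secrets are bound to this node; the 500 ms MountVolume retries then succeed once the configmap cache syncs. Below is a rough sketch of the same list/watch informer pattern with client-go, watching ConfigMaps in kube-system; the kubeconfig path is an assumption and this runs with ordinary credentials, so it does not reproduce the node-authorizer denial itself.

```go
// Sketch of the informer/reflector pattern behind the log's reflector errors
// and "failed to sync configmap cache" retries: list+watch ConfigMaps in
// kube-system through a shared informer factory, then read from its cache.
package main

import (
	"fmt"
	"log"
	"time"

	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumption
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	factory := informers.NewSharedInformerFactoryWithOptions(
		cs, 30*time.Second, informers.WithNamespace("kube-system"))
	cmInformer := factory.Core().V1().ConfigMaps()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	if !cache.WaitForCacheSync(stop, cmInformer.Informer().HasSynced) {
		log.Fatal("configmap cache never synced")
	}

	cms, err := cmInformer.Lister().ConfigMaps("kube-system").List(labels.Everything())
	if err != nil {
		log.Fatal(err)
	}
	for _, cm := range cms {
		// On this cluster one would expect kube-proxy, cilium-config,
		// kube-root-ca.crt among the results, per the log above.
		fmt.Println("cached configmap:", cm.Name)
	}
}
```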
Jul 7 09:24:14.048401 containerd[1593]: time="2025-07-07T09:24:14.048292936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bffsr,Uid:90fea754-6b34-4c24-aafd-77026c66f4fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"4859c35264268a46d897e55004ad897e7e7e70432bb31f209bb967d3f399ce37\"" Jul 7 09:24:14.051151 containerd[1593]: time="2025-07-07T09:24:14.050961494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p74d6,Uid:9c920051-d610-4379-bc09-43addb2b185b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3214922e6f8c3612b93aee0e195367e8548411486b35c0ff2b00042494673cdc\"" Jul 7 09:24:14.055705 containerd[1593]: time="2025-07-07T09:24:14.055605764Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 7 09:24:14.058125 containerd[1593]: time="2025-07-07T09:24:14.057995260Z" level=info msg="CreateContainer within sandbox \"3214922e6f8c3612b93aee0e195367e8548411486b35c0ff2b00042494673cdc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 09:24:14.079245 containerd[1593]: time="2025-07-07T09:24:14.079006221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-rrk55,Uid:7fca043e-2b76-4ab2-9847-908371bef67c,Namespace:kube-system,Attempt:0,}" Jul 7 09:24:14.085390 containerd[1593]: time="2025-07-07T09:24:14.085352389Z" level=info msg="Container 2d39905dba589a5845b9b40ef74baf889184675e64325b841bfef444b75bc547: CDI devices from CRI Config.CDIDevices: []" Jul 7 09:24:14.095549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1524258707.mount: Deactivated successfully. Jul 7 09:24:14.104846 containerd[1593]: time="2025-07-07T09:24:14.103154933Z" level=info msg="connecting to shim 498b46b4c0f25719e17c8eddc8caa80d3e453c46ceb0ff5e983a33503d651593" address="unix:///run/containerd/s/dbe94032a745aeba822ec6e5d651af8ba49fcaa83062ea058c9a988bf047b60e" namespace=k8s.io protocol=ttrpc version=3 Jul 7 09:24:14.112277 containerd[1593]: time="2025-07-07T09:24:14.112214266Z" level=info msg="CreateContainer within sandbox \"3214922e6f8c3612b93aee0e195367e8548411486b35c0ff2b00042494673cdc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2d39905dba589a5845b9b40ef74baf889184675e64325b841bfef444b75bc547\"" Jul 7 09:24:14.114732 containerd[1593]: time="2025-07-07T09:24:14.114645108Z" level=info msg="StartContainer for \"2d39905dba589a5845b9b40ef74baf889184675e64325b841bfef444b75bc547\"" Jul 7 09:24:14.119308 containerd[1593]: time="2025-07-07T09:24:14.118901175Z" level=info msg="connecting to shim 2d39905dba589a5845b9b40ef74baf889184675e64325b841bfef444b75bc547" address="unix:///run/containerd/s/871742baa4ba27ea7dedba31f74ff148a69ef897d074b1dd114c42179ccee716" protocol=ttrpc version=3 Jul 7 09:24:14.156539 systemd[1]: Started cri-containerd-498b46b4c0f25719e17c8eddc8caa80d3e453c46ceb0ff5e983a33503d651593.scope - libcontainer container 498b46b4c0f25719e17c8eddc8caa80d3e453c46ceb0ff5e983a33503d651593. Jul 7 09:24:14.164551 systemd[1]: Started cri-containerd-2d39905dba589a5845b9b40ef74baf889184675e64325b841bfef444b75bc547.scope - libcontainer container 2d39905dba589a5845b9b40ef74baf889184675e64325b841bfef444b75bc547. 
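The "PullImage" line above is kubelet asking containerd's CRI image service to fetch the Cilium image by digest. A hedged equivalent of that single call, using the conventional containerd CRI socket path (an assumption for this host) and the image reference taken from the log:

```go
// Single CRI ImageService call corresponding to the PullImage log line above.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock", // assumed CRI endpoint
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	resp, err := img.PullImage(context.Background(), &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{
			// Reference from the log: tag plus pinned digest.
			Image: "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5",
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	// The log's "returns image reference" line reports this resolved image ID.
	log.Println("image ref:", resp.ImageRef)
}
```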
Jul 7 09:24:14.273452 containerd[1593]: time="2025-07-07T09:24:14.273343901Z" level=info msg="StartContainer for \"2d39905dba589a5845b9b40ef74baf889184675e64325b841bfef444b75bc547\" returns successfully" Jul 7 09:24:14.281564 containerd[1593]: time="2025-07-07T09:24:14.281425827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-rrk55,Uid:7fca043e-2b76-4ab2-9847-908371bef67c,Namespace:kube-system,Attempt:0,} returns sandbox id \"498b46b4c0f25719e17c8eddc8caa80d3e453c46ceb0ff5e983a33503d651593\"" Jul 7 09:24:14.942604 kubelet[2879]: I0707 09:24:14.942527 2879 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p74d6" podStartSLOduration=3.942504443 podStartE2EDuration="3.942504443s" podCreationTimestamp="2025-07-07 09:24:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 09:24:14.939281649 +0000 UTC m=+7.349390074" watchObservedRunningTime="2025-07-07 09:24:14.942504443 +0000 UTC m=+7.352612874" Jul 7 09:24:21.318668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1037381921.mount: Deactivated successfully. Jul 7 09:24:24.537438 containerd[1593]: time="2025-07-07T09:24:24.537339334Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 09:24:24.540005 containerd[1593]: time="2025-07-07T09:24:24.539968655Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 7 09:24:24.542154 containerd[1593]: time="2025-07-07T09:24:24.541846277Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 09:24:24.547433 containerd[1593]: time="2025-07-07T09:24:24.547033257Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.491341693s" Jul 7 09:24:24.547433 containerd[1593]: time="2025-07-07T09:24:24.547092790Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 7 09:24:24.550711 containerd[1593]: time="2025-07-07T09:24:24.550677527Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 7 09:24:24.556111 containerd[1593]: time="2025-07-07T09:24:24.555873356Z" level=info msg="CreateContainer within sandbox \"4859c35264268a46d897e55004ad897e7e7e70432bb31f209bb967d3f399ce37\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 7 09:24:24.578458 containerd[1593]: time="2025-07-07T09:24:24.578243836Z" level=info msg="Container 17416d3eef7a01236f35858e39d9c4c835d9de756458e69b3d39888ce610d2d4: CDI devices from CRI Config.CDIDevices: []" Jul 7 09:24:24.588822 containerd[1593]: time="2025-07-07T09:24:24.588741903Z" level=info 
msg="CreateContainer within sandbox \"4859c35264268a46d897e55004ad897e7e7e70432bb31f209bb967d3f399ce37\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"17416d3eef7a01236f35858e39d9c4c835d9de756458e69b3d39888ce610d2d4\"" Jul 7 09:24:24.590541 containerd[1593]: time="2025-07-07T09:24:24.590425016Z" level=info msg="StartContainer for \"17416d3eef7a01236f35858e39d9c4c835d9de756458e69b3d39888ce610d2d4\"" Jul 7 09:24:24.594041 containerd[1593]: time="2025-07-07T09:24:24.593458205Z" level=info msg="connecting to shim 17416d3eef7a01236f35858e39d9c4c835d9de756458e69b3d39888ce610d2d4" address="unix:///run/containerd/s/286a551dcebcee98838c12016ee15db42d04defcf2da36dc20549416aba9960b" protocol=ttrpc version=3 Jul 7 09:24:24.635349 systemd[1]: Started cri-containerd-17416d3eef7a01236f35858e39d9c4c835d9de756458e69b3d39888ce610d2d4.scope - libcontainer container 17416d3eef7a01236f35858e39d9c4c835d9de756458e69b3d39888ce610d2d4. Jul 7 09:24:24.681427 containerd[1593]: time="2025-07-07T09:24:24.681374976Z" level=info msg="StartContainer for \"17416d3eef7a01236f35858e39d9c4c835d9de756458e69b3d39888ce610d2d4\" returns successfully" Jul 7 09:24:24.700069 systemd[1]: cri-containerd-17416d3eef7a01236f35858e39d9c4c835d9de756458e69b3d39888ce610d2d4.scope: Deactivated successfully. Jul 7 09:24:24.701125 systemd[1]: cri-containerd-17416d3eef7a01236f35858e39d9c4c835d9de756458e69b3d39888ce610d2d4.scope: Consumed 31ms CPU time, 6.6M memory peak, 4K read from disk, 3.2M written to disk. Jul 7 09:24:24.745818 containerd[1593]: time="2025-07-07T09:24:24.742793804Z" level=info msg="received exit event container_id:\"17416d3eef7a01236f35858e39d9c4c835d9de756458e69b3d39888ce610d2d4\" id:\"17416d3eef7a01236f35858e39d9c4c835d9de756458e69b3d39888ce610d2d4\" pid:3296 exited_at:{seconds:1751880264 nanos:706384314}" Jul 7 09:24:24.758485 containerd[1593]: time="2025-07-07T09:24:24.758428052Z" level=info msg="TaskExit event in podsandbox handler container_id:\"17416d3eef7a01236f35858e39d9c4c835d9de756458e69b3d39888ce610d2d4\" id:\"17416d3eef7a01236f35858e39d9c4c835d9de756458e69b3d39888ce610d2d4\" pid:3296 exited_at:{seconds:1751880264 nanos:706384314}" Jul 7 09:24:25.576382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17416d3eef7a01236f35858e39d9c4c835d9de756458e69b3d39888ce610d2d4-rootfs.mount: Deactivated successfully. 
Jul 7 09:24:25.962982 containerd[1593]: time="2025-07-07T09:24:25.962876741Z" level=info msg="CreateContainer within sandbox \"4859c35264268a46d897e55004ad897e7e7e70432bb31f209bb967d3f399ce37\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 7 09:24:25.980005 containerd[1593]: time="2025-07-07T09:24:25.979943992Z" level=info msg="Container b514e85c3a34f17638da873f003eb24c365d99216febb80b425590c3bf07b372: CDI devices from CRI Config.CDIDevices: []" Jul 7 09:24:26.000205 containerd[1593]: time="2025-07-07T09:24:26.000137856Z" level=info msg="CreateContainer within sandbox \"4859c35264268a46d897e55004ad897e7e7e70432bb31f209bb967d3f399ce37\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b514e85c3a34f17638da873f003eb24c365d99216febb80b425590c3bf07b372\"" Jul 7 09:24:26.001749 containerd[1593]: time="2025-07-07T09:24:26.001718250Z" level=info msg="StartContainer for \"b514e85c3a34f17638da873f003eb24c365d99216febb80b425590c3bf07b372\"" Jul 7 09:24:26.003397 containerd[1593]: time="2025-07-07T09:24:26.003365624Z" level=info msg="connecting to shim b514e85c3a34f17638da873f003eb24c365d99216febb80b425590c3bf07b372" address="unix:///run/containerd/s/286a551dcebcee98838c12016ee15db42d04defcf2da36dc20549416aba9960b" protocol=ttrpc version=3 Jul 7 09:24:26.053087 systemd[1]: Started cri-containerd-b514e85c3a34f17638da873f003eb24c365d99216febb80b425590c3bf07b372.scope - libcontainer container b514e85c3a34f17638da873f003eb24c365d99216febb80b425590c3bf07b372. Jul 7 09:24:26.129573 containerd[1593]: time="2025-07-07T09:24:26.129440637Z" level=info msg="StartContainer for \"b514e85c3a34f17638da873f003eb24c365d99216febb80b425590c3bf07b372\" returns successfully" Jul 7 09:24:26.150623 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 09:24:26.150993 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 09:24:26.151891 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 7 09:24:26.156514 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 09:24:26.160130 containerd[1593]: time="2025-07-07T09:24:26.159865426Z" level=info msg="received exit event container_id:\"b514e85c3a34f17638da873f003eb24c365d99216febb80b425590c3bf07b372\" id:\"b514e85c3a34f17638da873f003eb24c365d99216febb80b425590c3bf07b372\" pid:3340 exited_at:{seconds:1751880266 nanos:159527128}" Jul 7 09:24:26.160720 containerd[1593]: time="2025-07-07T09:24:26.160690313Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b514e85c3a34f17638da873f003eb24c365d99216febb80b425590c3bf07b372\" id:\"b514e85c3a34f17638da873f003eb24c365d99216febb80b425590c3bf07b372\" pid:3340 exited_at:{seconds:1751880266 nanos:159527128}" Jul 7 09:24:26.161067 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 7 09:24:26.161772 systemd[1]: cri-containerd-b514e85c3a34f17638da873f003eb24c365d99216febb80b425590c3bf07b372.scope: Deactivated successfully. Jul 7 09:24:26.210434 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 09:24:26.577772 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b514e85c3a34f17638da873f003eb24c365d99216febb80b425590c3bf07b372-rootfs.mount: Deactivated successfully. 
Jul 7 09:24:26.973394 containerd[1593]: time="2025-07-07T09:24:26.971493676Z" level=info msg="CreateContainer within sandbox \"4859c35264268a46d897e55004ad897e7e7e70432bb31f209bb967d3f399ce37\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 7 09:24:27.022707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3006899163.mount: Deactivated successfully. Jul 7 09:24:27.026884 containerd[1593]: time="2025-07-07T09:24:27.025481429Z" level=info msg="Container f35f0165655f4d80a769911b662923db7c83458ce0cf6cc52db8a4719878b6d0: CDI devices from CRI Config.CDIDevices: []" Jul 7 09:24:27.055637 containerd[1593]: time="2025-07-07T09:24:27.055559524Z" level=info msg="CreateContainer within sandbox \"4859c35264268a46d897e55004ad897e7e7e70432bb31f209bb967d3f399ce37\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f35f0165655f4d80a769911b662923db7c83458ce0cf6cc52db8a4719878b6d0\"" Jul 7 09:24:27.057753 containerd[1593]: time="2025-07-07T09:24:27.057714532Z" level=info msg="StartContainer for \"f35f0165655f4d80a769911b662923db7c83458ce0cf6cc52db8a4719878b6d0\"" Jul 7 09:24:27.064697 containerd[1593]: time="2025-07-07T09:24:27.063246321Z" level=info msg="connecting to shim f35f0165655f4d80a769911b662923db7c83458ce0cf6cc52db8a4719878b6d0" address="unix:///run/containerd/s/286a551dcebcee98838c12016ee15db42d04defcf2da36dc20549416aba9960b" protocol=ttrpc version=3 Jul 7 09:24:27.113368 systemd[1]: Started cri-containerd-f35f0165655f4d80a769911b662923db7c83458ce0cf6cc52db8a4719878b6d0.scope - libcontainer container f35f0165655f4d80a769911b662923db7c83458ce0cf6cc52db8a4719878b6d0. Jul 7 09:24:27.251253 containerd[1593]: time="2025-07-07T09:24:27.251059376Z" level=info msg="StartContainer for \"f35f0165655f4d80a769911b662923db7c83458ce0cf6cc52db8a4719878b6d0\" returns successfully" Jul 7 09:24:27.253632 systemd[1]: cri-containerd-f35f0165655f4d80a769911b662923db7c83458ce0cf6cc52db8a4719878b6d0.scope: Deactivated successfully. Jul 7 09:24:27.254201 systemd[1]: cri-containerd-f35f0165655f4d80a769911b662923db7c83458ce0cf6cc52db8a4719878b6d0.scope: Consumed 41ms CPU time, 5.5M memory peak, 1M read from disk. Jul 7 09:24:27.258015 containerd[1593]: time="2025-07-07T09:24:27.257961529Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f35f0165655f4d80a769911b662923db7c83458ce0cf6cc52db8a4719878b6d0\" id:\"f35f0165655f4d80a769911b662923db7c83458ce0cf6cc52db8a4719878b6d0\" pid:3399 exited_at:{seconds:1751880267 nanos:255762791}" Jul 7 09:24:27.258833 containerd[1593]: time="2025-07-07T09:24:27.258745638Z" level=info msg="received exit event container_id:\"f35f0165655f4d80a769911b662923db7c83458ce0cf6cc52db8a4719878b6d0\" id:\"f35f0165655f4d80a769911b662923db7c83458ce0cf6cc52db8a4719878b6d0\" pid:3399 exited_at:{seconds:1751880267 nanos:255762791}" Jul 7 09:24:27.576605 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f35f0165655f4d80a769911b662923db7c83458ce0cf6cc52db8a4719878b6d0-rootfs.mount: Deactivated successfully. 
Jul 7 09:24:27.910820 containerd[1593]: time="2025-07-07T09:24:27.910697791Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 09:24:27.914830 containerd[1593]: time="2025-07-07T09:24:27.914747368Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 7 09:24:27.918553 containerd[1593]: time="2025-07-07T09:24:27.918372287Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 09:24:27.921900 containerd[1593]: time="2025-07-07T09:24:27.921676940Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.370836887s" Jul 7 09:24:27.921900 containerd[1593]: time="2025-07-07T09:24:27.921741977Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 7 09:24:27.926171 containerd[1593]: time="2025-07-07T09:24:27.925327740Z" level=info msg="CreateContainer within sandbox \"498b46b4c0f25719e17c8eddc8caa80d3e453c46ceb0ff5e983a33503d651593\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 7 09:24:27.959335 containerd[1593]: time="2025-07-07T09:24:27.956913962Z" level=info msg="Container fa21b138973dcf7fce490d3037d17a93b8e729cf35422fa1592e5cc39c5167f9: CDI devices from CRI Config.CDIDevices: []" Jul 7 09:24:27.961558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1563320830.mount: Deactivated successfully. Jul 7 09:24:27.988193 containerd[1593]: time="2025-07-07T09:24:27.986336976Z" level=info msg="CreateContainer within sandbox \"4859c35264268a46d897e55004ad897e7e7e70432bb31f209bb967d3f399ce37\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 7 09:24:27.994242 containerd[1593]: time="2025-07-07T09:24:27.994011412Z" level=info msg="CreateContainer within sandbox \"498b46b4c0f25719e17c8eddc8caa80d3e453c46ceb0ff5e983a33503d651593\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fa21b138973dcf7fce490d3037d17a93b8e729cf35422fa1592e5cc39c5167f9\"" Jul 7 09:24:27.996623 containerd[1593]: time="2025-07-07T09:24:27.996439076Z" level=info msg="StartContainer for \"fa21b138973dcf7fce490d3037d17a93b8e729cf35422fa1592e5cc39c5167f9\"" Jul 7 09:24:28.000564 containerd[1593]: time="2025-07-07T09:24:28.000505700Z" level=info msg="connecting to shim fa21b138973dcf7fce490d3037d17a93b8e729cf35422fa1592e5cc39c5167f9" address="unix:///run/containerd/s/dbe94032a745aeba822ec6e5d651af8ba49fcaa83062ea058c9a988bf047b60e" protocol=ttrpc version=3 Jul 7 09:24:28.052810 systemd[1]: Started cri-containerd-fa21b138973dcf7fce490d3037d17a93b8e729cf35422fa1592e5cc39c5167f9.scope - libcontainer container fa21b138973dcf7fce490d3037d17a93b8e729cf35422fa1592e5cc39c5167f9. 
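Taken together with the earlier cilium pull, the two image fetches ran back to back (the operator pull was requested at 09:24:24.55, right after the cilium pull returned) and give a rough feel for registry throughput on this host: the cilium image reports 166730503 bytes read in 10.491341693s, roughly 166.7 MB / 10.49 s ≈ 15.9 MB/s, while the smaller operator-generic image reports 18904197 bytes in 3.370836887s, roughly 5.6 MB/s. These are ballpark figures only: "bytes read" is what containerd counted for the pull, and the elapsed time includes unpacking the layers, not just network transfer.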
Jul 7 09:24:28.059130 containerd[1593]: time="2025-07-07T09:24:28.058594536Z" level=info msg="Container bdd76bd407379764696e533aa5583b51e786e82ea515c9ccdee87a620d7c8383: CDI devices from CRI Config.CDIDevices: []" Jul 7 09:24:28.081150 containerd[1593]: time="2025-07-07T09:24:28.081067032Z" level=info msg="CreateContainer within sandbox \"4859c35264268a46d897e55004ad897e7e7e70432bb31f209bb967d3f399ce37\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bdd76bd407379764696e533aa5583b51e786e82ea515c9ccdee87a620d7c8383\"" Jul 7 09:24:28.083010 containerd[1593]: time="2025-07-07T09:24:28.082107524Z" level=info msg="StartContainer for \"bdd76bd407379764696e533aa5583b51e786e82ea515c9ccdee87a620d7c8383\"" Jul 7 09:24:28.084028 containerd[1593]: time="2025-07-07T09:24:28.083989770Z" level=info msg="connecting to shim bdd76bd407379764696e533aa5583b51e786e82ea515c9ccdee87a620d7c8383" address="unix:///run/containerd/s/286a551dcebcee98838c12016ee15db42d04defcf2da36dc20549416aba9960b" protocol=ttrpc version=3 Jul 7 09:24:28.146217 systemd[1]: Started cri-containerd-bdd76bd407379764696e533aa5583b51e786e82ea515c9ccdee87a620d7c8383.scope - libcontainer container bdd76bd407379764696e533aa5583b51e786e82ea515c9ccdee87a620d7c8383. Jul 7 09:24:28.256599 containerd[1593]: time="2025-07-07T09:24:28.255741311Z" level=info msg="StartContainer for \"fa21b138973dcf7fce490d3037d17a93b8e729cf35422fa1592e5cc39c5167f9\" returns successfully" Jul 7 09:24:28.275658 systemd[1]: cri-containerd-bdd76bd407379764696e533aa5583b51e786e82ea515c9ccdee87a620d7c8383.scope: Deactivated successfully. Jul 7 09:24:28.277832 containerd[1593]: time="2025-07-07T09:24:28.277785274Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bdd76bd407379764696e533aa5583b51e786e82ea515c9ccdee87a620d7c8383\" id:\"bdd76bd407379764696e533aa5583b51e786e82ea515c9ccdee87a620d7c8383\" pid:3461 exited_at:{seconds:1751880268 nanos:277408289}" Jul 7 09:24:28.279348 containerd[1593]: time="2025-07-07T09:24:28.279312647Z" level=info msg="received exit event container_id:\"bdd76bd407379764696e533aa5583b51e786e82ea515c9ccdee87a620d7c8383\" id:\"bdd76bd407379764696e533aa5583b51e786e82ea515c9ccdee87a620d7c8383\" pid:3461 exited_at:{seconds:1751880268 nanos:277408289}" Jul 7 09:24:28.282307 containerd[1593]: time="2025-07-07T09:24:28.282276445Z" level=info msg="StartContainer for \"bdd76bd407379764696e533aa5583b51e786e82ea515c9ccdee87a620d7c8383\" returns successfully" Jul 7 09:24:28.989274 containerd[1593]: time="2025-07-07T09:24:28.987777752Z" level=info msg="CreateContainer within sandbox \"4859c35264268a46d897e55004ad897e7e7e70432bb31f209bb967d3f399ce37\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 7 09:24:29.018318 containerd[1593]: time="2025-07-07T09:24:29.018202092Z" level=info msg="Container adab9f175d1144e19817bb2e65e9f5a241692b9e330749e6137ecbd25a824ec0: CDI devices from CRI Config.CDIDevices: []" Jul 7 09:24:29.022870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1187165388.mount: Deactivated successfully. 
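The TaskExit records in this stretch of the journal come from containerd's event bus, which the CRI plugin subscribes to in order to notice container exits. The same stream can be watched from a small Go program; a sketch, assuming read access to the containerd socket and using containerd's event filter syntax:

    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Narrow the event stream down to task-exit events.
        ch, errs := client.Subscribe(context.Background(), `topic~="/tasks/exit"`)
        for {
            select {
            case env := <-ch:
                log.Printf("namespace=%s topic=%s", env.Namespace, env.Topic)
            case err := <-errs:
                log.Fatal(err)
            }
        }
    }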
Jul 7 09:24:29.037763 containerd[1593]: time="2025-07-07T09:24:29.037701472Z" level=info msg="CreateContainer within sandbox \"4859c35264268a46d897e55004ad897e7e7e70432bb31f209bb967d3f399ce37\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"adab9f175d1144e19817bb2e65e9f5a241692b9e330749e6137ecbd25a824ec0\"" Jul 7 09:24:29.049967 containerd[1593]: time="2025-07-07T09:24:29.038863472Z" level=info msg="StartContainer for \"adab9f175d1144e19817bb2e65e9f5a241692b9e330749e6137ecbd25a824ec0\"" Jul 7 09:24:29.049967 containerd[1593]: time="2025-07-07T09:24:29.042465490Z" level=info msg="connecting to shim adab9f175d1144e19817bb2e65e9f5a241692b9e330749e6137ecbd25a824ec0" address="unix:///run/containerd/s/286a551dcebcee98838c12016ee15db42d04defcf2da36dc20549416aba9960b" protocol=ttrpc version=3 Jul 7 09:24:29.103209 systemd[1]: Started cri-containerd-adab9f175d1144e19817bb2e65e9f5a241692b9e330749e6137ecbd25a824ec0.scope - libcontainer container adab9f175d1144e19817bb2e65e9f5a241692b9e330749e6137ecbd25a824ec0. Jul 7 09:24:29.253866 containerd[1593]: time="2025-07-07T09:24:29.252855063Z" level=info msg="StartContainer for \"adab9f175d1144e19817bb2e65e9f5a241692b9e330749e6137ecbd25a824ec0\" returns successfully" Jul 7 09:24:29.541468 containerd[1593]: time="2025-07-07T09:24:29.540969883Z" level=info msg="TaskExit event in podsandbox handler container_id:\"adab9f175d1144e19817bb2e65e9f5a241692b9e330749e6137ecbd25a824ec0\" id:\"4f8924baf11be522a97223bb39b7eccc23171b18d542ef385b5691103e97d3a6\" pid:3539 exited_at:{seconds:1751880269 nanos:540607279}" Jul 7 09:24:29.616791 kubelet[2879]: I0707 09:24:29.616674 2879 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 7 09:24:29.663978 kubelet[2879]: I0707 09:24:29.663876 2879 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-rrk55" podStartSLOduration=4.024988281 podStartE2EDuration="17.663828746s" podCreationTimestamp="2025-07-07 09:24:12 +0000 UTC" firstStartedPulling="2025-07-07 09:24:14.284602401 +0000 UTC m=+6.694710796" lastFinishedPulling="2025-07-07 09:24:27.923442867 +0000 UTC m=+20.333551261" observedRunningTime="2025-07-07 09:24:29.150426279 +0000 UTC m=+21.560534684" watchObservedRunningTime="2025-07-07 09:24:29.663828746 +0000 UTC m=+22.073937138" Jul 7 09:24:29.682247 systemd[1]: Created slice kubepods-burstable-pod238ca264_8cb5_437e_b42f_f8aa7019cf84.slice - libcontainer container kubepods-burstable-pod238ca264_8cb5_437e_b42f_f8aa7019cf84.slice. Jul 7 09:24:29.696950 systemd[1]: Created slice kubepods-burstable-pod3a1801f2_1cc8_418e_9d96_a71f95d46d35.slice - libcontainer container kubepods-burstable-pod3a1801f2_1cc8_418e_9d96_a71f95d46d35.slice. 
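The podStartSLOduration figure kubelet prints above lines up as the end-to-end startup time minus the image-pull window: for the operator pod, about 17.66s from pod creation (09:24:12) to the watch-observed running time, less the ~13.64s between firstStartedPulling and lastFinishedPulling, leaving the reported ~4.02s. A tiny Go check of that arithmetic, with the timestamps copied from the entry above:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        parse := func(s string) time.Time {
            t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            if err != nil {
                panic(err)
            }
            return t
        }

        created := parse("2025-07-07 09:24:12 +0000 UTC")             // podCreationTimestamp
        watched := parse("2025-07-07 09:24:29.663828746 +0000 UTC")   // watchObservedRunningTime
        pullStart := parse("2025-07-07 09:24:14.284602401 +0000 UTC") // firstStartedPulling
        pullEnd := parse("2025-07-07 09:24:27.923442867 +0000 UTC")   // lastFinishedPulling

        e2e := watched.Sub(created)         // podStartE2EDuration: 17.663828746s
        slo := e2e - pullEnd.Sub(pullStart) // podStartSLOduration: ~4.02s

        fmt.Println("E2E:", e2e)
        fmt.Println("SLO:", slo)
    }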
Jul 7 09:24:29.806714 kubelet[2879]: I0707 09:24:29.805933 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a1801f2-1cc8-418e-9d96-a71f95d46d35-config-volume\") pod \"coredns-7c65d6cfc9-2n7xz\" (UID: \"3a1801f2-1cc8-418e-9d96-a71f95d46d35\") " pod="kube-system/coredns-7c65d6cfc9-2n7xz" Jul 7 09:24:29.806714 kubelet[2879]: I0707 09:24:29.806011 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb2f2\" (UniqueName: \"kubernetes.io/projected/3a1801f2-1cc8-418e-9d96-a71f95d46d35-kube-api-access-wb2f2\") pod \"coredns-7c65d6cfc9-2n7xz\" (UID: \"3a1801f2-1cc8-418e-9d96-a71f95d46d35\") " pod="kube-system/coredns-7c65d6cfc9-2n7xz" Jul 7 09:24:29.806714 kubelet[2879]: I0707 09:24:29.806081 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/238ca264-8cb5-437e-b42f-f8aa7019cf84-config-volume\") pod \"coredns-7c65d6cfc9-klszh\" (UID: \"238ca264-8cb5-437e-b42f-f8aa7019cf84\") " pod="kube-system/coredns-7c65d6cfc9-klszh" Jul 7 09:24:29.806714 kubelet[2879]: I0707 09:24:29.806138 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ghzv\" (UniqueName: \"kubernetes.io/projected/238ca264-8cb5-437e-b42f-f8aa7019cf84-kube-api-access-2ghzv\") pod \"coredns-7c65d6cfc9-klszh\" (UID: \"238ca264-8cb5-437e-b42f-f8aa7019cf84\") " pod="kube-system/coredns-7c65d6cfc9-klszh" Jul 7 09:24:29.993814 containerd[1593]: time="2025-07-07T09:24:29.993752132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-klszh,Uid:238ca264-8cb5-437e-b42f-f8aa7019cf84,Namespace:kube-system,Attempt:0,}" Jul 7 09:24:30.004306 containerd[1593]: time="2025-07-07T09:24:30.004227719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2n7xz,Uid:3a1801f2-1cc8-418e-9d96-a71f95d46d35,Namespace:kube-system,Attempt:0,}" Jul 7 09:24:32.366547 systemd-networkd[1516]: cilium_host: Link UP Jul 7 09:24:32.367275 systemd-networkd[1516]: cilium_net: Link UP Jul 7 09:24:32.367689 systemd-networkd[1516]: cilium_host: Gained carrier Jul 7 09:24:32.367967 systemd-networkd[1516]: cilium_net: Gained carrier Jul 7 09:24:32.411628 systemd-networkd[1516]: cilium_host: Gained IPv6LL Jul 7 09:24:32.546074 systemd-networkd[1516]: cilium_vxlan: Link UP Jul 7 09:24:32.546085 systemd-networkd[1516]: cilium_vxlan: Gained carrier Jul 7 09:24:32.932768 systemd-networkd[1516]: cilium_net: Gained IPv6LL Jul 7 09:24:33.197131 kernel: NET: Registered PF_ALG protocol family Jul 7 09:24:33.764606 systemd-networkd[1516]: cilium_vxlan: Gained IPv6LL Jul 7 09:24:34.254409 systemd-networkd[1516]: lxc_health: Link UP Jul 7 09:24:34.294014 systemd-networkd[1516]: lxc_health: Gained carrier Jul 7 09:24:34.618773 systemd-networkd[1516]: lxc72b6c619a099: Link UP Jul 7 09:24:34.647619 kernel: eth0: renamed from tmpadcef Jul 7 09:24:34.671417 systemd-networkd[1516]: lxcf0530b3dafe2: Link UP Jul 7 09:24:34.679133 kernel: eth0: renamed from tmpf00cf Jul 7 09:24:34.678299 systemd-networkd[1516]: lxc72b6c619a099: Gained carrier Jul 7 09:24:34.690227 systemd-networkd[1516]: lxcf0530b3dafe2: Gained carrier Jul 7 09:24:35.748323 systemd-networkd[1516]: lxc_health: Gained IPv6LL Jul 7 09:24:35.812383 systemd-networkd[1516]: lxc72b6c619a099: Gained IPv6LL Jul 7 09:24:35.931487 kubelet[2879]: I0707 
09:24:35.931140 2879 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bffsr" podStartSLOduration=14.433598657 podStartE2EDuration="24.930942245s" podCreationTimestamp="2025-07-07 09:24:11 +0000 UTC" firstStartedPulling="2025-07-07 09:24:14.052464558 +0000 UTC m=+6.462572949" lastFinishedPulling="2025-07-07 09:24:24.549808134 +0000 UTC m=+16.959916537" observedRunningTime="2025-07-07 09:24:30.064749497 +0000 UTC m=+22.474857932" watchObservedRunningTime="2025-07-07 09:24:35.930942245 +0000 UTC m=+28.341050660" Jul 7 09:24:36.324306 systemd-networkd[1516]: lxcf0530b3dafe2: Gained IPv6LL Jul 7 09:24:40.356727 containerd[1593]: time="2025-07-07T09:24:40.356652914Z" level=info msg="connecting to shim adcefc603cce24f133bfdad493edd6db697fff732835f2cd1020f665a383f2e0" address="unix:///run/containerd/s/7e8da9fd6482a3959efd97c754b7e0d295540626ea4e701ccd80ffb662c55163" namespace=k8s.io protocol=ttrpc version=3 Jul 7 09:24:40.358292 containerd[1593]: time="2025-07-07T09:24:40.358252163Z" level=info msg="connecting to shim f00cf71f8b4521aa03196f6862a9ca16e0624f13a621951049bbd293065e4ad8" address="unix:///run/containerd/s/a3a99ed9eeb86e4b175cc6b8cc4a9405fec691ff86df90ae9e150e6b4bc02407" namespace=k8s.io protocol=ttrpc version=3 Jul 7 09:24:40.425564 systemd[1]: Started cri-containerd-adcefc603cce24f133bfdad493edd6db697fff732835f2cd1020f665a383f2e0.scope - libcontainer container adcefc603cce24f133bfdad493edd6db697fff732835f2cd1020f665a383f2e0. Jul 7 09:24:40.454802 systemd[1]: Started cri-containerd-f00cf71f8b4521aa03196f6862a9ca16e0624f13a621951049bbd293065e4ad8.scope - libcontainer container f00cf71f8b4521aa03196f6862a9ca16e0624f13a621951049bbd293065e4ad8. Jul 7 09:24:40.565458 containerd[1593]: time="2025-07-07T09:24:40.565375803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2n7xz,Uid:3a1801f2-1cc8-418e-9d96-a71f95d46d35,Namespace:kube-system,Attempt:0,} returns sandbox id \"adcefc603cce24f133bfdad493edd6db697fff732835f2cd1020f665a383f2e0\"" Jul 7 09:24:40.580391 containerd[1593]: time="2025-07-07T09:24:40.579687320Z" level=info msg="CreateContainer within sandbox \"adcefc603cce24f133bfdad493edd6db697fff732835f2cd1020f665a383f2e0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 09:24:40.625238 containerd[1593]: time="2025-07-07T09:24:40.624586454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-klszh,Uid:238ca264-8cb5-437e-b42f-f8aa7019cf84,Namespace:kube-system,Attempt:0,} returns sandbox id \"f00cf71f8b4521aa03196f6862a9ca16e0624f13a621951049bbd293065e4ad8\"" Jul 7 09:24:40.629136 containerd[1593]: time="2025-07-07T09:24:40.628345744Z" level=info msg="CreateContainer within sandbox \"f00cf71f8b4521aa03196f6862a9ca16e0624f13a621951049bbd293065e4ad8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 09:24:40.629530 containerd[1593]: time="2025-07-07T09:24:40.629495604Z" level=info msg="Container 89e9409ea1441f375d8a065cce8fa596aa568d3eff964f9e210ef1d34c851bd9: CDI devices from CRI Config.CDIDevices: []" Jul 7 09:24:40.631461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2798055592.mount: Deactivated successfully. 
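The cilium_host, cilium_net, cilium_vxlan and per-pod lxc* devices that came up above are ordinary kernel netdevices created by the Cilium datapath, so their state can be confirmed straight from netlink. A sketch using the vishvananda/netlink Go package, run on the node itself:

    package main

    import (
        "fmt"
        "log"
        "strings"

        "github.com/vishvananda/netlink"
    )

    func main() {
        links, err := netlink.LinkList()
        if err != nil {
            log.Fatal(err)
        }
        for _, l := range links {
            attrs := l.Attrs()
            // Only report the Cilium-managed devices seen in the journal above.
            if strings.HasPrefix(attrs.Name, "cilium_") || strings.HasPrefix(attrs.Name, "lxc") {
                fmt.Printf("%-16s type=%-8s state=%s mtu=%d\n",
                    attrs.Name, l.Type(), attrs.OperState, attrs.MTU)
            }
        }
    }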
Jul 7 09:24:40.644439 containerd[1593]: time="2025-07-07T09:24:40.644381110Z" level=info msg="Container 34b06b1cd839803bc2ec6281f843f9215e2492dbd82e285fbaaf950d67ea2ebb: CDI devices from CRI Config.CDIDevices: []" Jul 7 09:24:40.649425 containerd[1593]: time="2025-07-07T09:24:40.649387852Z" level=info msg="CreateContainer within sandbox \"adcefc603cce24f133bfdad493edd6db697fff732835f2cd1020f665a383f2e0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"89e9409ea1441f375d8a065cce8fa596aa568d3eff964f9e210ef1d34c851bd9\"" Jul 7 09:24:40.650455 containerd[1593]: time="2025-07-07T09:24:40.650417202Z" level=info msg="StartContainer for \"89e9409ea1441f375d8a065cce8fa596aa568d3eff964f9e210ef1d34c851bd9\"" Jul 7 09:24:40.652323 containerd[1593]: time="2025-07-07T09:24:40.652284439Z" level=info msg="connecting to shim 89e9409ea1441f375d8a065cce8fa596aa568d3eff964f9e210ef1d34c851bd9" address="unix:///run/containerd/s/7e8da9fd6482a3959efd97c754b7e0d295540626ea4e701ccd80ffb662c55163" protocol=ttrpc version=3 Jul 7 09:24:40.660874 containerd[1593]: time="2025-07-07T09:24:40.660817614Z" level=info msg="CreateContainer within sandbox \"f00cf71f8b4521aa03196f6862a9ca16e0624f13a621951049bbd293065e4ad8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"34b06b1cd839803bc2ec6281f843f9215e2492dbd82e285fbaaf950d67ea2ebb\"" Jul 7 09:24:40.664286 containerd[1593]: time="2025-07-07T09:24:40.664257690Z" level=info msg="StartContainer for \"34b06b1cd839803bc2ec6281f843f9215e2492dbd82e285fbaaf950d67ea2ebb\"" Jul 7 09:24:40.668219 containerd[1593]: time="2025-07-07T09:24:40.668155650Z" level=info msg="connecting to shim 34b06b1cd839803bc2ec6281f843f9215e2492dbd82e285fbaaf950d67ea2ebb" address="unix:///run/containerd/s/a3a99ed9eeb86e4b175cc6b8cc4a9405fec691ff86df90ae9e150e6b4bc02407" protocol=ttrpc version=3 Jul 7 09:24:40.693428 systemd[1]: Started cri-containerd-89e9409ea1441f375d8a065cce8fa596aa568d3eff964f9e210ef1d34c851bd9.scope - libcontainer container 89e9409ea1441f375d8a065cce8fa596aa568d3eff964f9e210ef1d34c851bd9. Jul 7 09:24:40.711400 systemd[1]: Started cri-containerd-34b06b1cd839803bc2ec6281f843f9215e2492dbd82e285fbaaf950d67ea2ebb.scope - libcontainer container 34b06b1cd839803bc2ec6281f843f9215e2492dbd82e285fbaaf950d67ea2ebb. 
Jul 7 09:24:40.840375 containerd[1593]: time="2025-07-07T09:24:40.840281800Z" level=info msg="StartContainer for \"89e9409ea1441f375d8a065cce8fa596aa568d3eff964f9e210ef1d34c851bd9\" returns successfully" Jul 7 09:24:40.841812 containerd[1593]: time="2025-07-07T09:24:40.841748375Z" level=info msg="StartContainer for \"34b06b1cd839803bc2ec6281f843f9215e2492dbd82e285fbaaf950d67ea2ebb\" returns successfully" Jul 7 09:24:41.164286 kubelet[2879]: I0707 09:24:41.162901 2879 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-klszh" podStartSLOduration=29.16288248 podStartE2EDuration="29.16288248s" podCreationTimestamp="2025-07-07 09:24:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 09:24:41.159763985 +0000 UTC m=+33.569872400" watchObservedRunningTime="2025-07-07 09:24:41.16288248 +0000 UTC m=+33.572990886" Jul 7 09:24:41.187456 kubelet[2879]: I0707 09:24:41.187374 2879 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-2n7xz" podStartSLOduration=29.187346841 podStartE2EDuration="29.187346841s" podCreationTimestamp="2025-07-07 09:24:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 09:24:41.186262063 +0000 UTC m=+33.596370492" watchObservedRunningTime="2025-07-07 09:24:41.187346841 +0000 UTC m=+33.597455247" Jul 7 09:25:24.737888 systemd[1]: Started sshd@9-10.243.72.42:22-139.178.89.65:60330.service - OpenSSH per-connection server daemon (139.178.89.65:60330). Jul 7 09:25:25.704910 sshd[4199]: Accepted publickey for core from 139.178.89.65 port 60330 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:25:25.707827 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:25:25.727252 systemd-logind[1566]: New session 12 of user core. Jul 7 09:25:25.737315 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 09:25:26.965854 sshd[4201]: Connection closed by 139.178.89.65 port 60330 Jul 7 09:25:26.966817 sshd-session[4199]: pam_unix(sshd:session): session closed for user core Jul 7 09:25:26.983513 systemd[1]: sshd@9-10.243.72.42:22-139.178.89.65:60330.service: Deactivated successfully. Jul 7 09:25:26.986582 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 09:25:26.988539 systemd-logind[1566]: Session 12 logged out. Waiting for processes to exit. Jul 7 09:25:26.991252 systemd-logind[1566]: Removed session 12. Jul 7 09:25:32.124857 systemd[1]: Started sshd@10-10.243.72.42:22-139.178.89.65:40120.service - OpenSSH per-connection server daemon (139.178.89.65:40120). Jul 7 09:25:33.034232 sshd[4215]: Accepted publickey for core from 139.178.89.65 port 40120 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:25:33.036341 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:25:33.051186 systemd-logind[1566]: New session 13 of user core. Jul 7 09:25:33.056331 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 09:25:33.768424 sshd[4217]: Connection closed by 139.178.89.65 port 40120 Jul 7 09:25:33.769588 sshd-session[4215]: pam_unix(sshd:session): session closed for user core Jul 7 09:25:33.775872 systemd[1]: sshd@10-10.243.72.42:22-139.178.89.65:40120.service: Deactivated successfully. 
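With both CoreDNS containers reporting a successful start above, the coredns-7c65d6cfc9-* pods should now be listed as Running by the API server. A short client-go sketch that checks this by the conventional k8s-app=kube-dns label (the kubeconfig path below is a placeholder; point it at whatever credentials the cluster provides):

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder path; use the cluster's actual admin kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
            metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
        if err != nil {
            log.Fatal(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s phase=%s node=%s\n", p.Name, p.Status.Phase, p.Spec.NodeName)
        }
    }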
Jul 7 09:25:33.779432 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 09:25:33.781246 systemd-logind[1566]: Session 13 logged out. Waiting for processes to exit. Jul 7 09:25:33.784285 systemd-logind[1566]: Removed session 13. Jul 7 09:25:38.929720 systemd[1]: Started sshd@11-10.243.72.42:22-139.178.89.65:40130.service - OpenSSH per-connection server daemon (139.178.89.65:40130). Jul 7 09:25:39.883244 sshd[4230]: Accepted publickey for core from 139.178.89.65 port 40130 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:25:39.884997 sshd-session[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:25:39.893844 systemd-logind[1566]: New session 14 of user core. Jul 7 09:25:39.897375 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 09:25:40.608083 sshd[4232]: Connection closed by 139.178.89.65 port 40130 Jul 7 09:25:40.609382 sshd-session[4230]: pam_unix(sshd:session): session closed for user core Jul 7 09:25:40.615600 systemd[1]: sshd@11-10.243.72.42:22-139.178.89.65:40130.service: Deactivated successfully. Jul 7 09:25:40.619019 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 09:25:40.621332 systemd-logind[1566]: Session 14 logged out. Waiting for processes to exit. Jul 7 09:25:40.624039 systemd-logind[1566]: Removed session 14. Jul 7 09:25:45.764170 systemd[1]: Started sshd@12-10.243.72.42:22-139.178.89.65:49368.service - OpenSSH per-connection server daemon (139.178.89.65:49368). Jul 7 09:25:46.674649 sshd[4249]: Accepted publickey for core from 139.178.89.65 port 49368 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:25:46.676772 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:25:46.685497 systemd-logind[1566]: New session 15 of user core. Jul 7 09:25:46.695469 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 09:25:47.377736 sshd[4251]: Connection closed by 139.178.89.65 port 49368 Jul 7 09:25:47.378889 sshd-session[4249]: pam_unix(sshd:session): session closed for user core Jul 7 09:25:47.385286 systemd[1]: sshd@12-10.243.72.42:22-139.178.89.65:49368.service: Deactivated successfully. Jul 7 09:25:47.388181 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 09:25:47.390682 systemd-logind[1566]: Session 15 logged out. Waiting for processes to exit. Jul 7 09:25:47.392564 systemd-logind[1566]: Removed session 15. Jul 7 09:25:47.537167 systemd[1]: Started sshd@13-10.243.72.42:22-139.178.89.65:49374.service - OpenSSH per-connection server daemon (139.178.89.65:49374). Jul 7 09:25:48.439565 sshd[4264]: Accepted publickey for core from 139.178.89.65 port 49374 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:25:48.441870 sshd-session[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:25:48.449591 systemd-logind[1566]: New session 16 of user core. Jul 7 09:25:48.458408 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 09:25:49.198175 sshd[4266]: Connection closed by 139.178.89.65 port 49374 Jul 7 09:25:49.198904 sshd-session[4264]: pam_unix(sshd:session): session closed for user core Jul 7 09:25:49.206050 systemd[1]: sshd@13-10.243.72.42:22-139.178.89.65:49374.service: Deactivated successfully. Jul 7 09:25:49.210266 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 09:25:49.211830 systemd-logind[1566]: Session 16 logged out. Waiting for processes to exit. 
Jul 7 09:25:49.214071 systemd-logind[1566]: Removed session 16. Jul 7 09:25:49.359253 systemd[1]: Started sshd@14-10.243.72.42:22-139.178.89.65:49378.service - OpenSSH per-connection server daemon (139.178.89.65:49378). Jul 7 09:25:50.278349 sshd[4275]: Accepted publickey for core from 139.178.89.65 port 49378 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:25:50.280547 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:25:50.290470 systemd-logind[1566]: New session 17 of user core. Jul 7 09:25:50.298381 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 09:25:50.988901 sshd[4277]: Connection closed by 139.178.89.65 port 49378 Jul 7 09:25:50.990001 sshd-session[4275]: pam_unix(sshd:session): session closed for user core Jul 7 09:25:50.995938 systemd-logind[1566]: Session 17 logged out. Waiting for processes to exit. Jul 7 09:25:50.996564 systemd[1]: sshd@14-10.243.72.42:22-139.178.89.65:49378.service: Deactivated successfully. Jul 7 09:25:50.999812 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 09:25:51.002654 systemd-logind[1566]: Removed session 17. Jul 7 09:25:56.146902 systemd[1]: Started sshd@15-10.243.72.42:22-139.178.89.65:53148.service - OpenSSH per-connection server daemon (139.178.89.65:53148). Jul 7 09:25:57.053420 sshd[4289]: Accepted publickey for core from 139.178.89.65 port 53148 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:25:57.056093 sshd-session[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:25:57.064054 systemd-logind[1566]: New session 18 of user core. Jul 7 09:25:57.074449 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 7 09:25:57.758266 sshd[4291]: Connection closed by 139.178.89.65 port 53148 Jul 7 09:25:57.759269 sshd-session[4289]: pam_unix(sshd:session): session closed for user core Jul 7 09:25:57.764886 systemd[1]: sshd@15-10.243.72.42:22-139.178.89.65:53148.service: Deactivated successfully. Jul 7 09:25:57.767881 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 09:25:57.769571 systemd-logind[1566]: Session 18 logged out. Waiting for processes to exit. Jul 7 09:25:57.772330 systemd-logind[1566]: Removed session 18. Jul 7 09:26:02.916780 systemd[1]: Started sshd@16-10.243.72.42:22-139.178.89.65:43610.service - OpenSSH per-connection server daemon (139.178.89.65:43610). Jul 7 09:26:03.830911 sshd[4302]: Accepted publickey for core from 139.178.89.65 port 43610 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:26:03.833729 sshd-session[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:26:03.842537 systemd-logind[1566]: New session 19 of user core. Jul 7 09:26:03.847290 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 09:26:04.533281 sshd[4304]: Connection closed by 139.178.89.65 port 43610 Jul 7 09:26:04.534247 sshd-session[4302]: pam_unix(sshd:session): session closed for user core Jul 7 09:26:04.539235 systemd[1]: sshd@16-10.243.72.42:22-139.178.89.65:43610.service: Deactivated successfully. Jul 7 09:26:04.541373 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 09:26:04.542698 systemd-logind[1566]: Session 19 logged out. Waiting for processes to exit. Jul 7 09:26:04.544868 systemd-logind[1566]: Removed session 19. 
Jul 7 09:26:04.686797 systemd[1]: Started sshd@17-10.243.72.42:22-139.178.89.65:43624.service - OpenSSH per-connection server daemon (139.178.89.65:43624). Jul 7 09:26:05.600886 sshd[4316]: Accepted publickey for core from 139.178.89.65 port 43624 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:26:05.602224 sshd-session[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:26:05.610750 systemd-logind[1566]: New session 20 of user core. Jul 7 09:26:05.618638 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 7 09:26:05.649027 update_engine[1568]: I20250707 09:26:05.648272 1568 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 7 09:26:05.649027 update_engine[1568]: I20250707 09:26:05.648389 1568 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 7 09:26:05.652637 update_engine[1568]: I20250707 09:26:05.652580 1568 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 7 09:26:05.653849 update_engine[1568]: I20250707 09:26:05.653790 1568 omaha_request_params.cc:62] Current group set to alpha Jul 7 09:26:05.654141 update_engine[1568]: I20250707 09:26:05.654029 1568 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 7 09:26:05.654141 update_engine[1568]: I20250707 09:26:05.654054 1568 update_attempter.cc:643] Scheduling an action processor start. Jul 7 09:26:05.654268 update_engine[1568]: I20250707 09:26:05.654168 1568 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 7 09:26:05.654313 update_engine[1568]: I20250707 09:26:05.654263 1568 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 7 09:26:05.654897 update_engine[1568]: I20250707 09:26:05.654361 1568 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 7 09:26:05.654897 update_engine[1568]: I20250707 09:26:05.654384 1568 omaha_request_action.cc:272] Request: Jul 7 09:26:05.654897 update_engine[1568]: Jul 7 09:26:05.654897 update_engine[1568]: Jul 7 09:26:05.654897 update_engine[1568]: Jul 7 09:26:05.654897 update_engine[1568]: Jul 7 09:26:05.654897 update_engine[1568]: Jul 7 09:26:05.654897 update_engine[1568]: Jul 7 09:26:05.654897 update_engine[1568]: Jul 7 09:26:05.654897 update_engine[1568]: Jul 7 09:26:05.654897 update_engine[1568]: I20250707 09:26:05.654428 1568 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 09:26:05.659840 update_engine[1568]: I20250707 09:26:05.659632 1568 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 09:26:05.661299 update_engine[1568]: I20250707 09:26:05.660116 1568 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 7 09:26:05.669723 update_engine[1568]: E20250707 09:26:05.669533 1568 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 09:26:05.669723 update_engine[1568]: I20250707 09:26:05.669658 1568 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 7 09:26:05.676236 locksmithd[1612]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 7 09:26:06.735192 sshd[4318]: Connection closed by 139.178.89.65 port 43624 Jul 7 09:26:06.734779 sshd-session[4316]: pam_unix(sshd:session): session closed for user core Jul 7 09:26:06.742345 systemd-logind[1566]: Session 20 logged out. Waiting for processes to exit. 
Jul 7 09:26:06.743500 systemd[1]: sshd@17-10.243.72.42:22-139.178.89.65:43624.service: Deactivated successfully. Jul 7 09:26:06.747609 systemd[1]: session-20.scope: Deactivated successfully. Jul 7 09:26:06.750188 systemd-logind[1566]: Removed session 20. Jul 7 09:26:06.894409 systemd[1]: Started sshd@18-10.243.72.42:22-139.178.89.65:43630.service - OpenSSH per-connection server daemon (139.178.89.65:43630). Jul 7 09:26:07.828336 sshd[4328]: Accepted publickey for core from 139.178.89.65 port 43630 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:26:07.831916 sshd-session[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:26:07.840558 systemd-logind[1566]: New session 21 of user core. Jul 7 09:26:07.850668 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 7 09:26:10.641060 sshd[4332]: Connection closed by 139.178.89.65 port 43630 Jul 7 09:26:10.642724 sshd-session[4328]: pam_unix(sshd:session): session closed for user core Jul 7 09:26:10.651049 systemd[1]: sshd@18-10.243.72.42:22-139.178.89.65:43630.service: Deactivated successfully. Jul 7 09:26:10.653836 systemd[1]: session-21.scope: Deactivated successfully. Jul 7 09:26:10.654196 systemd[1]: session-21.scope: Consumed 709ms CPU time, 68.5M memory peak. Jul 7 09:26:10.655622 systemd-logind[1566]: Session 21 logged out. Waiting for processes to exit. Jul 7 09:26:10.657971 systemd-logind[1566]: Removed session 21. Jul 7 09:26:10.801562 systemd[1]: Started sshd@19-10.243.72.42:22-139.178.89.65:56928.service - OpenSSH per-connection server daemon (139.178.89.65:56928). Jul 7 09:26:11.738907 sshd[4350]: Accepted publickey for core from 139.178.89.65 port 56928 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:26:11.743811 sshd-session[4350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:26:11.752231 systemd-logind[1566]: New session 22 of user core. Jul 7 09:26:11.762443 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 7 09:26:12.640919 sshd[4352]: Connection closed by 139.178.89.65 port 56928 Jul 7 09:26:12.641817 sshd-session[4350]: pam_unix(sshd:session): session closed for user core Jul 7 09:26:12.647832 systemd-logind[1566]: Session 22 logged out. Waiting for processes to exit. Jul 7 09:26:12.648846 systemd[1]: sshd@19-10.243.72.42:22-139.178.89.65:56928.service: Deactivated successfully. Jul 7 09:26:12.651986 systemd[1]: session-22.scope: Deactivated successfully. Jul 7 09:26:12.656518 systemd-logind[1566]: Removed session 22. Jul 7 09:26:12.801270 systemd[1]: Started sshd@20-10.243.72.42:22-139.178.89.65:56940.service - OpenSSH per-connection server daemon (139.178.89.65:56940). Jul 7 09:26:13.713240 sshd[4362]: Accepted publickey for core from 139.178.89.65 port 56940 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:26:13.714500 sshd-session[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:26:13.722941 systemd-logind[1566]: New session 23 of user core. Jul 7 09:26:13.728000 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 7 09:26:14.428636 sshd[4364]: Connection closed by 139.178.89.65 port 56940 Jul 7 09:26:14.428408 sshd-session[4362]: pam_unix(sshd:session): session closed for user core Jul 7 09:26:14.434106 systemd[1]: sshd@20-10.243.72.42:22-139.178.89.65:56940.service: Deactivated successfully. Jul 7 09:26:14.438090 systemd[1]: session-23.scope: Deactivated successfully. 
Jul 7 09:26:14.439919 systemd-logind[1566]: Session 23 logged out. Waiting for processes to exit. Jul 7 09:26:14.443660 systemd-logind[1566]: Removed session 23. Jul 7 09:26:15.600475 update_engine[1568]: I20250707 09:26:15.600346 1568 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 09:26:15.601160 update_engine[1568]: I20250707 09:26:15.600745 1568 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 09:26:15.601280 update_engine[1568]: I20250707 09:26:15.601238 1568 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 7 09:26:15.601944 update_engine[1568]: E20250707 09:26:15.601892 1568 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 09:26:15.602017 update_engine[1568]: I20250707 09:26:15.601965 1568 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 7 09:26:19.587437 systemd[1]: Started sshd@21-10.243.72.42:22-139.178.89.65:56944.service - OpenSSH per-connection server daemon (139.178.89.65:56944). Jul 7 09:26:20.491802 sshd[4382]: Accepted publickey for core from 139.178.89.65 port 56944 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:26:20.494309 sshd-session[4382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:26:20.503155 systemd-logind[1566]: New session 24 of user core. Jul 7 09:26:20.508335 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 7 09:26:21.193473 sshd[4384]: Connection closed by 139.178.89.65 port 56944 Jul 7 09:26:21.194578 sshd-session[4382]: pam_unix(sshd:session): session closed for user core Jul 7 09:26:21.200648 systemd[1]: sshd@21-10.243.72.42:22-139.178.89.65:56944.service: Deactivated successfully. Jul 7 09:26:21.204524 systemd[1]: session-24.scope: Deactivated successfully. Jul 7 09:26:21.206288 systemd-logind[1566]: Session 24 logged out. Waiting for processes to exit. Jul 7 09:26:21.209613 systemd-logind[1566]: Removed session 24. Jul 7 09:26:25.604167 update_engine[1568]: I20250707 09:26:25.603605 1568 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 09:26:25.604167 update_engine[1568]: I20250707 09:26:25.603991 1568 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 09:26:25.604846 update_engine[1568]: I20250707 09:26:25.604398 1568 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 7 09:26:25.604970 update_engine[1568]: E20250707 09:26:25.604921 1568 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 09:26:25.605026 update_engine[1568]: I20250707 09:26:25.604992 1568 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 7 09:26:26.351079 systemd[1]: Started sshd@22-10.243.72.42:22-139.178.89.65:45514.service - OpenSSH per-connection server daemon (139.178.89.65:45514). Jul 7 09:26:27.251321 sshd[4396]: Accepted publickey for core from 139.178.89.65 port 45514 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:26:27.253368 sshd-session[4396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:26:27.260722 systemd-logind[1566]: New session 25 of user core. Jul 7 09:26:27.270296 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 7 09:26:27.959126 sshd[4398]: Connection closed by 139.178.89.65 port 45514 Jul 7 09:26:27.958455 sshd-session[4396]: pam_unix(sshd:session): session closed for user core Jul 7 09:26:27.964533 systemd-logind[1566]: Session 25 logged out. Waiting for processes to exit. 
Jul 7 09:26:27.965643 systemd[1]: sshd@22-10.243.72.42:22-139.178.89.65:45514.service: Deactivated successfully. Jul 7 09:26:27.968477 systemd[1]: session-25.scope: Deactivated successfully. Jul 7 09:26:27.971167 systemd-logind[1566]: Removed session 25. Jul 7 09:26:33.113883 systemd[1]: Started sshd@23-10.243.72.42:22-139.178.89.65:53970.service - OpenSSH per-connection server daemon (139.178.89.65:53970). Jul 7 09:26:34.018851 sshd[4411]: Accepted publickey for core from 139.178.89.65 port 53970 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:26:34.020839 sshd-session[4411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:26:34.028143 systemd-logind[1566]: New session 26 of user core. Jul 7 09:26:34.037387 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 7 09:26:34.747548 sshd[4413]: Connection closed by 139.178.89.65 port 53970 Jul 7 09:26:34.746461 sshd-session[4411]: pam_unix(sshd:session): session closed for user core Jul 7 09:26:34.751402 systemd[1]: sshd@23-10.243.72.42:22-139.178.89.65:53970.service: Deactivated successfully. Jul 7 09:26:34.754418 systemd[1]: session-26.scope: Deactivated successfully. Jul 7 09:26:34.755940 systemd-logind[1566]: Session 26 logged out. Waiting for processes to exit. Jul 7 09:26:34.758345 systemd-logind[1566]: Removed session 26. Jul 7 09:26:34.902660 systemd[1]: Started sshd@24-10.243.72.42:22-139.178.89.65:53982.service - OpenSSH per-connection server daemon (139.178.89.65:53982). Jul 7 09:26:35.599777 update_engine[1568]: I20250707 09:26:35.599637 1568 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 09:26:35.600426 update_engine[1568]: I20250707 09:26:35.600039 1568 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 09:26:35.600576 update_engine[1568]: I20250707 09:26:35.600520 1568 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 7 09:26:35.601045 update_engine[1568]: E20250707 09:26:35.601001 1568 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 09:26:35.601145 update_engine[1568]: I20250707 09:26:35.601064 1568 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 7 09:26:35.601145 update_engine[1568]: I20250707 09:26:35.601081 1568 omaha_request_action.cc:617] Omaha request response: Jul 7 09:26:35.601450 update_engine[1568]: E20250707 09:26:35.601368 1568 omaha_request_action.cc:636] Omaha request network transfer failed. Jul 7 09:26:35.601589 update_engine[1568]: I20250707 09:26:35.601504 1568 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 7 09:26:35.601589 update_engine[1568]: I20250707 09:26:35.601521 1568 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 7 09:26:35.601589 update_engine[1568]: I20250707 09:26:35.601531 1568 update_attempter.cc:306] Processing Done. Jul 7 09:26:35.601589 update_engine[1568]: E20250707 09:26:35.601560 1568 update_attempter.cc:619] Update failed. 
Jul 7 09:26:35.603400 update_engine[1568]: I20250707 09:26:35.603339 1568 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 7 09:26:35.603400 update_engine[1568]: I20250707 09:26:35.603374 1568 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 7 09:26:35.603400 update_engine[1568]: I20250707 09:26:35.603387 1568 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jul 7 09:26:35.603839 update_engine[1568]: I20250707 09:26:35.603788 1568 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 7 09:26:35.603897 update_engine[1568]: I20250707 09:26:35.603847 1568 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 7 09:26:35.603897 update_engine[1568]: I20250707 09:26:35.603861 1568 omaha_request_action.cc:272] Request: Jul 7 09:26:35.603897 update_engine[1568]: Jul 7 09:26:35.603897 update_engine[1568]: Jul 7 09:26:35.603897 update_engine[1568]: Jul 7 09:26:35.603897 update_engine[1568]: Jul 7 09:26:35.603897 update_engine[1568]: Jul 7 09:26:35.603897 update_engine[1568]: Jul 7 09:26:35.603897 update_engine[1568]: I20250707 09:26:35.603871 1568 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 7 09:26:35.604378 locksmithd[1612]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 7 09:26:35.604801 update_engine[1568]: I20250707 09:26:35.604523 1568 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 7 09:26:35.604990 update_engine[1568]: I20250707 09:26:35.604951 1568 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 7 09:26:35.605417 update_engine[1568]: E20250707 09:26:35.605369 1568 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 7 09:26:35.605499 update_engine[1568]: I20250707 09:26:35.605430 1568 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 7 09:26:35.605499 update_engine[1568]: I20250707 09:26:35.605446 1568 omaha_request_action.cc:617] Omaha request response: Jul 7 09:26:35.605499 update_engine[1568]: I20250707 09:26:35.605456 1568 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 7 09:26:35.605499 update_engine[1568]: I20250707 09:26:35.605466 1568 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 7 09:26:35.605499 update_engine[1568]: I20250707 09:26:35.605475 1568 update_attempter.cc:306] Processing Done. Jul 7 09:26:35.605662 update_engine[1568]: I20250707 09:26:35.605485 1568 update_attempter.cc:310] Error event sent. Jul 7 09:26:35.605662 update_engine[1568]: I20250707 09:26:35.605563 1568 update_check_scheduler.cc:74] Next update check in 47m5s Jul 7 09:26:35.605987 locksmithd[1612]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 7 09:26:35.810937 sshd[4425]: Accepted publickey for core from 139.178.89.65 port 53982 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:26:35.813187 sshd-session[4425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:26:35.823214 systemd-logind[1566]: New session 27 of user core. Jul 7 09:26:35.826842 systemd[1]: Started session-27.scope - Session 27 of User core. 
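The update_engine exchange above is an Omaha update check aimed at a host literally named "disabled", so every transfer fails DNS resolution, is retried a few times on a short timer, and the attempt is finally abandoned with the next check scheduled 47m5s out. The general shape, a bounded number of attempts with a pause in between, looks roughly like the following Go sketch (an illustration of the retry pattern only, not update_engine's actual implementation; the URL and limits are placeholders):

    package main

    import (
        "fmt"
        "log"
        "net/http"
        "time"
    )

    // fetchWithRetries performs up to maxAttempts GETs against url, sleeping
    // `pause` after each failure, and returns the first successful response.
    func fetchWithRetries(url string, maxAttempts int, pause time.Duration) (*http.Response, error) {
        client := &http.Client{Timeout: 10 * time.Second}
        var lastErr error
        for attempt := 1; attempt <= maxAttempts; attempt++ {
            resp, err := client.Get(url)
            if err == nil {
                return resp, nil
            }
            lastErr = err
            log.Printf("attempt %d/%d failed: %v", attempt, maxAttempts, err)
            time.Sleep(pause)
        }
        return nil, fmt.Errorf("all %d attempts failed: %w", maxAttempts, lastErr)
    }

    func main() {
        // Placeholder endpoint; like the "disabled" host above, it is not expected to resolve.
        resp, err := fetchWithRetries("https://updates.example.invalid/", 3, time.Second)
        if err != nil {
            log.Printf("update check gave up: %v", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("update server answered:", resp.Status)
    }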
Jul 7 09:26:38.017296 containerd[1593]: time="2025-07-07T09:26:38.016256289Z" level=info msg="StopContainer for \"fa21b138973dcf7fce490d3037d17a93b8e729cf35422fa1592e5cc39c5167f9\" with timeout 30 (s)" Jul 7 09:26:38.027028 containerd[1593]: time="2025-07-07T09:26:38.026882555Z" level=info msg="Stop container \"fa21b138973dcf7fce490d3037d17a93b8e729cf35422fa1592e5cc39c5167f9\" with signal terminated" Jul 7 09:26:38.069478 systemd[1]: cri-containerd-fa21b138973dcf7fce490d3037d17a93b8e729cf35422fa1592e5cc39c5167f9.scope: Deactivated successfully. Jul 7 09:26:38.075691 containerd[1593]: time="2025-07-07T09:26:38.075315114Z" level=info msg="received exit event container_id:\"fa21b138973dcf7fce490d3037d17a93b8e729cf35422fa1592e5cc39c5167f9\" id:\"fa21b138973dcf7fce490d3037d17a93b8e729cf35422fa1592e5cc39c5167f9\" pid:3442 exited_at:{seconds:1751880398 nanos:74759826}" Jul 7 09:26:38.077125 containerd[1593]: time="2025-07-07T09:26:38.077067594Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa21b138973dcf7fce490d3037d17a93b8e729cf35422fa1592e5cc39c5167f9\" id:\"fa21b138973dcf7fce490d3037d17a93b8e729cf35422fa1592e5cc39c5167f9\" pid:3442 exited_at:{seconds:1751880398 nanos:74759826}" Jul 7 09:26:38.111314 containerd[1593]: time="2025-07-07T09:26:38.111165222Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 09:26:38.124201 containerd[1593]: time="2025-07-07T09:26:38.124139336Z" level=info msg="TaskExit event in podsandbox handler container_id:\"adab9f175d1144e19817bb2e65e9f5a241692b9e330749e6137ecbd25a824ec0\" id:\"ad16b96a8242ead4c778de4d69ab5510e6444d5db341e01f84c4e6d1028dc6fa\" pid:4454 exited_at:{seconds:1751880398 nanos:122249921}" Jul 7 09:26:38.127810 containerd[1593]: time="2025-07-07T09:26:38.127775330Z" level=info msg="StopContainer for \"adab9f175d1144e19817bb2e65e9f5a241692b9e330749e6137ecbd25a824ec0\" with timeout 2 (s)" Jul 7 09:26:38.128674 containerd[1593]: time="2025-07-07T09:26:38.128593569Z" level=info msg="Stop container \"adab9f175d1144e19817bb2e65e9f5a241692b9e330749e6137ecbd25a824ec0\" with signal terminated" Jul 7 09:26:38.137007 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa21b138973dcf7fce490d3037d17a93b8e729cf35422fa1592e5cc39c5167f9-rootfs.mount: Deactivated successfully. Jul 7 09:26:38.145407 systemd-networkd[1516]: lxc_health: Link DOWN Jul 7 09:26:38.145418 systemd-networkd[1516]: lxc_health: Lost carrier Jul 7 09:26:38.165122 systemd[1]: cri-containerd-adab9f175d1144e19817bb2e65e9f5a241692b9e330749e6137ecbd25a824ec0.scope: Deactivated successfully. Jul 7 09:26:38.165589 systemd[1]: cri-containerd-adab9f175d1144e19817bb2e65e9f5a241692b9e330749e6137ecbd25a824ec0.scope: Consumed 10.093s CPU time, 221M memory peak, 99.6M read from disk, 13.3M written to disk. 
Jul 7 09:26:38.169210 containerd[1593]: time="2025-07-07T09:26:38.169037166Z" level=info msg="received exit event container_id:\"adab9f175d1144e19817bb2e65e9f5a241692b9e330749e6137ecbd25a824ec0\" id:\"adab9f175d1144e19817bb2e65e9f5a241692b9e330749e6137ecbd25a824ec0\" pid:3511 exited_at:{seconds:1751880398 nanos:168240279}" Jul 7 09:26:38.169557 containerd[1593]: time="2025-07-07T09:26:38.169475926Z" level=info msg="TaskExit event in podsandbox handler container_id:\"adab9f175d1144e19817bb2e65e9f5a241692b9e330749e6137ecbd25a824ec0\" id:\"adab9f175d1144e19817bb2e65e9f5a241692b9e330749e6137ecbd25a824ec0\" pid:3511 exited_at:{seconds:1751880398 nanos:168240279}" Jul 7 09:26:38.171313 containerd[1593]: time="2025-07-07T09:26:38.171257855Z" level=info msg="StopContainer for \"fa21b138973dcf7fce490d3037d17a93b8e729cf35422fa1592e5cc39c5167f9\" returns successfully" Jul 7 09:26:38.173115 containerd[1593]: time="2025-07-07T09:26:38.173045412Z" level=info msg="StopPodSandbox for \"498b46b4c0f25719e17c8eddc8caa80d3e453c46ceb0ff5e983a33503d651593\"" Jul 7 09:26:38.180284 containerd[1593]: time="2025-07-07T09:26:38.180178308Z" level=info msg="Container to stop \"fa21b138973dcf7fce490d3037d17a93b8e729cf35422fa1592e5cc39c5167f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 09:26:38.194666 systemd[1]: cri-containerd-498b46b4c0f25719e17c8eddc8caa80d3e453c46ceb0ff5e983a33503d651593.scope: Deactivated successfully. Jul 7 09:26:38.198703 containerd[1593]: time="2025-07-07T09:26:38.198622571Z" level=info msg="TaskExit event in podsandbox handler container_id:\"498b46b4c0f25719e17c8eddc8caa80d3e453c46ceb0ff5e983a33503d651593\" id:\"498b46b4c0f25719e17c8eddc8caa80d3e453c46ceb0ff5e983a33503d651593\" pid:3097 exit_status:137 exited_at:{seconds:1751880398 nanos:197831564}" Jul 7 09:26:38.213718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-adab9f175d1144e19817bb2e65e9f5a241692b9e330749e6137ecbd25a824ec0-rootfs.mount: Deactivated successfully. 
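The StopContainer flow above follows the usual sequence: containerd asks the shim to deliver SIGTERM, waits for the task to exit within the requested grace period (30s for the operator, 2s for the agent), and only then tears the task and container down. Driving the same steps directly with the containerd Go client looks roughly like this (the container ID is a placeholder, and going around the kubelet this way is only appropriate for experiments):

    package main

    import (
        "context"
        "log"
        "syscall"
        "time"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Placeholder; substitute an ID from `ctr -n k8s.io containers list`.
        container, err := client.LoadContainer(ctx, "CONTAINER_ID")
        if err != nil {
            log.Fatal(err)
        }
        task, err := container.Task(ctx, nil)
        if err != nil {
            log.Fatal(err)
        }
        exitCh, err := task.Wait(ctx)
        if err != nil {
            log.Fatal(err)
        }

        // Ask the process to terminate, as StopContainer does first.
        if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
            log.Fatal(err)
        }

        const gracePeriod = 30 * time.Second // e.g. the 30s timeout requested above
        select {
        case st := <-exitCh:
            code, _, _ := st.Result()
            log.Printf("task exited with code %d", code)
        case <-time.After(gracePeriod):
            // Escalate to SIGKILL if the task ignores the grace period.
            if err := task.Kill(ctx, syscall.SIGKILL); err != nil {
                log.Fatal(err)
            }
            <-exitCh
        }

        if _, err := task.Delete(ctx); err != nil {
            log.Fatal(err)
        }
        if err := container.Delete(ctx, containerd.WithSnapshotCleanup); err != nil {
            log.Fatal(err)
        }
    }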
Jul 7 09:26:38.233566 containerd[1593]: time="2025-07-07T09:26:38.233407429Z" level=info msg="StopContainer for \"adab9f175d1144e19817bb2e65e9f5a241692b9e330749e6137ecbd25a824ec0\" returns successfully" Jul 7 09:26:38.235666 containerd[1593]: time="2025-07-07T09:26:38.235535657Z" level=info msg="StopPodSandbox for \"4859c35264268a46d897e55004ad897e7e7e70432bb31f209bb967d3f399ce37\"" Jul 7 09:26:38.235926 containerd[1593]: time="2025-07-07T09:26:38.235858431Z" level=info msg="Container to stop \"17416d3eef7a01236f35858e39d9c4c835d9de756458e69b3d39888ce610d2d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 09:26:38.236044 containerd[1593]: time="2025-07-07T09:26:38.236020100Z" level=info msg="Container to stop \"b514e85c3a34f17638da873f003eb24c365d99216febb80b425590c3bf07b372\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 09:26:38.236255 containerd[1593]: time="2025-07-07T09:26:38.236221408Z" level=info msg="Container to stop \"f35f0165655f4d80a769911b662923db7c83458ce0cf6cc52db8a4719878b6d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 09:26:38.236496 containerd[1593]: time="2025-07-07T09:26:38.236423354Z" level=info msg="Container to stop \"bdd76bd407379764696e533aa5583b51e786e82ea515c9ccdee87a620d7c8383\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 09:26:38.236692 containerd[1593]: time="2025-07-07T09:26:38.236662744Z" level=info msg="Container to stop \"adab9f175d1144e19817bb2e65e9f5a241692b9e330749e6137ecbd25a824ec0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 09:26:38.252986 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-498b46b4c0f25719e17c8eddc8caa80d3e453c46ceb0ff5e983a33503d651593-rootfs.mount: Deactivated successfully. Jul 7 09:26:38.257077 systemd[1]: cri-containerd-4859c35264268a46d897e55004ad897e7e7e70432bb31f209bb967d3f399ce37.scope: Deactivated successfully. Jul 7 09:26:38.259715 containerd[1593]: time="2025-07-07T09:26:38.259655013Z" level=info msg="shim disconnected" id=498b46b4c0f25719e17c8eddc8caa80d3e453c46ceb0ff5e983a33503d651593 namespace=k8s.io Jul 7 09:26:38.259715 containerd[1593]: time="2025-07-07T09:26:38.259701489Z" level=warning msg="cleaning up after shim disconnected" id=498b46b4c0f25719e17c8eddc8caa80d3e453c46ceb0ff5e983a33503d651593 namespace=k8s.io Jul 7 09:26:38.260351 containerd[1593]: time="2025-07-07T09:26:38.259714613Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 09:26:38.311320 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4859c35264268a46d897e55004ad897e7e7e70432bb31f209bb967d3f399ce37-rootfs.mount: Deactivated successfully. 
Jul 7 09:26:38.316123 containerd[1593]: time="2025-07-07T09:26:38.315967660Z" level=info msg="shim disconnected" id=4859c35264268a46d897e55004ad897e7e7e70432bb31f209bb967d3f399ce37 namespace=k8s.io Jul 7 09:26:38.316123 containerd[1593]: time="2025-07-07T09:26:38.316014718Z" level=warning msg="cleaning up after shim disconnected" id=4859c35264268a46d897e55004ad897e7e7e70432bb31f209bb967d3f399ce37 namespace=k8s.io Jul 7 09:26:38.316123 containerd[1593]: time="2025-07-07T09:26:38.316027419Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 09:26:38.383354 containerd[1593]: time="2025-07-07T09:26:38.383278810Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4859c35264268a46d897e55004ad897e7e7e70432bb31f209bb967d3f399ce37\" id:\"4859c35264268a46d897e55004ad897e7e7e70432bb31f209bb967d3f399ce37\" pid:3027 exit_status:137 exited_at:{seconds:1751880398 nanos:267050644}" Jul 7 09:26:38.390076 containerd[1593]: time="2025-07-07T09:26:38.388388072Z" level=info msg="received exit event sandbox_id:\"498b46b4c0f25719e17c8eddc8caa80d3e453c46ceb0ff5e983a33503d651593\" exit_status:137 exited_at:{seconds:1751880398 nanos:197831564}" Jul 7 09:26:38.389137 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-498b46b4c0f25719e17c8eddc8caa80d3e453c46ceb0ff5e983a33503d651593-shm.mount: Deactivated successfully. Jul 7 09:26:38.394005 containerd[1593]: time="2025-07-07T09:26:38.393243066Z" level=info msg="TearDown network for sandbox \"4859c35264268a46d897e55004ad897e7e7e70432bb31f209bb967d3f399ce37\" successfully" Jul 7 09:26:38.394005 containerd[1593]: time="2025-07-07T09:26:38.393289398Z" level=info msg="StopPodSandbox for \"4859c35264268a46d897e55004ad897e7e7e70432bb31f209bb967d3f399ce37\" returns successfully" Jul 7 09:26:38.395255 containerd[1593]: time="2025-07-07T09:26:38.394550831Z" level=info msg="received exit event sandbox_id:\"4859c35264268a46d897e55004ad897e7e7e70432bb31f209bb967d3f399ce37\" exit_status:137 exited_at:{seconds:1751880398 nanos:267050644}" Jul 7 09:26:38.397261 containerd[1593]: time="2025-07-07T09:26:38.397227107Z" level=info msg="TearDown network for sandbox \"498b46b4c0f25719e17c8eddc8caa80d3e453c46ceb0ff5e983a33503d651593\" successfully" Jul 7 09:26:38.397387 containerd[1593]: time="2025-07-07T09:26:38.397353663Z" level=info msg="StopPodSandbox for \"498b46b4c0f25719e17c8eddc8caa80d3e453c46ceb0ff5e983a33503d651593\" returns successfully" Jul 7 09:26:38.433390 kubelet[2879]: I0707 09:26:38.433079 2879 scope.go:117] "RemoveContainer" containerID="fa21b138973dcf7fce490d3037d17a93b8e729cf35422fa1592e5cc39c5167f9" Jul 7 09:26:38.437964 containerd[1593]: time="2025-07-07T09:26:38.437247828Z" level=info msg="RemoveContainer for \"fa21b138973dcf7fce490d3037d17a93b8e729cf35422fa1592e5cc39c5167f9\"" Jul 7 09:26:38.448379 containerd[1593]: time="2025-07-07T09:26:38.448323701Z" level=info msg="RemoveContainer for \"fa21b138973dcf7fce490d3037d17a93b8e729cf35422fa1592e5cc39c5167f9\" returns successfully" Jul 7 09:26:38.448905 kubelet[2879]: I0707 09:26:38.448863 2879 scope.go:117] "RemoveContainer" containerID="fa21b138973dcf7fce490d3037d17a93b8e729cf35422fa1592e5cc39c5167f9" Jul 7 09:26:38.449240 containerd[1593]: time="2025-07-07T09:26:38.449171100Z" level=error msg="ContainerStatus for \"fa21b138973dcf7fce490d3037d17a93b8e729cf35422fa1592e5cc39c5167f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fa21b138973dcf7fce490d3037d17a93b8e729cf35422fa1592e5cc39c5167f9\": not found" Jul 7 09:26:38.451294 
kubelet[2879]: E0707 09:26:38.451242 2879 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fa21b138973dcf7fce490d3037d17a93b8e729cf35422fa1592e5cc39c5167f9\": not found" containerID="fa21b138973dcf7fce490d3037d17a93b8e729cf35422fa1592e5cc39c5167f9" Jul 7 09:26:38.453595 kubelet[2879]: I0707 09:26:38.452953 2879 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fa21b138973dcf7fce490d3037d17a93b8e729cf35422fa1592e5cc39c5167f9"} err="failed to get container status \"fa21b138973dcf7fce490d3037d17a93b8e729cf35422fa1592e5cc39c5167f9\": rpc error: code = NotFound desc = an error occurred when try to find container \"fa21b138973dcf7fce490d3037d17a93b8e729cf35422fa1592e5cc39c5167f9\": not found" Jul 7 09:26:38.453595 kubelet[2879]: I0707 09:26:38.453592 2879 scope.go:117] "RemoveContainer" containerID="adab9f175d1144e19817bb2e65e9f5a241692b9e330749e6137ecbd25a824ec0" Jul 7 09:26:38.459124 containerd[1593]: time="2025-07-07T09:26:38.458927931Z" level=info msg="RemoveContainer for \"adab9f175d1144e19817bb2e65e9f5a241692b9e330749e6137ecbd25a824ec0\"" Jul 7 09:26:38.473437 containerd[1593]: time="2025-07-07T09:26:38.473360881Z" level=info msg="RemoveContainer for \"adab9f175d1144e19817bb2e65e9f5a241692b9e330749e6137ecbd25a824ec0\" returns successfully" Jul 7 09:26:38.474226 kubelet[2879]: I0707 09:26:38.474013 2879 scope.go:117] "RemoveContainer" containerID="bdd76bd407379764696e533aa5583b51e786e82ea515c9ccdee87a620d7c8383" Jul 7 09:26:38.476790 containerd[1593]: time="2025-07-07T09:26:38.476178413Z" level=info msg="RemoveContainer for \"bdd76bd407379764696e533aa5583b51e786e82ea515c9ccdee87a620d7c8383\"" Jul 7 09:26:38.480884 containerd[1593]: time="2025-07-07T09:26:38.480855762Z" level=info msg="RemoveContainer for \"bdd76bd407379764696e533aa5583b51e786e82ea515c9ccdee87a620d7c8383\" returns successfully" Jul 7 09:26:38.481395 kubelet[2879]: I0707 09:26:38.481360 2879 scope.go:117] "RemoveContainer" containerID="f35f0165655f4d80a769911b662923db7c83458ce0cf6cc52db8a4719878b6d0" Jul 7 09:26:38.484325 containerd[1593]: time="2025-07-07T09:26:38.484294071Z" level=info msg="RemoveContainer for \"f35f0165655f4d80a769911b662923db7c83458ce0cf6cc52db8a4719878b6d0\"" Jul 7 09:26:38.489068 containerd[1593]: time="2025-07-07T09:26:38.489027910Z" level=info msg="RemoveContainer for \"f35f0165655f4d80a769911b662923db7c83458ce0cf6cc52db8a4719878b6d0\" returns successfully" Jul 7 09:26:38.489292 kubelet[2879]: I0707 09:26:38.489243 2879 scope.go:117] "RemoveContainer" containerID="b514e85c3a34f17638da873f003eb24c365d99216febb80b425590c3bf07b372" Jul 7 09:26:38.493710 containerd[1593]: time="2025-07-07T09:26:38.493608115Z" level=info msg="RemoveContainer for \"b514e85c3a34f17638da873f003eb24c365d99216febb80b425590c3bf07b372\"" Jul 7 09:26:38.498293 containerd[1593]: time="2025-07-07T09:26:38.498258104Z" level=info msg="RemoveContainer for \"b514e85c3a34f17638da873f003eb24c365d99216febb80b425590c3bf07b372\" returns successfully" Jul 7 09:26:38.500417 kubelet[2879]: I0707 09:26:38.500323 2879 scope.go:117] "RemoveContainer" containerID="17416d3eef7a01236f35858e39d9c4c835d9de756458e69b3d39888ce610d2d4" Jul 7 09:26:38.502328 containerd[1593]: time="2025-07-07T09:26:38.502286475Z" level=info msg="RemoveContainer for \"17416d3eef7a01236f35858e39d9c4c835d9de756458e69b3d39888ce610d2d4\"" Jul 7 09:26:38.506315 containerd[1593]: time="2025-07-07T09:26:38.506285784Z" level=info 
msg="RemoveContainer for \"17416d3eef7a01236f35858e39d9c4c835d9de756458e69b3d39888ce610d2d4\" returns successfully" Jul 7 09:26:38.507129 containerd[1593]: time="2025-07-07T09:26:38.506973292Z" level=error msg="ContainerStatus for \"adab9f175d1144e19817bb2e65e9f5a241692b9e330749e6137ecbd25a824ec0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"adab9f175d1144e19817bb2e65e9f5a241692b9e330749e6137ecbd25a824ec0\": not found" Jul 7 09:26:38.508174 kubelet[2879]: I0707 09:26:38.506523 2879 scope.go:117] "RemoveContainer" containerID="adab9f175d1144e19817bb2e65e9f5a241692b9e330749e6137ecbd25a824ec0" Jul 7 09:26:38.508531 kubelet[2879]: E0707 09:26:38.508499 2879 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"adab9f175d1144e19817bb2e65e9f5a241692b9e330749e6137ecbd25a824ec0\": not found" containerID="adab9f175d1144e19817bb2e65e9f5a241692b9e330749e6137ecbd25a824ec0" Jul 7 09:26:38.508732 kubelet[2879]: I0707 09:26:38.508541 2879 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"adab9f175d1144e19817bb2e65e9f5a241692b9e330749e6137ecbd25a824ec0"} err="failed to get container status \"adab9f175d1144e19817bb2e65e9f5a241692b9e330749e6137ecbd25a824ec0\": rpc error: code = NotFound desc = an error occurred when try to find container \"adab9f175d1144e19817bb2e65e9f5a241692b9e330749e6137ecbd25a824ec0\": not found" Jul 7 09:26:38.508732 kubelet[2879]: I0707 09:26:38.508572 2879 scope.go:117] "RemoveContainer" containerID="bdd76bd407379764696e533aa5583b51e786e82ea515c9ccdee87a620d7c8383" Jul 7 09:26:38.509448 containerd[1593]: time="2025-07-07T09:26:38.509192337Z" level=error msg="ContainerStatus for \"bdd76bd407379764696e533aa5583b51e786e82ea515c9ccdee87a620d7c8383\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bdd76bd407379764696e533aa5583b51e786e82ea515c9ccdee87a620d7c8383\": not found" Jul 7 09:26:38.509527 kubelet[2879]: E0707 09:26:38.509494 2879 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bdd76bd407379764696e533aa5583b51e786e82ea515c9ccdee87a620d7c8383\": not found" containerID="bdd76bd407379764696e533aa5583b51e786e82ea515c9ccdee87a620d7c8383" Jul 7 09:26:38.509579 kubelet[2879]: I0707 09:26:38.509554 2879 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bdd76bd407379764696e533aa5583b51e786e82ea515c9ccdee87a620d7c8383"} err="failed to get container status \"bdd76bd407379764696e533aa5583b51e786e82ea515c9ccdee87a620d7c8383\": rpc error: code = NotFound desc = an error occurred when try to find container \"bdd76bd407379764696e533aa5583b51e786e82ea515c9ccdee87a620d7c8383\": not found" Jul 7 09:26:38.509635 kubelet[2879]: I0707 09:26:38.509578 2879 scope.go:117] "RemoveContainer" containerID="f35f0165655f4d80a769911b662923db7c83458ce0cf6cc52db8a4719878b6d0" Jul 7 09:26:38.510569 containerd[1593]: time="2025-07-07T09:26:38.510495439Z" level=error msg="ContainerStatus for \"f35f0165655f4d80a769911b662923db7c83458ce0cf6cc52db8a4719878b6d0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f35f0165655f4d80a769911b662923db7c83458ce0cf6cc52db8a4719878b6d0\": not found" Jul 7 09:26:38.511226 kubelet[2879]: E0707 09:26:38.511190 2879 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f35f0165655f4d80a769911b662923db7c83458ce0cf6cc52db8a4719878b6d0\": not found" containerID="f35f0165655f4d80a769911b662923db7c83458ce0cf6cc52db8a4719878b6d0" Jul 7 09:26:38.511298 kubelet[2879]: I0707 09:26:38.511231 2879 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f35f0165655f4d80a769911b662923db7c83458ce0cf6cc52db8a4719878b6d0"} err="failed to get container status \"f35f0165655f4d80a769911b662923db7c83458ce0cf6cc52db8a4719878b6d0\": rpc error: code = NotFound desc = an error occurred when try to find container \"f35f0165655f4d80a769911b662923db7c83458ce0cf6cc52db8a4719878b6d0\": not found" Jul 7 09:26:38.511298 kubelet[2879]: I0707 09:26:38.511253 2879 scope.go:117] "RemoveContainer" containerID="b514e85c3a34f17638da873f003eb24c365d99216febb80b425590c3bf07b372" Jul 7 09:26:38.511862 containerd[1593]: time="2025-07-07T09:26:38.511777822Z" level=error msg="ContainerStatus for \"b514e85c3a34f17638da873f003eb24c365d99216febb80b425590c3bf07b372\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b514e85c3a34f17638da873f003eb24c365d99216febb80b425590c3bf07b372\": not found" Jul 7 09:26:38.512497 kubelet[2879]: E0707 09:26:38.512032 2879 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b514e85c3a34f17638da873f003eb24c365d99216febb80b425590c3bf07b372\": not found" containerID="b514e85c3a34f17638da873f003eb24c365d99216febb80b425590c3bf07b372" Jul 7 09:26:38.512497 kubelet[2879]: I0707 09:26:38.512077 2879 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b514e85c3a34f17638da873f003eb24c365d99216febb80b425590c3bf07b372"} err="failed to get container status \"b514e85c3a34f17638da873f003eb24c365d99216febb80b425590c3bf07b372\": rpc error: code = NotFound desc = an error occurred when try to find container \"b514e85c3a34f17638da873f003eb24c365d99216febb80b425590c3bf07b372\": not found" Jul 7 09:26:38.512497 kubelet[2879]: I0707 09:26:38.512270 2879 scope.go:117] "RemoveContainer" containerID="17416d3eef7a01236f35858e39d9c4c835d9de756458e69b3d39888ce610d2d4" Jul 7 09:26:38.512873 containerd[1593]: time="2025-07-07T09:26:38.512822842Z" level=error msg="ContainerStatus for \"17416d3eef7a01236f35858e39d9c4c835d9de756458e69b3d39888ce610d2d4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"17416d3eef7a01236f35858e39d9c4c835d9de756458e69b3d39888ce610d2d4\": not found" Jul 7 09:26:38.513035 kubelet[2879]: E0707 09:26:38.512988 2879 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"17416d3eef7a01236f35858e39d9c4c835d9de756458e69b3d39888ce610d2d4\": not found" containerID="17416d3eef7a01236f35858e39d9c4c835d9de756458e69b3d39888ce610d2d4" Jul 7 09:26:38.513035 kubelet[2879]: I0707 09:26:38.513021 2879 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"17416d3eef7a01236f35858e39d9c4c835d9de756458e69b3d39888ce610d2d4"} err="failed to get container status \"17416d3eef7a01236f35858e39d9c4c835d9de756458e69b3d39888ce610d2d4\": rpc error: code = NotFound desc = an error occurred when try to find container \"17416d3eef7a01236f35858e39d9c4c835d9de756458e69b3d39888ce610d2d4\": not found" 
Jul 7 09:26:38.549792 kubelet[2879]: I0707 09:26:38.549691 2879 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-xtables-lock\") pod \"90fea754-6b34-4c24-aafd-77026c66f4fe\" (UID: \"90fea754-6b34-4c24-aafd-77026c66f4fe\") " Jul 7 09:26:38.549792 kubelet[2879]: I0707 09:26:38.549760 2879 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/90fea754-6b34-4c24-aafd-77026c66f4fe-hubble-tls\") pod \"90fea754-6b34-4c24-aafd-77026c66f4fe\" (UID: \"90fea754-6b34-4c24-aafd-77026c66f4fe\") " Jul 7 09:26:38.550090 kubelet[2879]: I0707 09:26:38.549827 2879 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-host-proc-sys-kernel\") pod \"90fea754-6b34-4c24-aafd-77026c66f4fe\" (UID: \"90fea754-6b34-4c24-aafd-77026c66f4fe\") " Jul 7 09:26:38.550090 kubelet[2879]: I0707 09:26:38.549884 2879 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-bpf-maps\") pod \"90fea754-6b34-4c24-aafd-77026c66f4fe\" (UID: \"90fea754-6b34-4c24-aafd-77026c66f4fe\") " Jul 7 09:26:38.550090 kubelet[2879]: I0707 09:26:38.549910 2879 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-lib-modules\") pod \"90fea754-6b34-4c24-aafd-77026c66f4fe\" (UID: \"90fea754-6b34-4c24-aafd-77026c66f4fe\") " Jul 7 09:26:38.550090 kubelet[2879]: I0707 09:26:38.549945 2879 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-etc-cni-netd\") pod \"90fea754-6b34-4c24-aafd-77026c66f4fe\" (UID: \"90fea754-6b34-4c24-aafd-77026c66f4fe\") " Jul 7 09:26:38.550090 kubelet[2879]: I0707 09:26:38.549975 2879 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7fca043e-2b76-4ab2-9847-908371bef67c-cilium-config-path\") pod \"7fca043e-2b76-4ab2-9847-908371bef67c\" (UID: \"7fca043e-2b76-4ab2-9847-908371bef67c\") " Jul 7 09:26:38.550090 kubelet[2879]: I0707 09:26:38.549998 2879 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-host-proc-sys-net\") pod \"90fea754-6b34-4c24-aafd-77026c66f4fe\" (UID: \"90fea754-6b34-4c24-aafd-77026c66f4fe\") " Jul 7 09:26:38.550398 kubelet[2879]: I0707 09:26:38.550030 2879 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-cni-path\") pod \"90fea754-6b34-4c24-aafd-77026c66f4fe\" (UID: \"90fea754-6b34-4c24-aafd-77026c66f4fe\") " Jul 7 09:26:38.550398 kubelet[2879]: I0707 09:26:38.550058 2879 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90fea754-6b34-4c24-aafd-77026c66f4fe-cilium-config-path\") pod \"90fea754-6b34-4c24-aafd-77026c66f4fe\" (UID: \"90fea754-6b34-4c24-aafd-77026c66f4fe\") " Jul 7 09:26:38.550398 kubelet[2879]: I0707 09:26:38.550157 2879 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-cilium-run\") pod \"90fea754-6b34-4c24-aafd-77026c66f4fe\" (UID: \"90fea754-6b34-4c24-aafd-77026c66f4fe\") " Jul 7 09:26:38.550398 kubelet[2879]: I0707 09:26:38.550187 2879 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-hostproc\") pod \"90fea754-6b34-4c24-aafd-77026c66f4fe\" (UID: \"90fea754-6b34-4c24-aafd-77026c66f4fe\") " Jul 7 09:26:38.550398 kubelet[2879]: I0707 09:26:38.550211 2879 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-cilium-cgroup\") pod \"90fea754-6b34-4c24-aafd-77026c66f4fe\" (UID: \"90fea754-6b34-4c24-aafd-77026c66f4fe\") " Jul 7 09:26:38.550398 kubelet[2879]: I0707 09:26:38.550241 2879 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrbk9\" (UniqueName: \"kubernetes.io/projected/90fea754-6b34-4c24-aafd-77026c66f4fe-kube-api-access-wrbk9\") pod \"90fea754-6b34-4c24-aafd-77026c66f4fe\" (UID: \"90fea754-6b34-4c24-aafd-77026c66f4fe\") " Jul 7 09:26:38.550643 kubelet[2879]: I0707 09:26:38.550270 2879 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxkcp\" (UniqueName: \"kubernetes.io/projected/7fca043e-2b76-4ab2-9847-908371bef67c-kube-api-access-pxkcp\") pod \"7fca043e-2b76-4ab2-9847-908371bef67c\" (UID: \"7fca043e-2b76-4ab2-9847-908371bef67c\") " Jul 7 09:26:38.550643 kubelet[2879]: I0707 09:26:38.550298 2879 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/90fea754-6b34-4c24-aafd-77026c66f4fe-clustermesh-secrets\") pod \"90fea754-6b34-4c24-aafd-77026c66f4fe\" (UID: \"90fea754-6b34-4c24-aafd-77026c66f4fe\") " Jul 7 09:26:38.551379 kubelet[2879]: I0707 09:26:38.550800 2879 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "90fea754-6b34-4c24-aafd-77026c66f4fe" (UID: "90fea754-6b34-4c24-aafd-77026c66f4fe"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 09:26:38.551379 kubelet[2879]: I0707 09:26:38.550914 2879 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "90fea754-6b34-4c24-aafd-77026c66f4fe" (UID: "90fea754-6b34-4c24-aafd-77026c66f4fe"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 09:26:38.552710 kubelet[2879]: I0707 09:26:38.552663 2879 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-cni-path" (OuterVolumeSpecName: "cni-path") pod "90fea754-6b34-4c24-aafd-77026c66f4fe" (UID: "90fea754-6b34-4c24-aafd-77026c66f4fe"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 09:26:38.553805 kubelet[2879]: I0707 09:26:38.553756 2879 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "90fea754-6b34-4c24-aafd-77026c66f4fe" (UID: "90fea754-6b34-4c24-aafd-77026c66f4fe"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 09:26:38.553879 kubelet[2879]: I0707 09:26:38.553810 2879 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-hostproc" (OuterVolumeSpecName: "hostproc") pod "90fea754-6b34-4c24-aafd-77026c66f4fe" (UID: "90fea754-6b34-4c24-aafd-77026c66f4fe"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 09:26:38.553879 kubelet[2879]: I0707 09:26:38.553841 2879 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "90fea754-6b34-4c24-aafd-77026c66f4fe" (UID: "90fea754-6b34-4c24-aafd-77026c66f4fe"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 09:26:38.554180 kubelet[2879]: I0707 09:26:38.554132 2879 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "90fea754-6b34-4c24-aafd-77026c66f4fe" (UID: "90fea754-6b34-4c24-aafd-77026c66f4fe"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 09:26:38.554320 kubelet[2879]: I0707 09:26:38.554296 2879 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "90fea754-6b34-4c24-aafd-77026c66f4fe" (UID: "90fea754-6b34-4c24-aafd-77026c66f4fe"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 09:26:38.554486 kubelet[2879]: I0707 09:26:38.554437 2879 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "90fea754-6b34-4c24-aafd-77026c66f4fe" (UID: "90fea754-6b34-4c24-aafd-77026c66f4fe"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 09:26:38.555358 kubelet[2879]: I0707 09:26:38.554769 2879 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "90fea754-6b34-4c24-aafd-77026c66f4fe" (UID: "90fea754-6b34-4c24-aafd-77026c66f4fe"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 09:26:38.564524 kubelet[2879]: I0707 09:26:38.564343 2879 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90fea754-6b34-4c24-aafd-77026c66f4fe-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "90fea754-6b34-4c24-aafd-77026c66f4fe" (UID: "90fea754-6b34-4c24-aafd-77026c66f4fe"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 09:26:38.565131 kubelet[2879]: I0707 09:26:38.564955 2879 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90fea754-6b34-4c24-aafd-77026c66f4fe-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "90fea754-6b34-4c24-aafd-77026c66f4fe" (UID: "90fea754-6b34-4c24-aafd-77026c66f4fe"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 7 09:26:38.567010 kubelet[2879]: I0707 09:26:38.566967 2879 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90fea754-6b34-4c24-aafd-77026c66f4fe-kube-api-access-wrbk9" (OuterVolumeSpecName: "kube-api-access-wrbk9") pod "90fea754-6b34-4c24-aafd-77026c66f4fe" (UID: "90fea754-6b34-4c24-aafd-77026c66f4fe"). InnerVolumeSpecName "kube-api-access-wrbk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 09:26:38.568013 kubelet[2879]: I0707 09:26:38.567967 2879 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90fea754-6b34-4c24-aafd-77026c66f4fe-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "90fea754-6b34-4c24-aafd-77026c66f4fe" (UID: "90fea754-6b34-4c24-aafd-77026c66f4fe"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 7 09:26:38.568137 kubelet[2879]: I0707 09:26:38.568051 2879 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fca043e-2b76-4ab2-9847-908371bef67c-kube-api-access-pxkcp" (OuterVolumeSpecName: "kube-api-access-pxkcp") pod "7fca043e-2b76-4ab2-9847-908371bef67c" (UID: "7fca043e-2b76-4ab2-9847-908371bef67c"). InnerVolumeSpecName "kube-api-access-pxkcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 09:26:38.570321 kubelet[2879]: I0707 09:26:38.570292 2879 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fca043e-2b76-4ab2-9847-908371bef67c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7fca043e-2b76-4ab2-9847-908371bef67c" (UID: "7fca043e-2b76-4ab2-9847-908371bef67c"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 7 09:26:38.651639 kubelet[2879]: I0707 09:26:38.651582 2879 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-xtables-lock\") on node \"srv-et027.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:26:38.651639 kubelet[2879]: I0707 09:26:38.651638 2879 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/90fea754-6b34-4c24-aafd-77026c66f4fe-hubble-tls\") on node \"srv-et027.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:26:38.651639 kubelet[2879]: I0707 09:26:38.651655 2879 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-etc-cni-netd\") on node \"srv-et027.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:26:38.651639 kubelet[2879]: I0707 09:26:38.651670 2879 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-host-proc-sys-kernel\") on node \"srv-et027.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:26:38.652008 kubelet[2879]: I0707 09:26:38.651691 2879 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-bpf-maps\") on node \"srv-et027.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:26:38.652008 kubelet[2879]: I0707 09:26:38.651713 2879 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-lib-modules\") on node \"srv-et027.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:26:38.652008 kubelet[2879]: I0707 09:26:38.651735 2879 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7fca043e-2b76-4ab2-9847-908371bef67c-cilium-config-path\") on node \"srv-et027.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:26:38.652008 kubelet[2879]: I0707 09:26:38.651749 2879 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-host-proc-sys-net\") on node \"srv-et027.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:26:38.652008 kubelet[2879]: I0707 09:26:38.651764 2879 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-cilium-run\") on node \"srv-et027.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:26:38.652008 kubelet[2879]: I0707 09:26:38.651778 2879 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-hostproc\") on node \"srv-et027.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:26:38.652008 kubelet[2879]: I0707 09:26:38.651790 2879 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-cni-path\") on node \"srv-et027.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:26:38.652008 kubelet[2879]: I0707 09:26:38.651804 2879 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/90fea754-6b34-4c24-aafd-77026c66f4fe-cilium-config-path\") on node \"srv-et027.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:26:38.652400 kubelet[2879]: I0707 09:26:38.651880 2879 
reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/90fea754-6b34-4c24-aafd-77026c66f4fe-cilium-cgroup\") on node \"srv-et027.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:26:38.652400 kubelet[2879]: I0707 09:26:38.651895 2879 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/90fea754-6b34-4c24-aafd-77026c66f4fe-clustermesh-secrets\") on node \"srv-et027.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:26:38.652400 kubelet[2879]: I0707 09:26:38.651915 2879 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrbk9\" (UniqueName: \"kubernetes.io/projected/90fea754-6b34-4c24-aafd-77026c66f4fe-kube-api-access-wrbk9\") on node \"srv-et027.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:26:38.652400 kubelet[2879]: I0707 09:26:38.651941 2879 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxkcp\" (UniqueName: \"kubernetes.io/projected/7fca043e-2b76-4ab2-9847-908371bef67c-kube-api-access-pxkcp\") on node \"srv-et027.gb1.brightbox.com\" DevicePath \"\"" Jul 7 09:26:38.741143 systemd[1]: Removed slice kubepods-besteffort-pod7fca043e_2b76_4ab2_9847_908371bef67c.slice - libcontainer container kubepods-besteffort-pod7fca043e_2b76_4ab2_9847_908371bef67c.slice. Jul 7 09:26:38.755239 systemd[1]: Removed slice kubepods-burstable-pod90fea754_6b34_4c24_aafd_77026c66f4fe.slice - libcontainer container kubepods-burstable-pod90fea754_6b34_4c24_aafd_77026c66f4fe.slice. Jul 7 09:26:38.756020 systemd[1]: kubepods-burstable-pod90fea754_6b34_4c24_aafd_77026c66f4fe.slice: Consumed 10.236s CPU time, 221.3M memory peak, 100.7M read from disk, 16.6M written to disk. Jul 7 09:26:39.134496 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4859c35264268a46d897e55004ad897e7e7e70432bb31f209bb967d3f399ce37-shm.mount: Deactivated successfully. Jul 7 09:26:39.134652 systemd[1]: var-lib-kubelet-pods-90fea754\x2d6b34\x2d4c24\x2daafd\x2d77026c66f4fe-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 7 09:26:39.134760 systemd[1]: var-lib-kubelet-pods-7fca043e\x2d2b76\x2d4ab2\x2d9847\x2d908371bef67c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpxkcp.mount: Deactivated successfully. Jul 7 09:26:39.134898 systemd[1]: var-lib-kubelet-pods-90fea754\x2d6b34\x2d4c24\x2daafd\x2d77026c66f4fe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwrbk9.mount: Deactivated successfully. Jul 7 09:26:39.135024 systemd[1]: var-lib-kubelet-pods-90fea754\x2d6b34\x2d4c24\x2daafd\x2d77026c66f4fe-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 7 09:26:39.828814 kubelet[2879]: I0707 09:26:39.828693 2879 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fca043e-2b76-4ab2-9847-908371bef67c" path="/var/lib/kubelet/pods/7fca043e-2b76-4ab2-9847-908371bef67c/volumes" Jul 7 09:26:39.829745 kubelet[2879]: I0707 09:26:39.829706 2879 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90fea754-6b34-4c24-aafd-77026c66f4fe" path="/var/lib/kubelet/pods/90fea754-6b34-4c24-aafd-77026c66f4fe/volumes" Jul 7 09:26:40.056293 sshd[4427]: Connection closed by 139.178.89.65 port 53982 Jul 7 09:26:40.057410 sshd-session[4425]: pam_unix(sshd:session): session closed for user core Jul 7 09:26:40.063568 systemd[1]: sshd@24-10.243.72.42:22-139.178.89.65:53982.service: Deactivated successfully. Jul 7 09:26:40.067443 systemd[1]: session-27.scope: Deactivated successfully. 
Jul 7 09:26:40.067879 systemd[1]: session-27.scope: Consumed 1.041s CPU time, 27.2M memory peak. Jul 7 09:26:40.069309 systemd-logind[1566]: Session 27 logged out. Waiting for processes to exit. Jul 7 09:26:40.071874 systemd-logind[1566]: Removed session 27. Jul 7 09:26:40.218327 systemd[1]: Started sshd@25-10.243.72.42:22-139.178.89.65:43744.service - OpenSSH per-connection server daemon (139.178.89.65:43744). Jul 7 09:26:41.125822 sshd[4579]: Accepted publickey for core from 139.178.89.65 port 43744 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:26:41.127875 sshd-session[4579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:26:41.138819 systemd-logind[1566]: New session 28 of user core. Jul 7 09:26:41.147390 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 7 09:26:42.644861 kubelet[2879]: E0707 09:26:42.644681 2879 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="90fea754-6b34-4c24-aafd-77026c66f4fe" containerName="clean-cilium-state" Jul 7 09:26:42.644861 kubelet[2879]: E0707 09:26:42.644782 2879 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="90fea754-6b34-4c24-aafd-77026c66f4fe" containerName="cilium-agent" Jul 7 09:26:42.645949 kubelet[2879]: E0707 09:26:42.645115 2879 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="90fea754-6b34-4c24-aafd-77026c66f4fe" containerName="apply-sysctl-overwrites" Jul 7 09:26:42.645949 kubelet[2879]: E0707 09:26:42.645132 2879 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7fca043e-2b76-4ab2-9847-908371bef67c" containerName="cilium-operator" Jul 7 09:26:42.648079 kubelet[2879]: E0707 09:26:42.645146 2879 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="90fea754-6b34-4c24-aafd-77026c66f4fe" containerName="mount-cgroup" Jul 7 09:26:42.648079 kubelet[2879]: E0707 09:26:42.646718 2879 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="90fea754-6b34-4c24-aafd-77026c66f4fe" containerName="mount-bpf-fs" Jul 7 09:26:42.648079 kubelet[2879]: I0707 09:26:42.646830 2879 memory_manager.go:354] "RemoveStaleState removing state" podUID="90fea754-6b34-4c24-aafd-77026c66f4fe" containerName="cilium-agent" Jul 7 09:26:42.648079 kubelet[2879]: I0707 09:26:42.646966 2879 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fca043e-2b76-4ab2-9847-908371bef67c" containerName="cilium-operator" Jul 7 09:26:42.667380 systemd[1]: Created slice kubepods-burstable-podffce540c_7cce_4691_acd5_c0729a773352.slice - libcontainer container kubepods-burstable-podffce540c_7cce_4691_acd5_c0729a773352.slice. 
Jul 7 09:26:42.780754 kubelet[2879]: I0707 09:26:42.780674 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ffce540c-7cce-4691-acd5-c0729a773352-host-proc-sys-net\") pod \"cilium-gqbbq\" (UID: \"ffce540c-7cce-4691-acd5-c0729a773352\") " pod="kube-system/cilium-gqbbq" Jul 7 09:26:42.780754 kubelet[2879]: I0707 09:26:42.780777 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ffce540c-7cce-4691-acd5-c0729a773352-cilium-run\") pod \"cilium-gqbbq\" (UID: \"ffce540c-7cce-4691-acd5-c0729a773352\") " pod="kube-system/cilium-gqbbq" Jul 7 09:26:42.781357 kubelet[2879]: I0707 09:26:42.780861 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ffce540c-7cce-4691-acd5-c0729a773352-host-proc-sys-kernel\") pod \"cilium-gqbbq\" (UID: \"ffce540c-7cce-4691-acd5-c0729a773352\") " pod="kube-system/cilium-gqbbq" Jul 7 09:26:42.781357 kubelet[2879]: I0707 09:26:42.780893 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ffce540c-7cce-4691-acd5-c0729a773352-hubble-tls\") pod \"cilium-gqbbq\" (UID: \"ffce540c-7cce-4691-acd5-c0729a773352\") " pod="kube-system/cilium-gqbbq" Jul 7 09:26:42.781357 kubelet[2879]: I0707 09:26:42.780922 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ffce540c-7cce-4691-acd5-c0729a773352-bpf-maps\") pod \"cilium-gqbbq\" (UID: \"ffce540c-7cce-4691-acd5-c0729a773352\") " pod="kube-system/cilium-gqbbq" Jul 7 09:26:42.781357 kubelet[2879]: I0707 09:26:42.780958 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ffce540c-7cce-4691-acd5-c0729a773352-hostproc\") pod \"cilium-gqbbq\" (UID: \"ffce540c-7cce-4691-acd5-c0729a773352\") " pod="kube-system/cilium-gqbbq" Jul 7 09:26:42.781357 kubelet[2879]: I0707 09:26:42.781025 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ffce540c-7cce-4691-acd5-c0729a773352-cilium-cgroup\") pod \"cilium-gqbbq\" (UID: \"ffce540c-7cce-4691-acd5-c0729a773352\") " pod="kube-system/cilium-gqbbq" Jul 7 09:26:42.781357 kubelet[2879]: I0707 09:26:42.781067 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ffce540c-7cce-4691-acd5-c0729a773352-cni-path\") pod \"cilium-gqbbq\" (UID: \"ffce540c-7cce-4691-acd5-c0729a773352\") " pod="kube-system/cilium-gqbbq" Jul 7 09:26:42.781786 kubelet[2879]: I0707 09:26:42.781100 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ffce540c-7cce-4691-acd5-c0729a773352-etc-cni-netd\") pod \"cilium-gqbbq\" (UID: \"ffce540c-7cce-4691-acd5-c0729a773352\") " pod="kube-system/cilium-gqbbq" Jul 7 09:26:42.781786 kubelet[2879]: I0707 09:26:42.781184 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/ffce540c-7cce-4691-acd5-c0729a773352-xtables-lock\") pod \"cilium-gqbbq\" (UID: \"ffce540c-7cce-4691-acd5-c0729a773352\") " pod="kube-system/cilium-gqbbq" Jul 7 09:26:42.781786 kubelet[2879]: I0707 09:26:42.781225 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ffce540c-7cce-4691-acd5-c0729a773352-clustermesh-secrets\") pod \"cilium-gqbbq\" (UID: \"ffce540c-7cce-4691-acd5-c0729a773352\") " pod="kube-system/cilium-gqbbq" Jul 7 09:26:42.781786 kubelet[2879]: I0707 09:26:42.781252 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ffce540c-7cce-4691-acd5-c0729a773352-cilium-config-path\") pod \"cilium-gqbbq\" (UID: \"ffce540c-7cce-4691-acd5-c0729a773352\") " pod="kube-system/cilium-gqbbq" Jul 7 09:26:42.781786 kubelet[2879]: I0707 09:26:42.781277 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb4jr\" (UniqueName: \"kubernetes.io/projected/ffce540c-7cce-4691-acd5-c0729a773352-kube-api-access-zb4jr\") pod \"cilium-gqbbq\" (UID: \"ffce540c-7cce-4691-acd5-c0729a773352\") " pod="kube-system/cilium-gqbbq" Jul 7 09:26:42.781786 kubelet[2879]: I0707 09:26:42.781336 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffce540c-7cce-4691-acd5-c0729a773352-lib-modules\") pod \"cilium-gqbbq\" (UID: \"ffce540c-7cce-4691-acd5-c0729a773352\") " pod="kube-system/cilium-gqbbq" Jul 7 09:26:42.782022 kubelet[2879]: I0707 09:26:42.781385 2879 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ffce540c-7cce-4691-acd5-c0729a773352-cilium-ipsec-secrets\") pod \"cilium-gqbbq\" (UID: \"ffce540c-7cce-4691-acd5-c0729a773352\") " pod="kube-system/cilium-gqbbq" Jul 7 09:26:42.805433 sshd[4581]: Connection closed by 139.178.89.65 port 43744 Jul 7 09:26:42.806530 sshd-session[4579]: pam_unix(sshd:session): session closed for user core Jul 7 09:26:42.812569 systemd-logind[1566]: Session 28 logged out. Waiting for processes to exit. Jul 7 09:26:42.813466 systemd[1]: sshd@25-10.243.72.42:22-139.178.89.65:43744.service: Deactivated successfully. Jul 7 09:26:42.817276 systemd[1]: session-28.scope: Deactivated successfully. Jul 7 09:26:42.820533 systemd-logind[1566]: Removed session 28. Jul 7 09:26:42.963208 systemd[1]: Started sshd@26-10.243.72.42:22-139.178.89.65:43754.service - OpenSSH per-connection server daemon (139.178.89.65:43754). 
Jul 7 09:26:42.987930 containerd[1593]: time="2025-07-07T09:26:42.987838043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gqbbq,Uid:ffce540c-7cce-4691-acd5-c0729a773352,Namespace:kube-system,Attempt:0,}" Jul 7 09:26:43.013904 containerd[1593]: time="2025-07-07T09:26:43.013804352Z" level=info msg="connecting to shim 93aed954f6720bb80016f26716dda103c2a802e706aad4d25c1b37e1c7cb3665" address="unix:///run/containerd/s/e832e0d2d9d119d28787703ca9126a1fb114397c6b703c25768120f9d75ceddf" namespace=k8s.io protocol=ttrpc version=3 Jul 7 09:26:43.053127 kubelet[2879]: E0707 09:26:43.053024 2879 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 7 09:26:43.067323 systemd[1]: Started cri-containerd-93aed954f6720bb80016f26716dda103c2a802e706aad4d25c1b37e1c7cb3665.scope - libcontainer container 93aed954f6720bb80016f26716dda103c2a802e706aad4d25c1b37e1c7cb3665. Jul 7 09:26:43.115068 containerd[1593]: time="2025-07-07T09:26:43.114999698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gqbbq,Uid:ffce540c-7cce-4691-acd5-c0729a773352,Namespace:kube-system,Attempt:0,} returns sandbox id \"93aed954f6720bb80016f26716dda103c2a802e706aad4d25c1b37e1c7cb3665\"" Jul 7 09:26:43.121166 containerd[1593]: time="2025-07-07T09:26:43.121078225Z" level=info msg="CreateContainer within sandbox \"93aed954f6720bb80016f26716dda103c2a802e706aad4d25c1b37e1c7cb3665\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 7 09:26:43.133739 containerd[1593]: time="2025-07-07T09:26:43.133663017Z" level=info msg="Container d469a7e594c4a98d54c052d60ba5c18597814ab221cdb67d3ea4e70cb85321a9: CDI devices from CRI Config.CDIDevices: []" Jul 7 09:26:43.143295 containerd[1593]: time="2025-07-07T09:26:43.142936424Z" level=info msg="CreateContainer within sandbox \"93aed954f6720bb80016f26716dda103c2a802e706aad4d25c1b37e1c7cb3665\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d469a7e594c4a98d54c052d60ba5c18597814ab221cdb67d3ea4e70cb85321a9\"" Jul 7 09:26:43.145264 containerd[1593]: time="2025-07-07T09:26:43.145231627Z" level=info msg="StartContainer for \"d469a7e594c4a98d54c052d60ba5c18597814ab221cdb67d3ea4e70cb85321a9\"" Jul 7 09:26:43.146627 containerd[1593]: time="2025-07-07T09:26:43.146593994Z" level=info msg="connecting to shim d469a7e594c4a98d54c052d60ba5c18597814ab221cdb67d3ea4e70cb85321a9" address="unix:///run/containerd/s/e832e0d2d9d119d28787703ca9126a1fb114397c6b703c25768120f9d75ceddf" protocol=ttrpc version=3 Jul 7 09:26:43.184901 systemd[1]: Started cri-containerd-d469a7e594c4a98d54c052d60ba5c18597814ab221cdb67d3ea4e70cb85321a9.scope - libcontainer container d469a7e594c4a98d54c052d60ba5c18597814ab221cdb67d3ea4e70cb85321a9. Jul 7 09:26:43.243769 containerd[1593]: time="2025-07-07T09:26:43.243071508Z" level=info msg="StartContainer for \"d469a7e594c4a98d54c052d60ba5c18597814ab221cdb67d3ea4e70cb85321a9\" returns successfully" Jul 7 09:26:43.261274 systemd[1]: cri-containerd-d469a7e594c4a98d54c052d60ba5c18597814ab221cdb67d3ea4e70cb85321a9.scope: Deactivated successfully. Jul 7 09:26:43.262370 systemd[1]: cri-containerd-d469a7e594c4a98d54c052d60ba5c18597814ab221cdb67d3ea4e70cb85321a9.scope: Consumed 35ms CPU time, 9.7M memory peak, 3.3M read from disk. 
Jul 7 09:26:43.263620 containerd[1593]: time="2025-07-07T09:26:43.263495123Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d469a7e594c4a98d54c052d60ba5c18597814ab221cdb67d3ea4e70cb85321a9\" id:\"d469a7e594c4a98d54c052d60ba5c18597814ab221cdb67d3ea4e70cb85321a9\" pid:4658 exited_at:{seconds:1751880403 nanos:262257056}" Jul 7 09:26:43.264862 containerd[1593]: time="2025-07-07T09:26:43.264728884Z" level=info msg="received exit event container_id:\"d469a7e594c4a98d54c052d60ba5c18597814ab221cdb67d3ea4e70cb85321a9\" id:\"d469a7e594c4a98d54c052d60ba5c18597814ab221cdb67d3ea4e70cb85321a9\" pid:4658 exited_at:{seconds:1751880403 nanos:262257056}" Jul 7 09:26:43.477404 containerd[1593]: time="2025-07-07T09:26:43.475763610Z" level=info msg="CreateContainer within sandbox \"93aed954f6720bb80016f26716dda103c2a802e706aad4d25c1b37e1c7cb3665\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 7 09:26:43.486625 containerd[1593]: time="2025-07-07T09:26:43.486562912Z" level=info msg="Container 6211dbd33c76341d6471799d935b0e752fe5849ac254197023649c720c423a3b: CDI devices from CRI Config.CDIDevices: []" Jul 7 09:26:43.494642 containerd[1593]: time="2025-07-07T09:26:43.494313326Z" level=info msg="CreateContainer within sandbox \"93aed954f6720bb80016f26716dda103c2a802e706aad4d25c1b37e1c7cb3665\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6211dbd33c76341d6471799d935b0e752fe5849ac254197023649c720c423a3b\"" Jul 7 09:26:43.501631 containerd[1593]: time="2025-07-07T09:26:43.497250079Z" level=info msg="StartContainer for \"6211dbd33c76341d6471799d935b0e752fe5849ac254197023649c720c423a3b\"" Jul 7 09:26:43.501631 containerd[1593]: time="2025-07-07T09:26:43.499341258Z" level=info msg="connecting to shim 6211dbd33c76341d6471799d935b0e752fe5849ac254197023649c720c423a3b" address="unix:///run/containerd/s/e832e0d2d9d119d28787703ca9126a1fb114397c6b703c25768120f9d75ceddf" protocol=ttrpc version=3 Jul 7 09:26:43.539539 systemd[1]: Started cri-containerd-6211dbd33c76341d6471799d935b0e752fe5849ac254197023649c720c423a3b.scope - libcontainer container 6211dbd33c76341d6471799d935b0e752fe5849ac254197023649c720c423a3b. Jul 7 09:26:43.592403 containerd[1593]: time="2025-07-07T09:26:43.592255995Z" level=info msg="StartContainer for \"6211dbd33c76341d6471799d935b0e752fe5849ac254197023649c720c423a3b\" returns successfully" Jul 7 09:26:43.617394 systemd[1]: cri-containerd-6211dbd33c76341d6471799d935b0e752fe5849ac254197023649c720c423a3b.scope: Deactivated successfully. Jul 7 09:26:43.617826 systemd[1]: cri-containerd-6211dbd33c76341d6471799d935b0e752fe5849ac254197023649c720c423a3b.scope: Consumed 30ms CPU time, 7.5M memory peak, 2.2M read from disk. 
Jul 7 09:26:43.620331 containerd[1593]: time="2025-07-07T09:26:43.620272971Z" level=info msg="received exit event container_id:\"6211dbd33c76341d6471799d935b0e752fe5849ac254197023649c720c423a3b\" id:\"6211dbd33c76341d6471799d935b0e752fe5849ac254197023649c720c423a3b\" pid:4703 exited_at:{seconds:1751880403 nanos:619862137}" Jul 7 09:26:43.621253 containerd[1593]: time="2025-07-07T09:26:43.620888735Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6211dbd33c76341d6471799d935b0e752fe5849ac254197023649c720c423a3b\" id:\"6211dbd33c76341d6471799d935b0e752fe5849ac254197023649c720c423a3b\" pid:4703 exited_at:{seconds:1751880403 nanos:619862137}" Jul 7 09:26:43.886631 sshd[4595]: Accepted publickey for core from 139.178.89.65 port 43754 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:26:43.887976 sshd-session[4595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:26:43.894511 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2550519948.mount: Deactivated successfully. Jul 7 09:26:43.898322 systemd-logind[1566]: New session 29 of user core. Jul 7 09:26:43.904342 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 7 09:26:44.483536 containerd[1593]: time="2025-07-07T09:26:44.483477245Z" level=info msg="CreateContainer within sandbox \"93aed954f6720bb80016f26716dda103c2a802e706aad4d25c1b37e1c7cb3665\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 7 09:26:44.503452 containerd[1593]: time="2025-07-07T09:26:44.500536611Z" level=info msg="Container ad336af986121575c19bfda70fb1ab0ea9327dc307f7c0af9614d1caf3faf119: CDI devices from CRI Config.CDIDevices: []" Jul 7 09:26:44.505159 sshd[4734]: Connection closed by 139.178.89.65 port 43754 Jul 7 09:26:44.510539 sshd-session[4595]: pam_unix(sshd:session): session closed for user core Jul 7 09:26:44.517260 containerd[1593]: time="2025-07-07T09:26:44.517212321Z" level=info msg="CreateContainer within sandbox \"93aed954f6720bb80016f26716dda103c2a802e706aad4d25c1b37e1c7cb3665\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ad336af986121575c19bfda70fb1ab0ea9327dc307f7c0af9614d1caf3faf119\"" Jul 7 09:26:44.519773 systemd[1]: sshd@26-10.243.72.42:22-139.178.89.65:43754.service: Deactivated successfully. Jul 7 09:26:44.520636 systemd-logind[1566]: Session 29 logged out. Waiting for processes to exit. Jul 7 09:26:44.522523 containerd[1593]: time="2025-07-07T09:26:44.521273807Z" level=info msg="StartContainer for \"ad336af986121575c19bfda70fb1ab0ea9327dc307f7c0af9614d1caf3faf119\"" Jul 7 09:26:44.527805 systemd[1]: session-29.scope: Deactivated successfully. Jul 7 09:26:44.533642 systemd-logind[1566]: Removed session 29. Jul 7 09:26:44.536284 containerd[1593]: time="2025-07-07T09:26:44.536216010Z" level=info msg="connecting to shim ad336af986121575c19bfda70fb1ab0ea9327dc307f7c0af9614d1caf3faf119" address="unix:///run/containerd/s/e832e0d2d9d119d28787703ca9126a1fb114397c6b703c25768120f9d75ceddf" protocol=ttrpc version=3 Jul 7 09:26:44.570465 systemd[1]: Started cri-containerd-ad336af986121575c19bfda70fb1ab0ea9327dc307f7c0af9614d1caf3faf119.scope - libcontainer container ad336af986121575c19bfda70fb1ab0ea9327dc307f7c0af9614d1caf3faf119. 
Jul 7 09:26:44.638905 containerd[1593]: time="2025-07-07T09:26:44.638815933Z" level=info msg="StartContainer for \"ad336af986121575c19bfda70fb1ab0ea9327dc307f7c0af9614d1caf3faf119\" returns successfully" Jul 7 09:26:44.649744 containerd[1593]: time="2025-07-07T09:26:44.649662999Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ad336af986121575c19bfda70fb1ab0ea9327dc307f7c0af9614d1caf3faf119\" id:\"ad336af986121575c19bfda70fb1ab0ea9327dc307f7c0af9614d1caf3faf119\" pid:4753 exited_at:{seconds:1751880404 nanos:649186009}" Jul 7 09:26:44.649744 containerd[1593]: time="2025-07-07T09:26:44.649740490Z" level=info msg="received exit event container_id:\"ad336af986121575c19bfda70fb1ab0ea9327dc307f7c0af9614d1caf3faf119\" id:\"ad336af986121575c19bfda70fb1ab0ea9327dc307f7c0af9614d1caf3faf119\" pid:4753 exited_at:{seconds:1751880404 nanos:649186009}" Jul 7 09:26:44.660578 systemd[1]: cri-containerd-ad336af986121575c19bfda70fb1ab0ea9327dc307f7c0af9614d1caf3faf119.scope: Deactivated successfully. Jul 7 09:26:44.661305 systemd[1]: cri-containerd-ad336af986121575c19bfda70fb1ab0ea9327dc307f7c0af9614d1caf3faf119.scope: Consumed 42ms CPU time, 5.9M memory peak, 1.1M read from disk. Jul 7 09:26:44.667602 systemd[1]: Started sshd@27-10.243.72.42:22-139.178.89.65:43756.service - OpenSSH per-connection server daemon (139.178.89.65:43756). Jul 7 09:26:44.706180 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad336af986121575c19bfda70fb1ab0ea9327dc307f7c0af9614d1caf3faf119-rootfs.mount: Deactivated successfully. Jul 7 09:26:45.494139 containerd[1593]: time="2025-07-07T09:26:45.493349633Z" level=info msg="CreateContainer within sandbox \"93aed954f6720bb80016f26716dda103c2a802e706aad4d25c1b37e1c7cb3665\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 7 09:26:45.514277 containerd[1593]: time="2025-07-07T09:26:45.514213849Z" level=info msg="Container f86869b33e5ad121d83fbbfca74ce63243730ebc6b403e41a88b69f375ae1196: CDI devices from CRI Config.CDIDevices: []" Jul 7 09:26:45.525259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4138568745.mount: Deactivated successfully. Jul 7 09:26:45.527976 containerd[1593]: time="2025-07-07T09:26:45.527931044Z" level=info msg="CreateContainer within sandbox \"93aed954f6720bb80016f26716dda103c2a802e706aad4d25c1b37e1c7cb3665\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f86869b33e5ad121d83fbbfca74ce63243730ebc6b403e41a88b69f375ae1196\"" Jul 7 09:26:45.529296 containerd[1593]: time="2025-07-07T09:26:45.529260438Z" level=info msg="StartContainer for \"f86869b33e5ad121d83fbbfca74ce63243730ebc6b403e41a88b69f375ae1196\"" Jul 7 09:26:45.532976 containerd[1593]: time="2025-07-07T09:26:45.532736322Z" level=info msg="connecting to shim f86869b33e5ad121d83fbbfca74ce63243730ebc6b403e41a88b69f375ae1196" address="unix:///run/containerd/s/e832e0d2d9d119d28787703ca9126a1fb114397c6b703c25768120f9d75ceddf" protocol=ttrpc version=3 Jul 7 09:26:45.577352 systemd[1]: Started cri-containerd-f86869b33e5ad121d83fbbfca74ce63243730ebc6b403e41a88b69f375ae1196.scope - libcontainer container f86869b33e5ad121d83fbbfca74ce63243730ebc6b403e41a88b69f375ae1196. Jul 7 09:26:45.590569 sshd[4770]: Accepted publickey for core from 139.178.89.65 port 43756 ssh2: RSA SHA256:eKfS1YMivy3ccQy1mPE3XRaX++qubulWdfUjIr34/68 Jul 7 09:26:45.592733 sshd-session[4770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 09:26:45.600873 systemd-logind[1566]: New session 30 of user core. 
Jul 7 09:26:45.608328 systemd[1]: Started session-30.scope - Session 30 of User core. Jul 7 09:26:45.647174 containerd[1593]: time="2025-07-07T09:26:45.645670702Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f86869b33e5ad121d83fbbfca74ce63243730ebc6b403e41a88b69f375ae1196\" id:\"f86869b33e5ad121d83fbbfca74ce63243730ebc6b403e41a88b69f375ae1196\" pid:4799 exited_at:{seconds:1751880405 nanos:644440053}" Jul 7 09:26:45.645954 systemd[1]: cri-containerd-f86869b33e5ad121d83fbbfca74ce63243730ebc6b403e41a88b69f375ae1196.scope: Deactivated successfully. Jul 7 09:26:45.650741 containerd[1593]: time="2025-07-07T09:26:45.650667062Z" level=info msg="received exit event container_id:\"f86869b33e5ad121d83fbbfca74ce63243730ebc6b403e41a88b69f375ae1196\" id:\"f86869b33e5ad121d83fbbfca74ce63243730ebc6b403e41a88b69f375ae1196\" pid:4799 exited_at:{seconds:1751880405 nanos:644440053}" Jul 7 09:26:45.663494 containerd[1593]: time="2025-07-07T09:26:45.663085820Z" level=info msg="StartContainer for \"f86869b33e5ad121d83fbbfca74ce63243730ebc6b403e41a88b69f375ae1196\" returns successfully" Jul 7 09:26:45.685974 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f86869b33e5ad121d83fbbfca74ce63243730ebc6b403e41a88b69f375ae1196-rootfs.mount: Deactivated successfully. Jul 7 09:26:46.504646 containerd[1593]: time="2025-07-07T09:26:46.503900673Z" level=info msg="CreateContainer within sandbox \"93aed954f6720bb80016f26716dda103c2a802e706aad4d25c1b37e1c7cb3665\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 7 09:26:46.526554 containerd[1593]: time="2025-07-07T09:26:46.526478798Z" level=info msg="Container b6f777faaea7eca84cafdac60b0c15f138678196f6963a199c452e37be812bad: CDI devices from CRI Config.CDIDevices: []" Jul 7 09:26:46.543940 containerd[1593]: time="2025-07-07T09:26:46.542687691Z" level=info msg="CreateContainer within sandbox \"93aed954f6720bb80016f26716dda103c2a802e706aad4d25c1b37e1c7cb3665\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b6f777faaea7eca84cafdac60b0c15f138678196f6963a199c452e37be812bad\"" Jul 7 09:26:46.547436 containerd[1593]: time="2025-07-07T09:26:46.547353977Z" level=info msg="StartContainer for \"b6f777faaea7eca84cafdac60b0c15f138678196f6963a199c452e37be812bad\"" Jul 7 09:26:46.549174 containerd[1593]: time="2025-07-07T09:26:46.548947098Z" level=info msg="connecting to shim b6f777faaea7eca84cafdac60b0c15f138678196f6963a199c452e37be812bad" address="unix:///run/containerd/s/e832e0d2d9d119d28787703ca9126a1fb114397c6b703c25768120f9d75ceddf" protocol=ttrpc version=3 Jul 7 09:26:46.594868 systemd[1]: Started cri-containerd-b6f777faaea7eca84cafdac60b0c15f138678196f6963a199c452e37be812bad.scope - libcontainer container b6f777faaea7eca84cafdac60b0c15f138678196f6963a199c452e37be812bad. 
Jul 7 09:26:46.658259 containerd[1593]: time="2025-07-07T09:26:46.658071375Z" level=info msg="StartContainer for \"b6f777faaea7eca84cafdac60b0c15f138678196f6963a199c452e37be812bad\" returns successfully" Jul 7 09:26:46.794787 containerd[1593]: time="2025-07-07T09:26:46.793557284Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b6f777faaea7eca84cafdac60b0c15f138678196f6963a199c452e37be812bad\" id:\"9b4c4985c7bc368939dcda18a0563ec8153d7c5eb8a4676ad56f9f49318f6de9\" pid:4872 exited_at:{seconds:1751880406 nanos:791428570}" Jul 7 09:26:47.411932 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Jul 7 09:26:47.537075 kubelet[2879]: I0707 09:26:47.536795 2879 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gqbbq" podStartSLOduration=5.536775305 podStartE2EDuration="5.536775305s" podCreationTimestamp="2025-07-07 09:26:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 09:26:47.53599957 +0000 UTC m=+159.946107989" watchObservedRunningTime="2025-07-07 09:26:47.536775305 +0000 UTC m=+159.946883702" Jul 7 09:26:48.642003 containerd[1593]: time="2025-07-07T09:26:48.641880050Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b6f777faaea7eca84cafdac60b0c15f138678196f6963a199c452e37be812bad\" id:\"3ed5b1f4384e7a145c390981d483b82353be09498707ccc480c10a299a259473\" pid:4947 exit_status:1 exited_at:{seconds:1751880408 nanos:641378481}" Jul 7 09:26:50.084961 systemd[1]: Started sshd@28-10.243.72.42:22-45.78.192.226:50568.service - OpenSSH per-connection server daemon (45.78.192.226:50568). Jul 7 09:26:50.859500 containerd[1593]: time="2025-07-07T09:26:50.859274118Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b6f777faaea7eca84cafdac60b0c15f138678196f6963a199c452e37be812bad\" id:\"84469571e0042ed58ad9dae7edbcd5fa48556d6c91809e3664af3553ed869487\" pid:5301 exit_status:1 exited_at:{seconds:1751880410 nanos:857501200}" Jul 7 09:26:51.191500 systemd-networkd[1516]: lxc_health: Link UP Jul 7 09:26:51.193855 systemd-networkd[1516]: lxc_health: Gained carrier Jul 7 09:26:52.324438 systemd-networkd[1516]: lxc_health: Gained IPv6LL Jul 7 09:26:52.913142 sshd[5155]: Connection closed by authenticating user root 45.78.192.226 port 50568 [preauth] Jul 7 09:26:52.919336 systemd[1]: sshd@28-10.243.72.42:22-45.78.192.226:50568.service: Deactivated successfully. Jul 7 09:26:53.117786 systemd[1]: Started sshd@29-10.243.72.42:22-45.78.192.226:50574.service - OpenSSH per-connection server daemon (45.78.192.226:50574). Jul 7 09:26:53.161003 containerd[1593]: time="2025-07-07T09:26:53.160915984Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b6f777faaea7eca84cafdac60b0c15f138678196f6963a199c452e37be812bad\" id:\"f19aeda35cc17a73a8ac5b837a1f3eec255f8710cdafa3b443ecdcf35ca47a5b\" pid:5437 exited_at:{seconds:1751880413 nanos:159759080}" Jul 7 09:26:54.203260 sshd[5448]: Invalid user admin from 45.78.192.226 port 50574 Jul 7 09:26:54.392868 sshd[5448]: Connection closed by invalid user admin 45.78.192.226 port 50574 [preauth] Jul 7 09:26:54.395675 systemd[1]: sshd@29-10.243.72.42:22-45.78.192.226:50574.service: Deactivated successfully. Jul 7 09:26:54.594912 systemd[1]: Started sshd@30-10.243.72.42:22-45.78.192.226:50582.service - OpenSSH per-connection server daemon (45.78.192.226:50582). 
Jul 7 09:26:55.330998 containerd[1593]: time="2025-07-07T09:26:55.330693774Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b6f777faaea7eca84cafdac60b0c15f138678196f6963a199c452e37be812bad\" id:\"7dcb27493c64323a2b3832354756f7f8fdc24d2051a5148ebe54eb02ef536181\" pid:5473 exited_at:{seconds:1751880415 nanos:330180924}" Jul 7 09:26:55.389392 sshd[5455]: Invalid user esuser from 45.78.192.226 port 50582 Jul 7 09:26:55.579111 sshd[5455]: Connection closed by invalid user esuser 45.78.192.226 port 50582 [preauth] Jul 7 09:26:55.581299 systemd[1]: sshd@30-10.243.72.42:22-45.78.192.226:50582.service: Deactivated successfully. Jul 7 09:26:55.783308 systemd[1]: Started sshd@31-10.243.72.42:22-45.78.192.226:50594.service - OpenSSH per-connection server daemon (45.78.192.226:50594). Jul 7 09:26:56.781359 sshd[5487]: Connection closed by authenticating user root 45.78.192.226 port 50594 [preauth] Jul 7 09:26:56.776147 systemd[1]: sshd@31-10.243.72.42:22-45.78.192.226:50594.service: Deactivated successfully. Jul 7 09:26:57.501891 containerd[1593]: time="2025-07-07T09:26:57.501584832Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b6f777faaea7eca84cafdac60b0c15f138678196f6963a199c452e37be812bad\" id:\"00a5b4a994f33d51d6e1072cc46bef6cf2b812c363aa5eef91b4708aac1afb1f\" pid:5511 exited_at:{seconds:1751880417 nanos:500978010}" Jul 7 09:26:57.653347 sshd[4806]: Connection closed by 139.178.89.65 port 43756 Jul 7 09:26:57.655080 sshd-session[4770]: pam_unix(sshd:session): session closed for user core Jul 7 09:26:57.662460 systemd[1]: sshd@27-10.243.72.42:22-139.178.89.65:43756.service: Deactivated successfully. Jul 7 09:26:57.665665 systemd[1]: session-30.scope: Deactivated successfully. Jul 7 09:26:57.667887 systemd-logind[1566]: Session 30 logged out. Waiting for processes to exit. Jul 7 09:26:57.670511 systemd-logind[1566]: Removed session 30. Jul 7 09:27:00.015422 systemd[1]: Started sshd@32-10.243.72.42:22-45.78.192.226:50606.service - OpenSSH per-connection server daemon (45.78.192.226:50606).