May 27 04:23:48.971674 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 27 01:09:43 -00 2025 May 27 04:23:48.971739 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6 May 27 04:23:48.971766 kernel: BIOS-provided physical RAM map: May 27 04:23:48.971777 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 27 04:23:48.971787 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 27 04:23:48.971797 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 27 04:23:48.971813 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable May 27 04:23:48.971825 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved May 27 04:23:48.971835 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 27 04:23:48.971846 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved May 27 04:23:48.971861 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 27 04:23:48.971872 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 27 04:23:48.971882 kernel: NX (Execute Disable) protection: active May 27 04:23:48.971893 kernel: APIC: Static calls initialized May 27 04:23:48.971905 kernel: SMBIOS 2.8 present. May 27 04:23:48.971917 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 May 27 04:23:48.971933 kernel: DMI: Memory slots populated: 1/1 May 27 04:23:48.971944 kernel: Hypervisor detected: KVM May 27 04:23:48.971956 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 27 04:23:48.971985 kernel: kvm-clock: using sched offset of 5599366043 cycles May 27 04:23:48.972001 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 27 04:23:48.972013 kernel: tsc: Detected 2499.998 MHz processor May 27 04:23:48.972025 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 27 04:23:48.972037 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 27 04:23:48.972049 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 May 27 04:23:48.972067 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs May 27 04:23:48.972078 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 27 04:23:48.972090 kernel: Using GB pages for direct mapping May 27 04:23:48.972101 kernel: ACPI: Early table checksum verification disabled May 27 04:23:48.972113 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) May 27 04:23:48.972125 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 04:23:48.972136 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 27 04:23:48.972148 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 04:23:48.972159 kernel: ACPI: FACS 0x000000007FFDFD40 000040 May 27 04:23:48.972176 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 04:23:48.972188 
kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 04:23:48.972199 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 04:23:48.972211 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 27 04:23:48.972223 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] May 27 04:23:48.972235 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] May 27 04:23:48.972252 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] May 27 04:23:48.972269 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] May 27 04:23:48.972281 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] May 27 04:23:48.972293 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] May 27 04:23:48.972305 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] May 27 04:23:48.972317 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] May 27 04:23:48.972329 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] May 27 04:23:48.972341 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug May 27 04:23:48.972357 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff] May 27 04:23:48.972370 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff] May 27 04:23:48.972382 kernel: Zone ranges: May 27 04:23:48.972394 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 27 04:23:48.972406 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] May 27 04:23:48.972418 kernel: Normal empty May 27 04:23:48.972430 kernel: Device empty May 27 04:23:48.972442 kernel: Movable zone start for each node May 27 04:23:48.972454 kernel: Early memory node ranges May 27 04:23:48.972470 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 27 04:23:48.972482 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] May 27 04:23:48.972494 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] May 27 04:23:48.972506 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 27 04:23:48.972518 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 27 04:23:48.972530 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges May 27 04:23:48.972542 kernel: ACPI: PM-Timer IO Port: 0x608 May 27 04:23:48.972554 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 27 04:23:48.972566 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 27 04:23:48.972578 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 27 04:23:48.972595 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 27 04:23:48.972607 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 27 04:23:48.972630 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 27 04:23:48.972642 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 27 04:23:48.972654 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 27 04:23:48.972666 kernel: TSC deadline timer available May 27 04:23:48.972678 kernel: CPU topo: Max. logical packages: 16 May 27 04:23:48.972690 kernel: CPU topo: Max. logical dies: 16 May 27 04:23:48.972703 kernel: CPU topo: Max. dies per package: 1 May 27 04:23:48.972720 kernel: CPU topo: Max. threads per core: 1 May 27 04:23:48.972732 kernel: CPU topo: Num. 
cores per package: 1 May 27 04:23:48.972744 kernel: CPU topo: Num. threads per package: 1 May 27 04:23:48.972781 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs May 27 04:23:48.972796 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() May 27 04:23:48.972808 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices May 27 04:23:48.972821 kernel: Booting paravirtualized kernel on KVM May 27 04:23:48.972833 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 27 04:23:48.972845 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 May 27 04:23:48.972863 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 May 27 04:23:48.972876 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 May 27 04:23:48.972888 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 May 27 04:23:48.972900 kernel: kvm-guest: PV spinlocks enabled May 27 04:23:48.972912 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 27 04:23:48.972926 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6 May 27 04:23:48.972938 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 27 04:23:48.972950 kernel: random: crng init done May 27 04:23:48.972967 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 27 04:23:48.972999 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 27 04:23:48.973022 kernel: Fallback order for Node 0: 0 May 27 04:23:48.973035 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154 May 27 04:23:48.973047 kernel: Policy zone: DMA32 May 27 04:23:48.973059 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 27 04:23:48.973071 kernel: software IO TLB: area num 16. May 27 04:23:48.973083 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 May 27 04:23:48.973096 kernel: Kernel/User page tables isolation: enabled May 27 04:23:48.973114 kernel: ftrace: allocating 40081 entries in 157 pages May 27 04:23:48.973126 kernel: ftrace: allocated 157 pages with 5 groups May 27 04:23:48.973138 kernel: Dynamic Preempt: voluntary May 27 04:23:48.973151 kernel: rcu: Preemptible hierarchical RCU implementation. May 27 04:23:48.973164 kernel: rcu: RCU event tracing is enabled. May 27 04:23:48.973176 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. May 27 04:23:48.973188 kernel: Trampoline variant of Tasks RCU enabled. May 27 04:23:48.973201 kernel: Rude variant of Tasks RCU enabled. May 27 04:23:48.973213 kernel: Tracing variant of Tasks RCU enabled. May 27 04:23:48.973225 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. May 27 04:23:48.973242 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 May 27 04:23:48.973254 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. May 27 04:23:48.973267 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. 
May 27 04:23:48.973279 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. May 27 04:23:48.973312 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 May 27 04:23:48.973325 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 27 04:23:48.973352 kernel: Console: colour VGA+ 80x25 May 27 04:23:48.973365 kernel: printk: legacy console [tty0] enabled May 27 04:23:48.973378 kernel: printk: legacy console [ttyS0] enabled May 27 04:23:48.973390 kernel: ACPI: Core revision 20240827 May 27 04:23:48.973403 kernel: APIC: Switch to symmetric I/O mode setup May 27 04:23:48.973420 kernel: x2apic enabled May 27 04:23:48.973433 kernel: APIC: Switched APIC routing to: physical x2apic May 27 04:23:48.973446 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns May 27 04:23:48.973459 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) May 27 04:23:48.973472 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 27 04:23:48.973489 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 May 27 04:23:48.973502 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 May 27 04:23:48.973514 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 27 04:23:48.973526 kernel: Spectre V2 : Mitigation: Retpolines May 27 04:23:48.973539 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 27 04:23:48.973551 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls May 27 04:23:48.973564 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 27 04:23:48.973576 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl May 27 04:23:48.973589 kernel: MDS: Mitigation: Clear CPU buffers May 27 04:23:48.973601 kernel: MMIO Stale Data: Unknown: No mitigations May 27 04:23:48.973636 kernel: SRBDS: Unknown: Dependent on hypervisor status May 27 04:23:48.973659 kernel: ITS: Mitigation: Aligned branch/return thunks May 27 04:23:48.973672 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 27 04:23:48.973685 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 27 04:23:48.973697 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 27 04:23:48.973710 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 27 04:23:48.973722 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. May 27 04:23:48.973735 kernel: Freeing SMP alternatives memory: 32K May 27 04:23:48.973747 kernel: pid_max: default: 32768 minimum: 301 May 27 04:23:48.973759 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima May 27 04:23:48.973772 kernel: landlock: Up and running. May 27 04:23:48.973784 kernel: SELinux: Initializing. May 27 04:23:48.973802 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 27 04:23:48.973814 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) May 27 04:23:48.973827 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) May 27 04:23:48.973840 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. 
May 27 04:23:48.973853 kernel: signal: max sigframe size: 1776 May 27 04:23:48.973865 kernel: rcu: Hierarchical SRCU implementation. May 27 04:23:48.973878 kernel: rcu: Max phase no-delay instances is 400. May 27 04:23:48.973891 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level May 27 04:23:48.973904 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 27 04:23:48.973921 kernel: smp: Bringing up secondary CPUs ... May 27 04:23:48.973934 kernel: smpboot: x86: Booting SMP configuration: May 27 04:23:48.973947 kernel: .... node #0, CPUs: #1 May 27 04:23:48.973959 kernel: smp: Brought up 1 node, 2 CPUs May 27 04:23:48.974012 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) May 27 04:23:48.974028 kernel: Memory: 1895676K/2096616K available (14336K kernel code, 2430K rwdata, 9952K rodata, 54416K init, 2552K bss, 194924K reserved, 0K cma-reserved) May 27 04:23:48.974041 kernel: devtmpfs: initialized May 27 04:23:48.974054 kernel: x86/mm: Memory block size: 128MB May 27 04:23:48.974067 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 27 04:23:48.974086 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) May 27 04:23:48.974099 kernel: pinctrl core: initialized pinctrl subsystem May 27 04:23:48.974112 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 27 04:23:48.974124 kernel: audit: initializing netlink subsys (disabled) May 27 04:23:48.974137 kernel: audit: type=2000 audit(1748319825.696:1): state=initialized audit_enabled=0 res=1 May 27 04:23:48.974150 kernel: thermal_sys: Registered thermal governor 'step_wise' May 27 04:23:48.974162 kernel: thermal_sys: Registered thermal governor 'user_space' May 27 04:23:48.974175 kernel: cpuidle: using governor menu May 27 04:23:48.974188 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 27 04:23:48.974205 kernel: dca service started, version 1.12.1 May 27 04:23:48.974218 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] May 27 04:23:48.974231 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry May 27 04:23:48.974244 kernel: PCI: Using configuration type 1 for base access May 27 04:23:48.974257 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 27 04:23:48.974270 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 27 04:23:48.974282 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 27 04:23:48.974295 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 27 04:23:48.974308 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 27 04:23:48.974325 kernel: ACPI: Added _OSI(Module Device) May 27 04:23:48.974338 kernel: ACPI: Added _OSI(Processor Device) May 27 04:23:48.974351 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 27 04:23:48.974364 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 27 04:23:48.974376 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 27 04:23:48.974389 kernel: ACPI: Interpreter enabled May 27 04:23:48.974401 kernel: ACPI: PM: (supports S0 S5) May 27 04:23:48.974414 kernel: ACPI: Using IOAPIC for interrupt routing May 27 04:23:48.974427 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 27 04:23:48.974444 kernel: PCI: Using E820 reservations for host bridge windows May 27 04:23:48.974456 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 27 04:23:48.974469 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 27 04:23:48.974865 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 27 04:23:48.975063 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 27 04:23:48.975227 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 27 04:23:48.975247 kernel: PCI host bridge to bus 0000:00 May 27 04:23:48.975427 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 27 04:23:48.975585 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 27 04:23:48.975748 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 27 04:23:48.975894 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] May 27 04:23:48.976095 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 27 04:23:48.976244 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] May 27 04:23:48.976389 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 27 04:23:48.976592 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint May 27 04:23:48.976807 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint May 27 04:23:48.976988 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref] May 27 04:23:48.977173 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff] May 27 04:23:48.977335 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref] May 27 04:23:48.977495 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 27 04:23:48.977705 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port May 27 04:23:48.977877 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff] May 27 04:23:48.978085 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] May 27 04:23:48.978249 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] May 27 04:23:48.978411 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] May 27 04:23:48.978591 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port May 27 04:23:48.978770 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff] May 27 
04:23:48.978931 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] May 27 04:23:48.981544 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] May 27 04:23:48.981732 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] May 27 04:23:48.981921 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port May 27 04:23:48.982170 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff] May 27 04:23:48.982336 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] May 27 04:23:48.982500 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] May 27 04:23:48.982679 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] May 27 04:23:48.982860 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port May 27 04:23:48.983056 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff] May 27 04:23:48.983224 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] May 27 04:23:48.983384 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] May 27 04:23:48.983546 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] May 27 04:23:48.983735 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port May 27 04:23:48.983898 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff] May 27 04:23:48.985611 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] May 27 04:23:48.985799 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] May 27 04:23:48.985964 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] May 27 04:23:48.986160 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port May 27 04:23:48.986324 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff] May 27 04:23:48.986486 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] May 27 04:23:48.986660 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] May 27 04:23:48.986830 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] May 27 04:23:48.987040 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port May 27 04:23:48.987209 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff] May 27 04:23:48.987370 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] May 27 04:23:48.987530 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] May 27 04:23:48.987704 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] May 27 04:23:48.987883 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port May 27 04:23:48.994127 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff] May 27 04:23:48.994318 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] May 27 04:23:48.994488 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] May 27 04:23:48.994672 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] May 27 04:23:48.994855 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint May 27 04:23:48.995064 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df] May 27 04:23:48.995241 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff] May 27 04:23:48.995405 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref] May 27 04:23:48.995568 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref] May 27 04:23:48.995758 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint May 27 04:23:48.995924 kernel: pci 0000:00:04.0: BAR 0 
[io 0xc000-0xc07f] May 27 04:23:48.996139 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfea5a000-0xfea5afff] May 27 04:23:48.996302 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref] May 27 04:23:48.996484 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint May 27 04:23:48.996662 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 27 04:23:48.996845 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint May 27 04:23:48.997027 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff] May 27 04:23:48.997191 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff] May 27 04:23:48.997362 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint May 27 04:23:48.997524 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] May 27 04:23:48.997727 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge May 27 04:23:48.997897 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit] May 27 04:23:49.001207 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] May 27 04:23:49.001397 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] May 27 04:23:49.001567 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] May 27 04:23:49.001764 kernel: pci_bus 0000:02: extended config space not accessible May 27 04:23:49.001998 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint May 27 04:23:49.002211 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f] May 27 04:23:49.002383 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] May 27 04:23:49.002563 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint May 27 04:23:49.002748 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit] May 27 04:23:49.002915 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] May 27 04:23:49.003122 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint May 27 04:23:49.003301 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref] May 27 04:23:49.003474 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] May 27 04:23:49.003655 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] May 27 04:23:49.003822 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] May 27 04:23:49.004044 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] May 27 04:23:49.004214 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] May 27 04:23:49.004378 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] May 27 04:23:49.004406 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 27 04:23:49.004420 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 27 04:23:49.004433 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 27 04:23:49.004446 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 27 04:23:49.004459 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 27 04:23:49.004472 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 27 04:23:49.004484 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 27 04:23:49.004497 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 27 04:23:49.004515 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 27 04:23:49.004528 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 27 04:23:49.004541 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 27 04:23:49.004554 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 27 
04:23:49.004567 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 27 04:23:49.004580 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 27 04:23:49.004593 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 27 04:23:49.004606 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 27 04:23:49.004631 kernel: iommu: Default domain type: Translated May 27 04:23:49.004650 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 27 04:23:49.004663 kernel: PCI: Using ACPI for IRQ routing May 27 04:23:49.004676 kernel: PCI: pci_cache_line_size set to 64 bytes May 27 04:23:49.004688 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 27 04:23:49.004701 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] May 27 04:23:49.004863 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 27 04:23:49.005047 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 27 04:23:49.005210 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 27 04:23:49.005230 kernel: vgaarb: loaded May 27 04:23:49.005250 kernel: clocksource: Switched to clocksource kvm-clock May 27 04:23:49.005263 kernel: VFS: Disk quotas dquot_6.6.0 May 27 04:23:49.005277 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 27 04:23:49.005289 kernel: pnp: PnP ACPI init May 27 04:23:49.005459 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved May 27 04:23:49.005481 kernel: pnp: PnP ACPI: found 5 devices May 27 04:23:49.005494 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 27 04:23:49.005507 kernel: NET: Registered PF_INET protocol family May 27 04:23:49.005527 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) May 27 04:23:49.005540 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) May 27 04:23:49.005553 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 27 04:23:49.005566 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) May 27 04:23:49.005579 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) May 27 04:23:49.005592 kernel: TCP: Hash tables configured (established 16384 bind 16384) May 27 04:23:49.005604 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) May 27 04:23:49.005632 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) May 27 04:23:49.005651 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 27 04:23:49.005665 kernel: NET: Registered PF_XDP protocol family May 27 04:23:49.005827 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 May 27 04:23:49.006004 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 27 04:23:49.006168 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 27 04:23:49.006329 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 May 27 04:23:49.006490 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 May 27 04:23:49.006666 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 May 27 04:23:49.006829 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 May 27 04:23:49.007017 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 
1000 May 27 04:23:49.007180 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned May 27 04:23:49.007341 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned May 27 04:23:49.007502 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned May 27 04:23:49.007676 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned May 27 04:23:49.007839 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned May 27 04:23:49.008036 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned May 27 04:23:49.008207 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned May 27 04:23:49.008367 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned May 27 04:23:49.008536 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] May 27 04:23:49.008745 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] May 27 04:23:49.008910 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] May 27 04:23:49.009113 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] May 27 04:23:49.009278 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] May 27 04:23:49.009439 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] May 27 04:23:49.009601 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] May 27 04:23:49.009786 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] May 27 04:23:49.009948 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] May 27 04:23:49.010128 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] May 27 04:23:49.010306 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] May 27 04:23:49.010480 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] May 27 04:23:49.010657 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] May 27 04:23:49.010820 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] May 27 04:23:49.011011 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] May 27 04:23:49.011176 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] May 27 04:23:49.011338 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] May 27 04:23:49.011501 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] May 27 04:23:49.011686 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] May 27 04:23:49.011848 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] May 27 04:23:49.012026 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] May 27 04:23:49.012189 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] May 27 04:23:49.012350 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] May 27 04:23:49.012532 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] May 27 04:23:49.012710 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] May 27 04:23:49.012873 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] May 27 04:23:49.013052 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] May 27 04:23:49.013215 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] May 27 04:23:49.013385 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] May 27 04:23:49.013550 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] May 27 04:23:49.013728 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] May 27 04:23:49.013891 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] May 27 04:23:49.014071 kernel: pci 0000:00:02.7: bridge window 
[mem 0xfdc00000-0xfddfffff] May 27 04:23:49.014233 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] May 27 04:23:49.014389 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 27 04:23:49.014537 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 27 04:23:49.014700 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 27 04:23:49.014857 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] May 27 04:23:49.015023 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 27 04:23:49.015177 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] May 27 04:23:49.015345 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] May 27 04:23:49.015499 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] May 27 04:23:49.015665 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] May 27 04:23:49.015839 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] May 27 04:23:49.016029 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] May 27 04:23:49.016186 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] May 27 04:23:49.016338 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] May 27 04:23:49.016505 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] May 27 04:23:49.016672 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] May 27 04:23:49.016824 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] May 27 04:23:49.017013 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] May 27 04:23:49.017169 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] May 27 04:23:49.017320 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] May 27 04:23:49.017488 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] May 27 04:23:49.017654 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] May 27 04:23:49.017807 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] May 27 04:23:49.017985 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] May 27 04:23:49.018151 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] May 27 04:23:49.018303 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] May 27 04:23:49.018463 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] May 27 04:23:49.018625 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] May 27 04:23:49.018781 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] May 27 04:23:49.018942 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] May 27 04:23:49.019119 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] May 27 04:23:49.019279 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] May 27 04:23:49.019301 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 27 04:23:49.019315 kernel: PCI: CLS 0 bytes, default 64 May 27 04:23:49.019329 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 27 04:23:49.019343 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) May 27 04:23:49.019357 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 27 04:23:49.019371 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns May 27 04:23:49.019384 kernel: Initialise system trusted keyrings May 27 04:23:49.019405 kernel: workingset: 
timestamp_bits=39 max_order=19 bucket_order=0 May 27 04:23:49.019419 kernel: Key type asymmetric registered May 27 04:23:49.019437 kernel: Asymmetric key parser 'x509' registered May 27 04:23:49.019450 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 27 04:23:49.019464 kernel: io scheduler mq-deadline registered May 27 04:23:49.019477 kernel: io scheduler kyber registered May 27 04:23:49.019491 kernel: io scheduler bfq registered May 27 04:23:49.019670 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 May 27 04:23:49.019837 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 May 27 04:23:49.020032 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 27 04:23:49.020297 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 May 27 04:23:49.020544 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 May 27 04:23:49.020836 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 27 04:23:49.021081 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 May 27 04:23:49.021246 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 May 27 04:23:49.021418 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 27 04:23:49.021582 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 May 27 04:23:49.021760 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 May 27 04:23:49.021924 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 27 04:23:49.022113 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 May 27 04:23:49.022277 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 May 27 04:23:49.022449 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 27 04:23:49.022625 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 May 27 04:23:49.022799 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 May 27 04:23:49.022964 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 27 04:23:49.023148 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 May 27 04:23:49.023311 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 May 27 04:23:49.023481 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 27 04:23:49.023658 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 May 27 04:23:49.023822 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 May 27 04:23:49.024007 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 27 04:23:49.024029 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 27 04:23:49.024044 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 27 04:23:49.024064 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 27 04:23:49.024078 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 27 04:23:49.024092 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 27 04:23:49.024106 kernel: i8042: PNP: 
PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 27 04:23:49.024119 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 27 04:23:49.024132 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 27 04:23:49.024146 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 27 04:23:49.024319 kernel: rtc_cmos 00:03: RTC can wake from S4 May 27 04:23:49.024474 kernel: rtc_cmos 00:03: registered as rtc0 May 27 04:23:49.024645 kernel: rtc_cmos 00:03: setting system clock to 2025-05-27T04:23:48 UTC (1748319828) May 27 04:23:49.024799 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram May 27 04:23:49.024819 kernel: intel_pstate: CPU model not supported May 27 04:23:49.024833 kernel: NET: Registered PF_INET6 protocol family May 27 04:23:49.024847 kernel: Segment Routing with IPv6 May 27 04:23:49.024860 kernel: In-situ OAM (IOAM) with IPv6 May 27 04:23:49.024874 kernel: NET: Registered PF_PACKET protocol family May 27 04:23:49.024888 kernel: Key type dns_resolver registered May 27 04:23:49.024907 kernel: IPI shorthand broadcast: enabled May 27 04:23:49.024921 kernel: sched_clock: Marking stable (3527003660, 229464240)->(3891546551, -135078651) May 27 04:23:49.024935 kernel: registered taskstats version 1 May 27 04:23:49.024948 kernel: Loading compiled-in X.509 certificates May 27 04:23:49.024962 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: ba9eddccb334a70147f3ddfe4fbde029feaa991d' May 27 04:23:49.024992 kernel: Demotion targets for Node 0: null May 27 04:23:49.025006 kernel: Key type .fscrypt registered May 27 04:23:49.025019 kernel: Key type fscrypt-provisioning registered May 27 04:23:49.025033 kernel: ima: No TPM chip found, activating TPM-bypass! May 27 04:23:49.025052 kernel: ima: Allocated hash algorithm: sha1 May 27 04:23:49.025066 kernel: ima: No architecture policies found May 27 04:23:49.025079 kernel: clk: Disabling unused clocks May 27 04:23:49.025092 kernel: Warning: unable to open an initial console. May 27 04:23:49.025106 kernel: Freeing unused kernel image (initmem) memory: 54416K May 27 04:23:49.025119 kernel: Write protecting the kernel read-only data: 24576k May 27 04:23:49.025137 kernel: Freeing unused kernel image (rodata/data gap) memory: 288K May 27 04:23:49.025152 kernel: Run /init as init process May 27 04:23:49.025165 kernel: with arguments: May 27 04:23:49.025183 kernel: /init May 27 04:23:49.025200 kernel: with environment: May 27 04:23:49.025214 kernel: HOME=/ May 27 04:23:49.025228 kernel: TERM=linux May 27 04:23:49.025241 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 27 04:23:49.025264 systemd[1]: Successfully made /usr/ read-only. May 27 04:23:49.025283 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 04:23:49.025304 systemd[1]: Detected virtualization kvm. May 27 04:23:49.025318 systemd[1]: Detected architecture x86-64. May 27 04:23:49.025332 systemd[1]: Running in initrd. May 27 04:23:49.025352 systemd[1]: No hostname configured, using default hostname. May 27 04:23:49.025368 systemd[1]: Hostname set to . May 27 04:23:49.025391 systemd[1]: Initializing machine ID from VM UUID. 
May 27 04:23:49.025407 systemd[1]: Queued start job for default target initrd.target. May 27 04:23:49.025421 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 04:23:49.025436 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 04:23:49.025457 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 27 04:23:49.025471 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 04:23:49.025486 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 27 04:23:49.025501 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 27 04:23:49.025528 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 27 04:23:49.025543 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 27 04:23:49.025563 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 04:23:49.025577 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 04:23:49.025592 systemd[1]: Reached target paths.target - Path Units. May 27 04:23:49.025606 systemd[1]: Reached target slices.target - Slice Units. May 27 04:23:49.025631 systemd[1]: Reached target swap.target - Swaps. May 27 04:23:49.025646 systemd[1]: Reached target timers.target - Timer Units. May 27 04:23:49.025660 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 27 04:23:49.025675 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 04:23:49.025689 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 27 04:23:49.025709 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 27 04:23:49.025724 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 04:23:49.025738 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 04:23:49.025752 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 04:23:49.025766 systemd[1]: Reached target sockets.target - Socket Units. May 27 04:23:49.025780 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 27 04:23:49.025794 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 04:23:49.025809 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 27 04:23:49.025828 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). May 27 04:23:49.025842 systemd[1]: Starting systemd-fsck-usr.service... May 27 04:23:49.025857 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 04:23:49.025871 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 04:23:49.025885 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 04:23:49.025899 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 27 04:23:49.025986 systemd-journald[230]: Collecting audit messages is disabled. May 27 04:23:49.026024 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
May 27 04:23:49.026039 systemd[1]: Finished systemd-fsck-usr.service. May 27 04:23:49.026060 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 27 04:23:49.026075 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 27 04:23:49.026089 kernel: Bridge firewalling registered May 27 04:23:49.026104 systemd-journald[230]: Journal started May 27 04:23:49.026136 systemd-journald[230]: Runtime Journal (/run/log/journal/00923123c62044c8a85fe93ff41448b4) is 4.7M, max 38.2M, 33.4M free. May 27 04:23:48.967453 systemd-modules-load[231]: Inserted module 'overlay' May 27 04:23:49.086163 systemd[1]: Started systemd-journald.service - Journal Service. May 27 04:23:49.025549 systemd-modules-load[231]: Inserted module 'br_netfilter' May 27 04:23:49.085237 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 04:23:49.090560 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 04:23:49.095603 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 27 04:23:49.099157 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 04:23:49.105056 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 04:23:49.108183 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 04:23:49.114593 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 04:23:49.126368 systemd-tmpfiles[248]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. May 27 04:23:49.129441 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 04:23:49.136435 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 04:23:49.139571 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 04:23:49.145335 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 04:23:49.146388 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 04:23:49.151122 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 27 04:23:49.176419 dracut-cmdline[268]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f6c186658a19d5a08471ef76df75f82494b37b46908f9237b2c3cf497da860c6 May 27 04:23:49.199764 systemd-resolved[265]: Positive Trust Anchors: May 27 04:23:49.199795 systemd-resolved[265]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 04:23:49.199840 systemd-resolved[265]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 04:23:49.204778 systemd-resolved[265]: Defaulting to hostname 'linux'. May 27 04:23:49.206725 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 04:23:49.207681 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 04:23:49.297042 kernel: SCSI subsystem initialized May 27 04:23:49.309017 kernel: Loading iSCSI transport class v2.0-870. May 27 04:23:49.323018 kernel: iscsi: registered transport (tcp) May 27 04:23:49.349536 kernel: iscsi: registered transport (qla4xxx) May 27 04:23:49.349626 kernel: QLogic iSCSI HBA Driver May 27 04:23:49.376837 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 04:23:49.404718 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 04:23:49.406409 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 04:23:49.472136 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 27 04:23:49.475264 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 27 04:23:49.538038 kernel: raid6: sse2x4 gen() 13775 MB/s May 27 04:23:49.556049 kernel: raid6: sse2x2 gen() 9606 MB/s May 27 04:23:49.574671 kernel: raid6: sse2x1 gen() 9877 MB/s May 27 04:23:49.574783 kernel: raid6: using algorithm sse2x4 gen() 13775 MB/s May 27 04:23:49.593700 kernel: raid6: .... xor() 7790 MB/s, rmw enabled May 27 04:23:49.593804 kernel: raid6: using ssse3x2 recovery algorithm May 27 04:23:49.619108 kernel: xor: automatically using best checksumming function avx May 27 04:23:49.811059 kernel: Btrfs loaded, zoned=no, fsverity=no May 27 04:23:49.819678 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 27 04:23:49.823153 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 04:23:49.861542 systemd-udevd[477]: Using default interface naming scheme 'v255'. May 27 04:23:49.871285 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 04:23:49.875592 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 27 04:23:49.904397 dracut-pre-trigger[488]: rd.md=0: removing MD RAID activation May 27 04:23:49.938637 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 27 04:23:49.942397 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 04:23:50.066498 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 04:23:50.069858 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
May 27 04:23:50.177219 kernel: ACPI: bus type USB registered May 27 04:23:50.177287 kernel: usbcore: registered new interface driver usbfs May 27 04:23:50.179146 kernel: usbcore: registered new interface driver hub May 27 04:23:50.180604 kernel: usbcore: registered new device driver usb May 27 04:23:50.196996 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues May 27 04:23:50.225064 kernel: cryptd: max_cpu_qlen set to 1000 May 27 04:23:50.240323 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) May 27 04:23:50.255857 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 27 04:23:50.255953 kernel: GPT:17805311 != 125829119 May 27 04:23:50.255994 kernel: GPT:Alternate GPT header not at the end of the disk. May 27 04:23:50.256026 kernel: GPT:17805311 != 125829119 May 27 04:23:50.256043 kernel: GPT: Use GNU Parted to correct GPT errors. May 27 04:23:50.256061 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 04:23:50.262535 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 04:23:50.263936 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 04:23:50.268121 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 27 04:23:50.272289 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 04:23:50.273473 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 27 04:23:50.284003 kernel: libata version 3.00 loaded. May 27 04:23:50.302000 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 May 27 04:23:50.302063 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller May 27 04:23:50.305381 kernel: AES CTR mode by8 optimization enabled May 27 04:23:50.316999 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 May 27 04:23:50.325416 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 May 27 04:23:50.333010 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller May 27 04:23:50.333284 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 May 27 04:23:50.333491 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed May 27 04:23:50.349132 kernel: hub 1-0:1.0: USB hub found May 27 04:23:50.349428 kernel: hub 1-0:1.0: 4 ports detected May 27 04:23:50.349647 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. May 27 04:23:50.349875 kernel: hub 2-0:1.0: USB hub found May 27 04:23:50.350121 kernel: hub 2-0:1.0: 4 ports detected May 27 04:23:50.386995 kernel: ahci 0000:00:1f.2: version 3.0 May 27 04:23:50.399482 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 27 04:23:50.399538 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode May 27 04:23:50.399789 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) May 27 04:23:50.400009 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 27 04:23:50.410497 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
May 27 04:23:50.492778 kernel: scsi host0: ahci May 27 04:23:50.493070 kernel: scsi host1: ahci May 27 04:23:50.493270 kernel: scsi host2: ahci May 27 04:23:50.493466 kernel: scsi host3: ahci May 27 04:23:50.493675 kernel: scsi host4: ahci May 27 04:23:50.493869 kernel: scsi host5: ahci May 27 04:23:50.494301 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 lpm-pol 0 May 27 04:23:50.494324 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 lpm-pol 0 May 27 04:23:50.494342 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 lpm-pol 0 May 27 04:23:50.494360 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 lpm-pol 0 May 27 04:23:50.494377 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 lpm-pol 0 May 27 04:23:50.494395 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 lpm-pol 0 May 27 04:23:50.492256 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 04:23:50.503834 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 27 04:23:50.504800 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 27 04:23:50.518475 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 27 04:23:50.531056 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 27 04:23:50.534127 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 27 04:23:50.564088 disk-uuid[640]: Primary Header is updated. May 27 04:23:50.564088 disk-uuid[640]: Secondary Entries is updated. May 27 04:23:50.564088 disk-uuid[640]: Secondary Header is updated. May 27 04:23:50.570004 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 04:23:50.578010 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 04:23:50.583057 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd May 27 04:23:50.735991 kernel: ata3: SATA link down (SStatus 0 SControl 300) May 27 04:23:50.736055 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 27 04:23:50.736074 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 27 04:23:50.737049 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 27 04:23:50.739676 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 27 04:23:50.741508 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 27 04:23:50.762993 kernel: hid: raw HID events driver (C) Jiri Kosina May 27 04:23:50.769536 kernel: usbcore: registered new interface driver usbhid May 27 04:23:50.769576 kernel: usbhid: USB HID core driver May 27 04:23:50.777033 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 May 27 04:23:50.780988 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 May 27 04:23:50.799134 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 27 04:23:50.800752 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 27 04:23:50.801646 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 04:23:50.803309 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
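The virtio disk warnings above (GPT:17805311 != 125829119, "Alternate GPT header not at the end of the disk") are the usual sign of an image built for a smaller disk and then grown: the backup GPT header is still where the old end of the disk was, which is why disk-uuid.service rewrites the secondary entries and header at the true end. The following is a minimal Python sketch of that consistency check only, assuming a readable raw device or image with 512-byte logical sectors (as reported for vda); it detects the mismatch but does not repair anything.

    # gpt_check.py - report whether a GPT's backup header sits at the last LBA.
    # Sketch only: assumes 512-byte logical sectors and a readable raw device/image.
    import struct
    import sys

    SECTOR = 512

    def gpt_backup_mismatch(path):
        with open(path, "rb") as f:
            f.seek(0, 2)                    # seek to end to learn the disk size
            last_lba = f.tell() // SECTOR - 1
            f.seek(1 * SECTOR)              # primary GPT header lives at LBA 1
            hdr = f.read(92)
        if hdr[:8] != b"EFI PART":
            raise ValueError("no GPT signature at LBA 1")
        # GPT header layout: current LBA at offset 24, backup LBA at offset 32.
        current_lba, backup_lba = struct.unpack_from("<QQ", hdr, 24)
        return backup_lba != last_lba, backup_lba, last_lba

    if __name__ == "__main__":
        mismatch, backup, last = gpt_backup_mismatch(sys.argv[1])
        print(f"backup header LBA {backup}, last LBA {last}:",
              "mismatch (disk was grown?)" if mismatch else "ok")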
May 27 04:23:50.806076 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 27 04:23:50.832266 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 27 04:23:51.582096 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 04:23:51.582800 disk-uuid[641]: The operation has completed successfully. May 27 04:23:51.637811 systemd[1]: disk-uuid.service: Deactivated successfully. May 27 04:23:51.638009 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 27 04:23:51.685342 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 27 04:23:51.719675 sh[668]: Success May 27 04:23:51.745395 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 27 04:23:51.745489 kernel: device-mapper: uevent: version 1.0.3 May 27 04:23:51.746430 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 27 04:23:51.760998 kernel: device-mapper: verity: sha256 using shash "sha256-avx" May 27 04:23:51.822475 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 27 04:23:51.828075 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 27 04:23:51.841937 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 27 04:23:51.857444 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 27 04:23:51.857502 kernel: BTRFS: device fsid f0f66fe8-3990-49eb-980e-559a3dfd3522 devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (680) May 27 04:23:51.863454 kernel: BTRFS info (device dm-0): first mount of filesystem f0f66fe8-3990-49eb-980e-559a3dfd3522 May 27 04:23:51.863496 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 27 04:23:51.865376 kernel: BTRFS info (device dm-0): using free-space-tree May 27 04:23:51.876515 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 27 04:23:51.877826 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 27 04:23:51.878963 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 27 04:23:51.880094 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 27 04:23:51.883607 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 27 04:23:51.915025 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (713) May 27 04:23:51.915099 kernel: BTRFS info (device vda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 04:23:51.918399 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 04:23:51.920303 kernel: BTRFS info (device vda6): using free-space-tree May 27 04:23:51.934013 kernel: BTRFS info (device vda6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 04:23:51.934767 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 27 04:23:51.939142 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 27 04:23:52.032868 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 04:23:52.038212 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
May 27 04:23:52.090213 systemd-networkd[850]: lo: Link UP May 27 04:23:52.090226 systemd-networkd[850]: lo: Gained carrier May 27 04:23:52.092411 systemd-networkd[850]: Enumeration completed May 27 04:23:52.092557 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 04:23:52.093485 systemd[1]: Reached target network.target - Network. May 27 04:23:52.094530 systemd-networkd[850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 04:23:52.094537 systemd-networkd[850]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 04:23:52.096421 systemd-networkd[850]: eth0: Link UP May 27 04:23:52.096426 systemd-networkd[850]: eth0: Gained carrier May 27 04:23:52.096439 systemd-networkd[850]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 04:23:52.149063 systemd-networkd[850]: eth0: DHCPv4 address 10.244.19.66/30, gateway 10.244.19.65 acquired from 10.244.19.65 May 27 04:23:52.155004 ignition[766]: Ignition 2.21.0 May 27 04:23:52.155024 ignition[766]: Stage: fetch-offline May 27 04:23:52.155100 ignition[766]: no configs at "/usr/lib/ignition/base.d" May 27 04:23:52.155117 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 27 04:23:52.155272 ignition[766]: parsed url from cmdline: "" May 27 04:23:52.159251 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 27 04:23:52.155279 ignition[766]: no config URL provided May 27 04:23:52.155289 ignition[766]: reading system config file "/usr/lib/ignition/user.ign" May 27 04:23:52.162184 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... May 27 04:23:52.155304 ignition[766]: no config at "/usr/lib/ignition/user.ign" May 27 04:23:52.155313 ignition[766]: failed to fetch config: resource requires networking May 27 04:23:52.155535 ignition[766]: Ignition finished successfully May 27 04:23:52.197925 ignition[860]: Ignition 2.21.0 May 27 04:23:52.197954 ignition[860]: Stage: fetch May 27 04:23:52.198798 ignition[860]: no configs at "/usr/lib/ignition/base.d" May 27 04:23:52.198819 ignition[860]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 27 04:23:52.199230 ignition[860]: parsed url from cmdline: "" May 27 04:23:52.199241 ignition[860]: no config URL provided May 27 04:23:52.199252 ignition[860]: reading system config file "/usr/lib/ignition/user.ign" May 27 04:23:52.199281 ignition[860]: no config at "/usr/lib/ignition/user.ign" May 27 04:23:52.199732 ignition[860]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 May 27 04:23:52.200855 ignition[860]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... May 27 04:23:52.200893 ignition[860]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
May 27 04:23:52.218234 ignition[860]: GET result: OK May 27 04:23:52.218617 ignition[860]: parsing config with SHA512: 5242690a519a9297b2209c27d78ad34f606992a537d6175ce4decc06e59d4c1c947245d39f2fbbbd3d1d9f47e3d1ddae5ba803d0eb75af66992fc94f65d09c95 May 27 04:23:52.225144 unknown[860]: fetched base config from "system" May 27 04:23:52.225162 unknown[860]: fetched base config from "system" May 27 04:23:52.225835 ignition[860]: fetch: fetch complete May 27 04:23:52.225199 unknown[860]: fetched user config from "openstack" May 27 04:23:52.225844 ignition[860]: fetch: fetch passed May 27 04:23:52.228465 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 27 04:23:52.225909 ignition[860]: Ignition finished successfully May 27 04:23:52.231872 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 27 04:23:52.267518 ignition[866]: Ignition 2.21.0 May 27 04:23:52.267545 ignition[866]: Stage: kargs May 27 04:23:52.267788 ignition[866]: no configs at "/usr/lib/ignition/base.d" May 27 04:23:52.267807 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 27 04:23:52.270185 ignition[866]: kargs: kargs passed May 27 04:23:52.272404 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 27 04:23:52.270281 ignition[866]: Ignition finished successfully May 27 04:23:52.275432 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 27 04:23:52.311025 ignition[873]: Ignition 2.21.0 May 27 04:23:52.311048 ignition[873]: Stage: disks May 27 04:23:52.311486 ignition[873]: no configs at "/usr/lib/ignition/base.d" May 27 04:23:52.311507 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 27 04:23:52.313659 ignition[873]: disks: disks passed May 27 04:23:52.315783 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 27 04:23:52.313740 ignition[873]: Ignition finished successfully May 27 04:23:52.317193 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 27 04:23:52.318331 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 27 04:23:52.319796 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 04:23:52.321373 systemd[1]: Reached target sysinit.target - System Initialization. May 27 04:23:52.323063 systemd[1]: Reached target basic.target - Basic System. May 27 04:23:52.326139 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 27 04:23:52.374322 systemd-fsck[882]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks May 27 04:23:52.378282 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 27 04:23:52.381028 systemd[1]: Mounting sysroot.mount - /sysroot... May 27 04:23:52.510146 kernel: EXT4-fs (vda9): mounted filesystem 18301365-b380-45d7-9677-e42472a122bc r/w with ordered data mode. Quota mode: none. May 27 04:23:52.511299 systemd[1]: Mounted sysroot.mount - /sysroot. May 27 04:23:52.512592 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 27 04:23:52.515960 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 04:23:52.520068 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 27 04:23:52.522230 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 27 04:23:52.526031 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... 
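The fetch stage above retrieves the instance user data from the OpenStack metadata endpoint and logs a SHA512 of the config it parsed. A minimal Python sketch of that fetch-and-digest step is shown below; the URL is the one in the log, but the retry count, delay, and timeout are arbitrary placeholders, not Ignition's actual behaviour.

    # fetch_user_data.py - fetch OpenStack user data and print its SHA512,
    # mirroring the "GET ... attempt #N" / "parsing config with SHA512" lines above.
    import hashlib
    import time
    import urllib.error
    import urllib.request

    URL = "http://169.254.169.254/openstack/latest/user_data"

    def fetch_user_data(retries=5, delay=2.0, timeout=10):
        for attempt in range(1, retries + 1):
            try:
                with urllib.request.urlopen(URL, timeout=timeout) as resp:
                    return resp.read()
            except (urllib.error.URLError, OSError) as err:
                print(f"GET {URL}: attempt #{attempt} failed: {err}")
                time.sleep(delay)
        raise RuntimeError("metadata service unreachable")

    if __name__ == "__main__":
        data = fetch_user_data()
        print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())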
May 27 04:23:52.528527 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 27 04:23:52.529662 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 27 04:23:52.535275 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 27 04:23:52.541138 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 27 04:23:52.546103 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (890) May 27 04:23:52.550401 kernel: BTRFS info (device vda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 04:23:52.550448 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 04:23:52.551681 kernel: BTRFS info (device vda6): using free-space-tree May 27 04:23:52.558262 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 27 04:23:52.638025 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 27 04:23:52.640875 initrd-setup-root[918]: cut: /sysroot/etc/passwd: No such file or directory May 27 04:23:52.649204 initrd-setup-root[925]: cut: /sysroot/etc/group: No such file or directory May 27 04:23:52.657965 initrd-setup-root[932]: cut: /sysroot/etc/shadow: No such file or directory May 27 04:23:52.665764 initrd-setup-root[939]: cut: /sysroot/etc/gshadow: No such file or directory May 27 04:23:52.775883 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 27 04:23:52.778713 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 27 04:23:52.780399 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 27 04:23:52.811061 kernel: BTRFS info (device vda6): last unmount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 04:23:52.825543 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 27 04:23:52.841939 ignition[1011]: INFO : Ignition 2.21.0 May 27 04:23:52.841939 ignition[1011]: INFO : Stage: mount May 27 04:23:52.843771 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 04:23:52.843771 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 27 04:23:52.843771 ignition[1011]: INFO : mount: mount passed May 27 04:23:52.843771 ignition[1011]: INFO : Ignition finished successfully May 27 04:23:52.845930 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 27 04:23:52.856227 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 27 04:23:53.460263 systemd-networkd[850]: eth0: Gained IPv6LL May 27 04:23:53.676032 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 27 04:23:54.969595 systemd-networkd[850]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:4d0:24:19ff:fef4:1342/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:4d0:24:19ff:fef4:1342/64 assigned by NDisc. May 27 04:23:54.969619 systemd-networkd[850]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
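The repeated "/dev/disk/by-label/config-2: Can't lookup blockdev" messages are the hostname agent probing for an OpenStack config drive; a few seconds later the coreos-metadata lines show it giving up and falling back to the metadata API to fetch the hostname. A rough Python sketch of that probe-then-fallback pattern follows, using the device label and URL that appear in the log; the config-drive branch itself is deliberately left unimplemented here.

    # hostname_from_metadata.py - look for a config drive first, then fall back
    # to the metadata endpoint, as the coreos-metadata lines below do.
    import os
    import urllib.request

    CONFIG_DRIVE = "/dev/disk/by-label/config-2"
    HOSTNAME_URL = "http://169.254.169.254/latest/meta-data/hostname"

    def get_hostname():
        if os.path.exists(CONFIG_DRIVE):
            # A real agent would mount the drive and read its metadata JSON;
            # that branch is elided in this sketch.
            raise NotImplementedError("config drive handling not sketched")
        with urllib.request.urlopen(HOSTNAME_URL, timeout=10) as resp:
            return resp.read().decode().strip()

    if __name__ == "__main__":
        hostname = get_hostname()
        print("wrote hostname", hostname)   # the agent writes it to /sysroot/etc/hostname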
May 27 04:23:55.687012 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 27 04:23:59.695042 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 27 04:23:59.701531 coreos-metadata[892]: May 27 04:23:59.701 WARN failed to locate config-drive, using the metadata service API instead May 27 04:23:59.724871 coreos-metadata[892]: May 27 04:23:59.724 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 27 04:23:59.746139 coreos-metadata[892]: May 27 04:23:59.746 INFO Fetch successful May 27 04:23:59.747142 coreos-metadata[892]: May 27 04:23:59.747 INFO wrote hostname srv-g11ua.gb1.brightbox.com to /sysroot/etc/hostname May 27 04:23:59.748939 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. May 27 04:23:59.749134 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. May 27 04:23:59.754175 systemd[1]: Starting ignition-files.service - Ignition (files)... May 27 04:23:59.792884 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 04:23:59.823013 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (1027) May 27 04:23:59.827454 kernel: BTRFS info (device vda6): first mount of filesystem fd7bb961-7a0f-4c90-a609-3bffeb956d05 May 27 04:23:59.827513 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 27 04:23:59.829422 kernel: BTRFS info (device vda6): using free-space-tree May 27 04:23:59.837575 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 27 04:23:59.871451 ignition[1045]: INFO : Ignition 2.21.0 May 27 04:23:59.871451 ignition[1045]: INFO : Stage: files May 27 04:23:59.874037 ignition[1045]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 04:23:59.874037 ignition[1045]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 27 04:23:59.874037 ignition[1045]: DEBUG : files: compiled without relabeling support, skipping May 27 04:23:59.877682 ignition[1045]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 27 04:23:59.877682 ignition[1045]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 27 04:23:59.885700 ignition[1045]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 27 04:23:59.885700 ignition[1045]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 27 04:23:59.887732 ignition[1045]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 27 04:23:59.886636 unknown[1045]: wrote ssh authorized keys file for user: core May 27 04:23:59.889855 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 27 04:23:59.889855 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 May 27 04:24:00.245928 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 27 04:24:01.124615 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" May 27 04:24:01.124615 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 27 04:24:01.124615 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 May 27 04:24:01.772421 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 27 04:24:02.534732 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 27 04:24:02.534732 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 27 04:24:02.534732 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 27 04:24:02.534732 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 27 04:24:02.534732 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 27 04:24:02.534732 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 04:24:02.534732 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 04:24:02.534732 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 04:24:02.544435 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 04:24:02.544435 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 27 04:24:02.544435 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 27 04:24:02.544435 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 27 04:24:02.544435 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 27 04:24:02.544435 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 27 04:24:02.544435 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 May 27 04:24:03.141274 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 27 04:24:05.389100 ignition[1045]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" May 27 04:24:05.389100 ignition[1045]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 27 04:24:05.393258 ignition[1045]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 04:24:05.394596 ignition[1045]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 04:24:05.394596 ignition[1045]: INFO : files: op(c): [finished] processing 
unit "prepare-helm.service" May 27 04:24:05.394596 ignition[1045]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" May 27 04:24:05.394596 ignition[1045]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" May 27 04:24:05.401875 ignition[1045]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" May 27 04:24:05.401875 ignition[1045]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" May 27 04:24:05.401875 ignition[1045]: INFO : files: files passed May 27 04:24:05.401875 ignition[1045]: INFO : Ignition finished successfully May 27 04:24:05.398060 systemd[1]: Finished ignition-files.service - Ignition (files). May 27 04:24:05.406111 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 27 04:24:05.412228 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 27 04:24:05.430023 systemd[1]: ignition-quench.service: Deactivated successfully. May 27 04:24:05.430578 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 27 04:24:05.444863 initrd-setup-root-after-ignition[1075]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 04:24:05.444863 initrd-setup-root-after-ignition[1075]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 27 04:24:05.448643 initrd-setup-root-after-ignition[1079]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 04:24:05.448957 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 04:24:05.451367 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 27 04:24:05.453885 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 27 04:24:05.529782 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 27 04:24:05.530031 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 27 04:24:05.531869 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 27 04:24:05.533209 systemd[1]: Reached target initrd.target - Initrd Default Target. May 27 04:24:05.534855 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 27 04:24:05.536756 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 27 04:24:05.575293 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 04:24:05.578440 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 27 04:24:05.608881 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 27 04:24:05.610152 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 04:24:05.611817 systemd[1]: Stopped target timers.target - Timer Units. May 27 04:24:05.613373 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 27 04:24:05.613656 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 04:24:05.615240 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 27 04:24:05.616183 systemd[1]: Stopped target basic.target - Basic System. May 27 04:24:05.617797 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
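Setting a preset to enabled, as op(e) does for prepare-helm.service, means the unit ends up linked into its WantedBy= target when presets are applied, the same on-disk result as systemctl enable: a symlink in the target's .wants directory pointing at the unit file. The sketch below shows only that end state; the sysroot path and the unit body are placeholders, not what Ignition actually wrote on this machine.

    # enable_unit_sketch.py - illustrate what "enabled" means on disk: the unit
    # file plus a .wants symlink under the target that should pull it in.
    import os

    SYSROOT = "/tmp/sysroot-demo"            # illustrative; the real root is /sysroot
    UNIT = "prepare-helm.service"
    UNIT_BODY = """[Unit]
    Description=Placeholder unit body

    [Service]
    Type=oneshot
    ExecStart=/usr/bin/true

    [Install]
    WantedBy=multi-user.target
    """

    unit_dir = os.path.join(SYSROOT, "etc/systemd/system")
    wants_dir = os.path.join(unit_dir, "multi-user.target.wants")
    os.makedirs(wants_dir, exist_ok=True)

    with open(os.path.join(unit_dir, UNIT), "w") as f:
        f.write(UNIT_BODY)

    # The symlink is what systemd's preset/enable logic creates for WantedBy= units.
    link = os.path.join(wants_dir, UNIT)
    if not os.path.islink(link):
        os.symlink(os.path.join("..", UNIT), link)
    print("enabled", UNIT, "under", wants_dir)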
May 27 04:24:05.619327 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 27 04:24:05.620778 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 27 04:24:05.622246 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 27 04:24:05.624047 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 27 04:24:05.625620 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 27 04:24:05.627461 systemd[1]: Stopped target sysinit.target - System Initialization. May 27 04:24:05.628919 systemd[1]: Stopped target local-fs.target - Local File Systems. May 27 04:24:05.630741 systemd[1]: Stopped target swap.target - Swaps. May 27 04:24:05.632101 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 27 04:24:05.632395 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 27 04:24:05.634007 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 27 04:24:05.634871 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 04:24:05.636505 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 27 04:24:05.636706 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 04:24:05.638074 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 27 04:24:05.638259 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 27 04:24:05.646241 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 27 04:24:05.646558 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 04:24:05.648135 systemd[1]: ignition-files.service: Deactivated successfully. May 27 04:24:05.648292 systemd[1]: Stopped ignition-files.service - Ignition (files). May 27 04:24:05.652023 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 27 04:24:05.656066 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 27 04:24:05.658014 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 27 04:24:05.658214 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 27 04:24:05.660660 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 27 04:24:05.661127 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 27 04:24:05.670732 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 27 04:24:05.670894 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 27 04:24:05.693845 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 27 04:24:05.699470 systemd[1]: sysroot-boot.service: Deactivated successfully. May 27 04:24:05.700514 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 27 04:24:05.705106 ignition[1099]: INFO : Ignition 2.21.0 May 27 04:24:05.705106 ignition[1099]: INFO : Stage: umount May 27 04:24:05.708196 ignition[1099]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 04:24:05.708196 ignition[1099]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" May 27 04:24:05.708196 ignition[1099]: INFO : umount: umount passed May 27 04:24:05.708196 ignition[1099]: INFO : Ignition finished successfully May 27 04:24:05.708767 systemd[1]: ignition-mount.service: Deactivated successfully. May 27 04:24:05.708963 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
May 27 04:24:05.710677 systemd[1]: ignition-disks.service: Deactivated successfully. May 27 04:24:05.710940 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 27 04:24:05.712561 systemd[1]: ignition-kargs.service: Deactivated successfully. May 27 04:24:05.712652 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 27 04:24:05.714137 systemd[1]: ignition-fetch.service: Deactivated successfully. May 27 04:24:05.714243 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 27 04:24:05.715607 systemd[1]: Stopped target network.target - Network. May 27 04:24:05.717476 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 27 04:24:05.717593 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 27 04:24:05.718452 systemd[1]: Stopped target paths.target - Path Units. May 27 04:24:05.720082 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 27 04:24:05.724284 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 04:24:05.725365 systemd[1]: Stopped target slices.target - Slice Units. May 27 04:24:05.726633 systemd[1]: Stopped target sockets.target - Socket Units. May 27 04:24:05.733286 systemd[1]: iscsid.socket: Deactivated successfully. May 27 04:24:05.733428 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 27 04:24:05.735744 systemd[1]: iscsiuio.socket: Deactivated successfully. May 27 04:24:05.735830 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 04:24:05.736560 systemd[1]: ignition-setup.service: Deactivated successfully. May 27 04:24:05.736682 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 27 04:24:05.737442 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 27 04:24:05.737510 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 27 04:24:05.738237 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 27 04:24:05.738311 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 27 04:24:05.739305 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 27 04:24:05.741867 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 27 04:24:05.744331 systemd-networkd[850]: eth0: DHCPv6 lease lost May 27 04:24:05.751810 systemd[1]: systemd-resolved.service: Deactivated successfully. May 27 04:24:05.752103 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 27 04:24:05.763130 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 27 04:24:05.764471 systemd[1]: systemd-networkd.service: Deactivated successfully. May 27 04:24:05.765950 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 27 04:24:05.771577 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 27 04:24:05.772997 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 27 04:24:05.774330 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 27 04:24:05.774463 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 27 04:24:05.779146 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 27 04:24:05.779965 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 27 04:24:05.780110 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
May 27 04:24:05.783212 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 27 04:24:05.783314 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 27 04:24:05.793493 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 27 04:24:05.793645 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 27 04:24:05.796577 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 27 04:24:05.796681 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 04:24:05.800260 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 04:24:05.809647 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 27 04:24:05.809817 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 27 04:24:05.813488 systemd[1]: systemd-udevd.service: Deactivated successfully. May 27 04:24:05.814397 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 04:24:05.818052 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 27 04:24:05.818886 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 27 04:24:05.820864 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 27 04:24:05.820919 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 27 04:24:05.821840 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 27 04:24:05.821918 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 27 04:24:05.825228 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 27 04:24:05.825303 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 27 04:24:05.826219 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 27 04:24:05.826296 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 04:24:05.829638 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 27 04:24:05.831321 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 27 04:24:05.831407 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 27 04:24:05.835210 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 27 04:24:05.835293 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 04:24:05.845198 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 27 04:24:05.845295 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 04:24:05.847118 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 27 04:24:05.847186 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 27 04:24:05.848057 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 04:24:05.848122 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 04:24:05.852052 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. May 27 04:24:05.852139 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. 
May 27 04:24:05.852206 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 27 04:24:05.852274 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 27 04:24:05.853014 systemd[1]: network-cleanup.service: Deactivated successfully. May 27 04:24:05.853183 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 27 04:24:05.854713 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 27 04:24:05.854852 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 27 04:24:05.856770 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 27 04:24:05.859041 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 27 04:24:05.884243 systemd[1]: Switching root. May 27 04:24:05.920402 systemd-journald[230]: Journal stopped May 27 04:24:07.736863 systemd-journald[230]: Received SIGTERM from PID 1 (systemd). May 27 04:24:07.736965 kernel: SELinux: policy capability network_peer_controls=1 May 27 04:24:07.737025 kernel: SELinux: policy capability open_perms=1 May 27 04:24:07.737047 kernel: SELinux: policy capability extended_socket_class=1 May 27 04:24:07.737079 kernel: SELinux: policy capability always_check_network=0 May 27 04:24:07.737109 kernel: SELinux: policy capability cgroup_seclabel=1 May 27 04:24:07.737134 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 27 04:24:07.737152 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 27 04:24:07.737182 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 27 04:24:07.737203 kernel: SELinux: policy capability userspace_initial_context=0 May 27 04:24:07.737222 kernel: audit: type=1403 audit(1748319846.282:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 27 04:24:07.737249 systemd[1]: Successfully loaded SELinux policy in 52.914ms. May 27 04:24:07.737278 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 23.787ms. May 27 04:24:07.737325 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 04:24:07.737350 systemd[1]: Detected virtualization kvm. May 27 04:24:07.737371 systemd[1]: Detected architecture x86-64. May 27 04:24:07.737398 systemd[1]: Detected first boot. May 27 04:24:07.737419 systemd[1]: Hostname set to . May 27 04:24:07.737439 systemd[1]: Initializing machine ID from VM UUID. May 27 04:24:07.737459 zram_generator::config[1142]: No configuration found. May 27 04:24:07.737479 kernel: Guest personality initialized and is inactive May 27 04:24:07.737510 kernel: VMCI host device registered (name=vmci, major=10, minor=125) May 27 04:24:07.737530 kernel: Initialized host personality May 27 04:24:07.737549 kernel: NET: Registered PF_VSOCK protocol family May 27 04:24:07.737569 systemd[1]: Populated /etc with preset unit settings. May 27 04:24:07.737590 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 27 04:24:07.737610 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 27 04:24:07.737630 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 27 04:24:07.737651 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
May 27 04:24:07.737683 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 27 04:24:07.737707 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 27 04:24:07.737728 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 27 04:24:07.737767 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 27 04:24:07.737800 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 27 04:24:07.737822 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 27 04:24:07.737855 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 27 04:24:07.737876 systemd[1]: Created slice user.slice - User and Session Slice. May 27 04:24:07.737896 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 04:24:07.737927 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 04:24:07.737947 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 27 04:24:07.737993 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 27 04:24:07.738031 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 27 04:24:07.738054 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 04:24:07.738074 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 27 04:24:07.738095 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 04:24:07.738121 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 04:24:07.738142 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 27 04:24:07.738169 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 27 04:24:07.738190 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 27 04:24:07.738210 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 27 04:24:07.738242 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 04:24:07.738264 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 04:24:07.738285 systemd[1]: Reached target slices.target - Slice Units. May 27 04:24:07.738305 systemd[1]: Reached target swap.target - Swaps. May 27 04:24:07.738336 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 27 04:24:07.738381 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 27 04:24:07.738403 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 27 04:24:07.738423 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 04:24:07.738450 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 04:24:07.738483 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 04:24:07.738505 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 27 04:24:07.738525 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 27 04:24:07.738552 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
May 27 04:24:07.738573 systemd[1]: Mounting media.mount - External Media Directory... May 27 04:24:07.738594 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 04:24:07.738621 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 27 04:24:07.738641 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 27 04:24:07.738661 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 27 04:24:07.738694 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 27 04:24:07.738717 systemd[1]: Reached target machines.target - Containers. May 27 04:24:07.738738 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 27 04:24:07.738759 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 04:24:07.738778 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 04:24:07.738799 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 27 04:24:07.738820 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 04:24:07.738840 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 04:24:07.738871 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 04:24:07.738893 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 27 04:24:07.738913 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 04:24:07.738933 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 27 04:24:07.738954 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 27 04:24:07.739023 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 27 04:24:07.739049 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 27 04:24:07.739069 systemd[1]: Stopped systemd-fsck-usr.service. May 27 04:24:07.739090 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 04:24:07.739125 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 04:24:07.739158 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 04:24:07.739181 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 04:24:07.739202 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 27 04:24:07.739221 kernel: fuse: init (API version 7.41) May 27 04:24:07.739251 kernel: loop: module loaded May 27 04:24:07.739283 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 27 04:24:07.739307 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 04:24:07.739341 systemd[1]: verity-setup.service: Deactivated successfully. May 27 04:24:07.739596 systemd[1]: Stopped verity-setup.service. 
May 27 04:24:07.739626 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 04:24:07.739648 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 27 04:24:07.739668 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 27 04:24:07.739688 systemd[1]: Mounted media.mount - External Media Directory. May 27 04:24:07.739709 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 27 04:24:07.739729 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 27 04:24:07.739749 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 27 04:24:07.739770 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 04:24:07.739866 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 27 04:24:07.739891 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 27 04:24:07.739921 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 04:24:07.739945 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 04:24:07.739965 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 04:24:07.740009 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 04:24:07.740031 kernel: ACPI: bus type drm_connector registered May 27 04:24:07.740050 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 27 04:24:07.740071 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 27 04:24:07.740107 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 04:24:07.740131 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 04:24:07.740151 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 27 04:24:07.740221 systemd-journald[1235]: Collecting audit messages is disabled. May 27 04:24:07.740261 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 04:24:07.740282 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 04:24:07.740328 systemd-journald[1235]: Journal started May 27 04:24:07.740376 systemd-journald[1235]: Runtime Journal (/run/log/journal/00923123c62044c8a85fe93ff41448b4) is 4.7M, max 38.2M, 33.4M free. May 27 04:24:07.289995 systemd[1]: Queued start job for default target multi-user.target. May 27 04:24:07.306077 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 27 04:24:07.743069 systemd[1]: Started systemd-journald.service - Journal Service. May 27 04:24:07.309092 systemd[1]: systemd-journald.service: Deactivated successfully. May 27 04:24:07.747051 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 27 04:24:07.748615 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 27 04:24:07.763009 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 04:24:07.772375 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 04:24:07.779161 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 27 04:24:07.792220 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
May 27 04:24:07.794102 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 27 04:24:07.794166 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 04:24:07.797343 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 27 04:24:07.807149 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 27 04:24:07.808109 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 04:24:07.812171 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 27 04:24:07.818023 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 27 04:24:07.818875 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 04:24:07.822869 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 27 04:24:07.824105 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 04:24:07.831397 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 04:24:07.835134 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 27 04:24:07.842341 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 27 04:24:07.849545 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 27 04:24:07.853226 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 27 04:24:07.854200 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 27 04:24:07.868361 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 27 04:24:07.869944 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 27 04:24:07.877193 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 27 04:24:07.878546 systemd-journald[1235]: Time spent on flushing to /var/log/journal/00923123c62044c8a85fe93ff41448b4 is 112.333ms for 1171 entries. May 27 04:24:07.878546 systemd-journald[1235]: System Journal (/var/log/journal/00923123c62044c8a85fe93ff41448b4) is 8M, max 584.8M, 576.8M free. May 27 04:24:08.055032 systemd-journald[1235]: Received client request to flush runtime journal. May 27 04:24:08.055144 kernel: loop0: detected capacity change from 0 to 8 May 27 04:24:08.055190 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 27 04:24:08.055221 kernel: loop1: detected capacity change from 0 to 146240 May 27 04:24:08.055255 kernel: loop2: detected capacity change from 0 to 113872 May 27 04:24:07.945868 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 04:24:07.951626 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 27 04:24:07.982921 systemd-tmpfiles[1280]: ACLs are not supported, ignoring. May 27 04:24:07.982944 systemd-tmpfiles[1280]: ACLs are not supported, ignoring. May 27 04:24:07.996461 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
May 27 04:24:08.003455 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 27 04:24:08.060200 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 27 04:24:08.115370 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 04:24:08.116759 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 27 04:24:08.123202 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 04:24:08.135208 kernel: loop3: detected capacity change from 0 to 224512 May 27 04:24:08.188401 systemd-tmpfiles[1302]: ACLs are not supported, ignoring. May 27 04:24:08.188430 systemd-tmpfiles[1302]: ACLs are not supported, ignoring. May 27 04:24:08.195946 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 04:24:08.200163 kernel: loop4: detected capacity change from 0 to 8 May 27 04:24:08.204999 kernel: loop5: detected capacity change from 0 to 146240 May 27 04:24:08.235172 kernel: loop6: detected capacity change from 0 to 113872 May 27 04:24:08.255500 kernel: loop7: detected capacity change from 0 to 224512 May 27 04:24:08.286527 (sd-merge)[1305]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. May 27 04:24:08.287428 (sd-merge)[1305]: Merged extensions into '/usr'. May 27 04:24:08.297680 systemd[1]: Reload requested from client PID 1279 ('systemd-sysext') (unit systemd-sysext.service)... May 27 04:24:08.297727 systemd[1]: Reloading... May 27 04:24:08.415014 zram_generator::config[1331]: No configuration found. May 27 04:24:08.644450 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 04:24:08.831121 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 27 04:24:08.831341 systemd[1]: Reloading finished in 532 ms. May 27 04:24:08.856483 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 27 04:24:08.884324 systemd[1]: Starting ensure-sysext.service... May 27 04:24:08.889326 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 04:24:08.932637 systemd[1]: Reload requested from client PID 1387 ('systemctl') (unit ensure-sysext.service)... May 27 04:24:08.932664 systemd[1]: Reloading... May 27 04:24:08.942047 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 27 04:24:08.942660 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 27 04:24:08.943214 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 27 04:24:08.943780 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 27 04:24:08.945888 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 27 04:24:08.946255 ldconfig[1274]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 27 04:24:08.946792 systemd-tmpfiles[1388]: ACLs are not supported, ignoring. May 27 04:24:08.947055 systemd-tmpfiles[1388]: ACLs are not supported, ignoring. 
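The (sd-merge) lines record systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes, and oem-openstack extension images onto /usr, followed by a reload. Before merging, sysext checks each image's extension-release metadata against the host's os-release. The Python sketch below shows a simplified version of that compatibility check; the mount path is illustrative, and the real check also compares SYSEXT_LEVEL or VERSION_ID, which is elided here.

    # sysext_check.py - sketch of the ID match systemd-sysext performs before
    # merging an extension image's /usr into the host's /usr.
    import os

    def parse_release(path):
        """Parse an os-release style file into a dict (values may be quoted)."""
        fields = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    fields[key] = value.strip('"')
        return fields

    def extension_compatible(host_release, ext_root, name):
        ext = parse_release(os.path.join(
            ext_root, "usr/lib/extension-release.d", f"extension-release.{name}"))
        if ext.get("ID") == "_any":
            return True
        # Simplified: real sysext also matches SYSEXT_LEVEL= or VERSION_ID=.
        return ext.get("ID") == host_release.get("ID")

    if __name__ == "__main__":
        host = parse_release("/etc/os-release")
        # 'ext_root' would be a mounted extension image, e.g. a loopback mount
        # of /etc/extensions/kubernetes.raw (path below is illustrative).
        print(extension_compatible(host, "/run/sysext-demo/kubernetes", "kubernetes"))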
May 27 04:24:08.953792 systemd-tmpfiles[1388]: Detected autofs mount point /boot during canonicalization of boot. May 27 04:24:08.953928 systemd-tmpfiles[1388]: Skipping /boot May 27 04:24:08.983745 systemd-tmpfiles[1388]: Detected autofs mount point /boot during canonicalization of boot. May 27 04:24:08.983941 systemd-tmpfiles[1388]: Skipping /boot May 27 04:24:09.051020 zram_generator::config[1419]: No configuration found. May 27 04:24:09.213560 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 04:24:09.333824 systemd[1]: Reloading finished in 400 ms. May 27 04:24:09.347161 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 27 04:24:09.348515 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 27 04:24:09.369145 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 04:24:09.380676 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 04:24:09.385360 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 27 04:24:09.392715 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 27 04:24:09.401450 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 04:24:09.406907 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 04:24:09.410995 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 27 04:24:09.417696 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 04:24:09.419518 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 04:24:09.429386 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 04:24:09.433000 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 04:24:09.438341 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 04:24:09.439778 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 04:24:09.439959 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 04:24:09.440131 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 04:24:09.446909 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 04:24:09.447212 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 04:24:09.447475 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 27 04:24:09.447607 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 04:24:09.452850 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 27 04:24:09.453668 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 04:24:09.469408 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 04:24:09.469809 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 04:24:09.479404 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 04:24:09.480846 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 04:24:09.481035 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 04:24:09.481240 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 27 04:24:09.483510 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 27 04:24:09.496107 systemd[1]: Finished ensure-sysext.service. May 27 04:24:09.511271 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 27 04:24:09.515602 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 04:24:09.515903 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 04:24:09.518861 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 04:24:09.519438 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 04:24:09.525765 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 04:24:09.540541 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 04:24:09.541095 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 04:24:09.550044 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 04:24:09.551068 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 04:24:09.554594 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 04:24:09.571337 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 27 04:24:09.581209 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 27 04:24:09.583914 systemd-udevd[1480]: Using default interface naming scheme 'v255'. May 27 04:24:09.594690 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 27 04:24:09.596898 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
May 27 04:24:09.608591 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 27 04:24:09.647150 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 27 04:24:09.648530 augenrules[1520]: No rules May 27 04:24:09.649836 systemd[1]: audit-rules.service: Deactivated successfully. May 27 04:24:09.650885 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 04:24:09.663324 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 04:24:09.669207 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 04:24:09.798675 systemd-networkd[1530]: lo: Link UP May 27 04:24:09.798691 systemd-networkd[1530]: lo: Gained carrier May 27 04:24:09.799871 systemd-networkd[1530]: Enumeration completed May 27 04:24:09.800040 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 04:24:09.804141 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 27 04:24:09.809190 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 27 04:24:09.834033 systemd-resolved[1478]: Positive Trust Anchors: May 27 04:24:09.834532 systemd-resolved[1478]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 04:24:09.834686 systemd-resolved[1478]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 04:24:09.836099 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 27 04:24:09.838251 systemd[1]: Reached target time-set.target - System Time Set. May 27 04:24:09.846222 systemd-resolved[1478]: Using system hostname 'srv-g11ua.gb1.brightbox.com'. May 27 04:24:09.849554 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 04:24:09.851404 systemd[1]: Reached target network.target - Network. May 27 04:24:09.852123 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 27 04:24:09.854071 systemd[1]: Reached target sysinit.target - System Initialization. May 27 04:24:09.854910 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 27 04:24:09.856196 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 27 04:24:09.858022 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. May 27 04:24:09.859006 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 27 04:24:09.860455 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 27 04:24:09.862139 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 27 04:24:09.864076 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 27 04:24:09.864141 systemd[1]: Reached target paths.target - Path Units. 
May 27 04:24:09.864788 systemd[1]: Reached target timers.target - Timer Units. May 27 04:24:09.867840 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 27 04:24:09.872332 systemd[1]: Starting docker.socket - Docker Socket for the API... May 27 04:24:09.881438 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 27 04:24:09.882931 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 27 04:24:09.884407 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 27 04:24:09.897091 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 27 04:24:09.899665 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 27 04:24:09.904060 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 27 04:24:09.906342 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 27 04:24:09.914577 systemd[1]: Reached target sockets.target - Socket Units. May 27 04:24:09.915345 systemd[1]: Reached target basic.target - Basic System. May 27 04:24:09.917165 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 27 04:24:09.917221 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 27 04:24:09.919633 systemd[1]: Starting containerd.service - containerd container runtime... May 27 04:24:09.922541 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 27 04:24:09.926385 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 27 04:24:09.933259 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 27 04:24:09.938233 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 27 04:24:09.953999 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 27 04:24:09.958301 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 27 04:24:09.959113 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 27 04:24:09.964862 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... May 27 04:24:09.970241 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 27 04:24:09.977247 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 27 04:24:09.986254 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 27 04:24:09.994265 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 27 04:24:10.005842 systemd[1]: Starting systemd-logind.service - User Login Management... May 27 04:24:10.009845 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 27 04:24:10.010577 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 27 04:24:10.013312 systemd[1]: Starting update-engine.service - Update Engine... 
May 27 04:24:10.021062 jq[1563]: false May 27 04:24:10.024014 extend-filesystems[1564]: Found loop4 May 27 04:24:10.024014 extend-filesystems[1564]: Found loop5 May 27 04:24:10.024014 extend-filesystems[1564]: Found loop6 May 27 04:24:10.024014 extend-filesystems[1564]: Found loop7 May 27 04:24:10.024014 extend-filesystems[1564]: Found vda May 27 04:24:10.024014 extend-filesystems[1564]: Found vda1 May 27 04:24:10.024014 extend-filesystems[1564]: Found vda2 May 27 04:24:10.024014 extend-filesystems[1564]: Found vda3 May 27 04:24:10.024014 extend-filesystems[1564]: Found usr May 27 04:24:10.024014 extend-filesystems[1564]: Found vda4 May 27 04:24:10.024014 extend-filesystems[1564]: Found vda6 May 27 04:24:10.024014 extend-filesystems[1564]: Found vda7 May 27 04:24:10.024014 extend-filesystems[1564]: Found vda9 May 27 04:24:10.027958 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 27 04:24:10.054634 oslogin_cache_refresh[1566]: Refreshing passwd entry cache May 27 04:24:10.125642 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Refreshing passwd entry cache May 27 04:24:10.125642 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Failure getting users, quitting May 27 04:24:10.125642 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 27 04:24:10.125642 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Refreshing group entry cache May 27 04:24:10.125642 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Failure getting groups, quitting May 27 04:24:10.125642 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 27 04:24:10.037441 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 27 04:24:10.098438 oslogin_cache_refresh[1566]: Failure getting users, quitting May 27 04:24:10.045778 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 27 04:24:10.098473 oslogin_cache_refresh[1566]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. May 27 04:24:10.144298 update_engine[1574]: I20250527 04:24:10.079735 1574 main.cc:92] Flatcar Update Engine starting May 27 04:24:10.144298 update_engine[1574]: I20250527 04:24:10.116987 1574 update_check_scheduler.cc:74] Next update check in 6m0s May 27 04:24:10.046135 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 27 04:24:10.098557 oslogin_cache_refresh[1566]: Refreshing group entry cache May 27 04:24:10.046574 systemd[1]: extend-filesystems.service: Deactivated successfully. May 27 04:24:10.109449 oslogin_cache_refresh[1566]: Failure getting groups, quitting May 27 04:24:10.046862 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 27 04:24:10.109466 oslogin_cache_refresh[1566]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. May 27 04:24:10.150074 jq[1575]: true May 27 04:24:10.087260 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 27 04:24:10.113339 dbus-daemon[1561]: [system] SELinux support is enabled May 27 04:24:10.087662 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 27 04:24:10.114112 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
May 27 04:24:10.121594 systemd[1]: google-oslogin-cache.service: Deactivated successfully. May 27 04:24:10.128644 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. May 27 04:24:10.132370 systemd[1]: Started update-engine.service - Update Engine. May 27 04:24:10.134559 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 27 04:24:10.134638 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 27 04:24:10.137091 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 27 04:24:10.137141 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 27 04:24:10.156478 (ntainerd)[1585]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 27 04:24:10.173867 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 27 04:24:10.183315 jq[1587]: true May 27 04:24:10.190795 systemd[1]: motdgen.service: Deactivated successfully. May 27 04:24:10.192864 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 27 04:24:10.205712 tar[1577]: linux-amd64/LICENSE May 27 04:24:10.205712 tar[1577]: linux-amd64/helm May 27 04:24:10.361174 bash[1617]: Updated "/home/core/.ssh/authorized_keys" May 27 04:24:10.373753 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 27 04:24:10.383639 systemd-networkd[1530]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 04:24:10.388633 systemd[1]: Starting sshkeys.service... May 27 04:24:10.390265 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 27 04:24:10.396058 systemd-networkd[1530]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 04:24:10.398231 systemd-networkd[1530]: eth0: Link UP May 27 04:24:10.398598 systemd-networkd[1530]: eth0: Gained carrier May 27 04:24:10.398629 systemd-networkd[1530]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 04:24:10.405218 systemd-logind[1573]: New seat seat0. May 27 04:24:10.410212 systemd[1]: Started systemd-logind.service - User Login Management. May 27 04:24:10.440430 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. May 27 04:24:10.452373 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... May 27 04:24:10.454312 systemd-networkd[1530]: eth0: DHCPv4 address 10.244.19.66/30, gateway 10.244.19.65 acquired from 10.244.19.65 May 27 04:24:10.459701 dbus-daemon[1561]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.4' (uid=244 pid=1530 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") May 27 04:24:10.470582 systemd-timesyncd[1494]: Network configuration changed, trying to establish connection. May 27 04:24:10.475182 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
May 27 04:24:10.544814 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 27 04:24:10.597890 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 27 04:24:10.618294 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 27 04:24:10.629108 locksmithd[1593]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 27 04:24:10.691761 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 27 04:24:11.690552 systemd-timesyncd[1494]: Contacted time server 176.58.115.34:123 (0.flatcar.pool.ntp.org). May 27 04:24:11.690651 systemd-timesyncd[1494]: Initial clock synchronization to Tue 2025-05-27 04:24:11.690335 UTC. May 27 04:24:11.692964 systemd-resolved[1478]: Clock change detected. Flushing caches. May 27 04:24:11.766055 systemd[1]: Started systemd-hostnamed.service - Hostname Service. May 27 04:24:11.767659 dbus-daemon[1561]: [system] Successfully activated service 'org.freedesktop.hostname1' May 27 04:24:11.776729 dbus-daemon[1561]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1632 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") May 27 04:24:11.786125 systemd[1]: Starting polkit.service - Authorization Manager... May 27 04:24:11.787502 kernel: mousedev: PS/2 mouse device common for all mice May 27 04:24:11.865016 containerd[1585]: time="2025-05-27T04:24:11Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 27 04:24:11.878197 containerd[1585]: time="2025-05-27T04:24:11.877944708Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 27 04:24:11.935280 containerd[1585]: time="2025-05-27T04:24:11.934126212Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="18.217µs" May 27 04:24:11.935280 containerd[1585]: time="2025-05-27T04:24:11.934177674Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 27 04:24:11.935280 containerd[1585]: time="2025-05-27T04:24:11.934206491Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 27 04:24:11.935280 containerd[1585]: time="2025-05-27T04:24:11.934547992Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 27 04:24:11.935280 containerd[1585]: time="2025-05-27T04:24:11.934577646Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 27 04:24:11.935280 containerd[1585]: time="2025-05-27T04:24:11.934624269Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 04:24:11.935280 containerd[1585]: time="2025-05-27T04:24:11.934745353Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 04:24:11.935280 containerd[1585]: time="2025-05-27T04:24:11.934767029Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 04:24:11.935280 containerd[1585]: 
time="2025-05-27T04:24:11.935056778Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 04:24:11.935280 containerd[1585]: time="2025-05-27T04:24:11.935084639Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 04:24:11.935280 containerd[1585]: time="2025-05-27T04:24:11.935103874Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 04:24:11.935280 containerd[1585]: time="2025-05-27T04:24:11.935118665Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 27 04:24:11.935862 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 May 27 04:24:11.935923 containerd[1585]: time="2025-05-27T04:24:11.935236707Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 27 04:24:11.937927 containerd[1585]: time="2025-05-27T04:24:11.937894152Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 04:24:11.939704 containerd[1585]: time="2025-05-27T04:24:11.939659950Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 04:24:11.939853 containerd[1585]: time="2025-05-27T04:24:11.939823616Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 27 04:24:11.940044 containerd[1585]: time="2025-05-27T04:24:11.939996413Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 27 04:24:11.941919 containerd[1585]: time="2025-05-27T04:24:11.941887979Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 27 04:24:11.942160 containerd[1585]: time="2025-05-27T04:24:11.942133240Z" level=info msg="metadata content store policy set" policy=shared May 27 04:24:11.942422 kernel: ACPI: button: Power Button [PWRF] May 27 04:24:11.949230 containerd[1585]: time="2025-05-27T04:24:11.949184062Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 27 04:24:11.949814 containerd[1585]: time="2025-05-27T04:24:11.949782935Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 27 04:24:11.950037 containerd[1585]: time="2025-05-27T04:24:11.950010017Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 27 04:24:11.950259 containerd[1585]: time="2025-05-27T04:24:11.950231034Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 27 04:24:11.950574 containerd[1585]: time="2025-05-27T04:24:11.950378317Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 27 04:24:11.950720 containerd[1585]: time="2025-05-27T04:24:11.950675277Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 27 04:24:11.951156 containerd[1585]: 
time="2025-05-27T04:24:11.951125445Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 27 04:24:11.951303 containerd[1585]: time="2025-05-27T04:24:11.951276294Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 27 04:24:11.951449 containerd[1585]: time="2025-05-27T04:24:11.951422343Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 27 04:24:11.951587 containerd[1585]: time="2025-05-27T04:24:11.951562591Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 27 04:24:11.951721 containerd[1585]: time="2025-05-27T04:24:11.951695587Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 27 04:24:11.953420 containerd[1585]: time="2025-05-27T04:24:11.952124762Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 27 04:24:11.953420 containerd[1585]: time="2025-05-27T04:24:11.952328657Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 27 04:24:11.953420 containerd[1585]: time="2025-05-27T04:24:11.952375424Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 27 04:24:11.953420 containerd[1585]: time="2025-05-27T04:24:11.952422365Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 27 04:24:11.953420 containerd[1585]: time="2025-05-27T04:24:11.952451035Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 27 04:24:11.953420 containerd[1585]: time="2025-05-27T04:24:11.952471841Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 27 04:24:11.953420 containerd[1585]: time="2025-05-27T04:24:11.952490188Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 27 04:24:11.953420 containerd[1585]: time="2025-05-27T04:24:11.952507772Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 27 04:24:11.953420 containerd[1585]: time="2025-05-27T04:24:11.952525023Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 27 04:24:11.953420 containerd[1585]: time="2025-05-27T04:24:11.952564193Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 27 04:24:11.953420 containerd[1585]: time="2025-05-27T04:24:11.952586831Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 27 04:24:11.953420 containerd[1585]: time="2025-05-27T04:24:11.952604424Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 27 04:24:11.953420 containerd[1585]: time="2025-05-27T04:24:11.952730393Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 27 04:24:11.953420 containerd[1585]: time="2025-05-27T04:24:11.952756538Z" level=info msg="Start snapshots syncer" May 27 04:24:11.953420 containerd[1585]: time="2025-05-27T04:24:11.952801470Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 27 04:24:11.954023 containerd[1585]: time="2025-05-27T04:24:11.953147506Z" 
level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 27 04:24:11.954023 containerd[1585]: time="2025-05-27T04:24:11.953229863Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 27 04:24:11.958016 containerd[1585]: time="2025-05-27T04:24:11.957478927Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 27 04:24:11.958738 containerd[1585]: time="2025-05-27T04:24:11.958359162Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 27 04:24:11.958738 containerd[1585]: time="2025-05-27T04:24:11.958693685Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 27 04:24:11.958920 containerd[1585]: time="2025-05-27T04:24:11.958891880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 27 04:24:11.959144 containerd[1585]: time="2025-05-27T04:24:11.959034127Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 27 04:24:11.959748 containerd[1585]: time="2025-05-27T04:24:11.959242196Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 27 04:24:11.959748 containerd[1585]: time="2025-05-27T04:24:11.959277802Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 27 04:24:11.959748 containerd[1585]: time="2025-05-27T04:24:11.959675309Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 27 04:24:11.960056 containerd[1585]: time="2025-05-27T04:24:11.960009038Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 27 
04:24:11.961649 containerd[1585]: time="2025-05-27T04:24:11.960159860Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 27 04:24:11.961649 containerd[1585]: time="2025-05-27T04:24:11.961441187Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 27 04:24:11.962146 containerd[1585]: time="2025-05-27T04:24:11.961824309Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 04:24:11.962146 containerd[1585]: time="2025-05-27T04:24:11.961863008Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 04:24:11.962146 containerd[1585]: time="2025-05-27T04:24:11.961903475Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 04:24:11.962146 containerd[1585]: time="2025-05-27T04:24:11.961921300Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 04:24:11.962146 containerd[1585]: time="2025-05-27T04:24:11.961937985Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 27 04:24:11.962146 containerd[1585]: time="2025-05-27T04:24:11.961974672Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 27 04:24:11.962146 containerd[1585]: time="2025-05-27T04:24:11.961995776Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 27 04:24:11.962146 containerd[1585]: time="2025-05-27T04:24:11.962044658Z" level=info msg="runtime interface created" May 27 04:24:11.962146 containerd[1585]: time="2025-05-27T04:24:11.962076512Z" level=info msg="created NRI interface" May 27 04:24:11.962146 containerd[1585]: time="2025-05-27T04:24:11.962103427Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 27 04:24:11.962857 containerd[1585]: time="2025-05-27T04:24:11.962130577Z" level=info msg="Connect containerd service" May 27 04:24:11.962857 containerd[1585]: time="2025-05-27T04:24:11.962693628Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 27 04:24:11.966890 containerd[1585]: time="2025-05-27T04:24:11.966847969Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 04:24:12.125147 polkitd[1642]: Started polkitd version 126 May 27 04:24:12.145350 polkitd[1642]: Loading rules from directory /etc/polkit-1/rules.d May 27 04:24:12.151965 polkitd[1642]: Loading rules from directory /run/polkit-1/rules.d May 27 04:24:12.152061 polkitd[1642]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) May 27 04:24:12.152438 polkitd[1642]: Loading rules from directory /usr/local/share/polkit-1/rules.d May 27 04:24:12.152483 polkitd[1642]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) May 27 04:24:12.152551 polkitd[1642]: Loading rules from directory /usr/share/polkit-1/rules.d May 27 
04:24:12.157457 polkitd[1642]: Finished loading, compiling and executing 2 rules May 27 04:24:12.157911 systemd[1]: Started polkit.service - Authorization Manager. May 27 04:24:12.160713 dbus-daemon[1561]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' May 27 04:24:12.163572 polkitd[1642]: Acquired the name org.freedesktop.PolicyKit1 on the system bus May 27 04:24:12.212193 systemd-hostnamed[1632]: Hostname set to <srv-g11ua.gb1.brightbox.com> (static) May 27 04:24:12.236444 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 27 04:24:12.242641 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 27 04:24:12.263376 containerd[1585]: time="2025-05-27T04:24:12.263270902Z" level=info msg="Start subscribing containerd event" May 27 04:24:12.264161 containerd[1585]: time="2025-05-27T04:24:12.263428580Z" level=info msg="Start recovering state" May 27 04:24:12.264161 containerd[1585]: time="2025-05-27T04:24:12.263701260Z" level=info msg="Start event monitor" May 27 04:24:12.264161 containerd[1585]: time="2025-05-27T04:24:12.263738780Z" level=info msg="Start cni network conf syncer for default" May 27 04:24:12.264161 containerd[1585]: time="2025-05-27T04:24:12.263753995Z" level=info msg="Start streaming server" May 27 04:24:12.264161 containerd[1585]: time="2025-05-27T04:24:12.263775764Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 27 04:24:12.264161 containerd[1585]: time="2025-05-27T04:24:12.263798527Z" level=info msg="runtime interface starting up..." May 27 04:24:12.264161 containerd[1585]: time="2025-05-27T04:24:12.263812122Z" level=info msg="starting plugins..." May 27 04:24:12.264161 containerd[1585]: time="2025-05-27T04:24:12.263844440Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 27 04:24:12.264860 containerd[1585]: time="2025-05-27T04:24:12.264559211Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 27 04:24:12.265412 containerd[1585]: time="2025-05-27T04:24:12.265290677Z" level=info msg=serving... address=/run/containerd/containerd.sock May 27 04:24:12.271484 systemd[1]: Started containerd.service - containerd container runtime. May 27 04:24:12.275120 containerd[1585]: time="2025-05-27T04:24:12.275014474Z" level=info msg="containerd successfully booted in 0.410340s" May 27 04:24:12.402217 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 27 04:24:12.460947 systemd-networkd[1530]: eth0: Gained IPv6LL May 27 04:24:12.465443 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 27 04:24:12.467187 systemd[1]: Reached target network-online.target - Network is Online. May 27 04:24:12.472988 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 04:24:12.477954 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 27 04:24:12.530973 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 04:24:12.588459 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 27 04:24:13.005571 systemd-logind[1573]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 27 04:24:13.036514 systemd-logind[1573]: Watching system buttons on /dev/input/event3 (Power Button) May 27 04:24:13.155343 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 04:24:13.270787 tar[1577]: linux-amd64/README.md May 27 04:24:13.302338 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
May 27 04:24:13.408850 sshd_keygen[1594]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 27 04:24:13.440717 systemd-networkd[1530]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:4d0:24:19ff:fef4:1342/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:4d0:24:19ff:fef4:1342/64 assigned by NDisc. May 27 04:24:13.440732 systemd-networkd[1530]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. May 27 04:24:13.445785 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 27 04:24:13.451896 systemd[1]: Starting issuegen.service - Generate /run/issue... May 27 04:24:13.457864 systemd[1]: Started sshd@0-10.244.19.66:22-139.178.68.195:38560.service - OpenSSH per-connection server daemon (139.178.68.195:38560). May 27 04:24:13.472153 systemd[1]: issuegen.service: Deactivated successfully. May 27 04:24:13.472927 systemd[1]: Finished issuegen.service - Generate /run/issue. May 27 04:24:13.482903 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 27 04:24:13.513464 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 27 04:24:13.519595 systemd[1]: Started getty@tty1.service - Getty on tty1. May 27 04:24:13.522751 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 27 04:24:13.524748 systemd[1]: Reached target getty.target - Login Prompts. May 27 04:24:13.879351 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 04:24:13.897106 (kubelet)[1724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 04:24:14.233359 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 27 04:24:14.233500 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 27 04:24:14.418293 sshd[1709]: Accepted publickey for core from 139.178.68.195 port 38560 ssh2: RSA SHA256:eaUZQaqMkKPp5jWU0A069WbcP/hBT0dWaBlUqWT+u6Q May 27 04:24:14.419578 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:24:14.433074 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 27 04:24:14.437989 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 27 04:24:14.465538 systemd-logind[1573]: New session 1 of user core. May 27 04:24:14.488053 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 27 04:24:14.496624 systemd[1]: Starting user@500.service - User Manager for UID 500... May 27 04:24:14.514757 (systemd)[1734]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 27 04:24:14.519543 kubelet[1724]: E0527 04:24:14.519492 1724 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 04:24:14.520145 systemd-logind[1573]: New session c1 of user core. May 27 04:24:14.523320 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 04:24:14.523781 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 04:24:14.525044 systemd[1]: kubelet.service: Consumed 1.053s CPU time, 264.8M memory peak. 
May 27 04:24:14.721376 systemd[1734]: Queued start job for default target default.target. May 27 04:24:14.732986 systemd[1734]: Created slice app.slice - User Application Slice. May 27 04:24:14.733035 systemd[1734]: Reached target paths.target - Paths. May 27 04:24:14.733117 systemd[1734]: Reached target timers.target - Timers. May 27 04:24:14.735438 systemd[1734]: Starting dbus.socket - D-Bus User Message Bus Socket... May 27 04:24:14.757752 systemd[1734]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 27 04:24:14.757949 systemd[1734]: Reached target sockets.target - Sockets. May 27 04:24:14.758032 systemd[1734]: Reached target basic.target - Basic System. May 27 04:24:14.758110 systemd[1734]: Reached target default.target - Main User Target. May 27 04:24:14.758178 systemd[1734]: Startup finished in 225ms. May 27 04:24:14.758278 systemd[1]: Started user@500.service - User Manager for UID 500. May 27 04:24:14.772184 systemd[1]: Started session-1.scope - Session 1 of User core. May 27 04:24:15.429684 systemd[1]: Started sshd@1-10.244.19.66:22-139.178.68.195:54086.service - OpenSSH per-connection server daemon (139.178.68.195:54086). May 27 04:24:16.262681 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 27 04:24:16.262851 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 27 04:24:16.356680 sshd[1746]: Accepted publickey for core from 139.178.68.195 port 54086 ssh2: RSA SHA256:eaUZQaqMkKPp5jWU0A069WbcP/hBT0dWaBlUqWT+u6Q May 27 04:24:16.360376 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:24:16.371936 systemd-logind[1573]: New session 2 of user core. May 27 04:24:16.380854 systemd[1]: Started session-2.scope - Session 2 of User core. May 27 04:24:16.982569 sshd[1750]: Connection closed by 139.178.68.195 port 54086 May 27 04:24:16.982363 sshd-session[1746]: pam_unix(sshd:session): session closed for user core May 27 04:24:16.990080 systemd[1]: sshd@1-10.244.19.66:22-139.178.68.195:54086.service: Deactivated successfully. May 27 04:24:16.993725 systemd[1]: session-2.scope: Deactivated successfully. May 27 04:24:16.995244 systemd-logind[1573]: Session 2 logged out. Waiting for processes to exit. May 27 04:24:16.998266 systemd-logind[1573]: Removed session 2. May 27 04:24:18.146240 systemd[1]: Started sshd@2-10.244.19.66:22-139.178.68.195:54090.service - OpenSSH per-connection server daemon (139.178.68.195:54090). May 27 04:24:18.619703 login[1716]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 27 04:24:18.627783 login[1717]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 27 04:24:18.628840 systemd-logind[1573]: New session 3 of user core. May 27 04:24:18.638845 systemd[1]: Started session-3.scope - Session 3 of User core. May 27 04:24:18.645170 systemd-logind[1573]: New session 4 of user core. May 27 04:24:18.650976 systemd[1]: Started session-4.scope - Session 4 of User core. May 27 04:24:19.085735 sshd[1756]: Accepted publickey for core from 139.178.68.195 port 54090 ssh2: RSA SHA256:eaUZQaqMkKPp5jWU0A069WbcP/hBT0dWaBlUqWT+u6Q May 27 04:24:19.088147 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:24:19.096868 systemd-logind[1573]: New session 5 of user core. May 27 04:24:19.108836 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 27 04:24:19.710691 sshd[1784]: Connection closed by 139.178.68.195 port 54090 May 27 04:24:19.711687 sshd-session[1756]: pam_unix(sshd:session): session closed for user core May 27 04:24:19.717694 systemd[1]: sshd@2-10.244.19.66:22-139.178.68.195:54090.service: Deactivated successfully. May 27 04:24:19.720243 systemd[1]: session-5.scope: Deactivated successfully. May 27 04:24:19.721980 systemd-logind[1573]: Session 5 logged out. Waiting for processes to exit. May 27 04:24:19.724093 systemd-logind[1573]: Removed session 5. May 27 04:24:20.283499 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 27 04:24:20.287424 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev May 27 04:24:20.297597 coreos-metadata[1627]: May 27 04:24:20.297 WARN failed to locate config-drive, using the metadata service API instead May 27 04:24:20.300168 coreos-metadata[1560]: May 27 04:24:20.300 WARN failed to locate config-drive, using the metadata service API instead May 27 04:24:20.322933 coreos-metadata[1560]: May 27 04:24:20.322 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 May 27 04:24:20.323375 coreos-metadata[1627]: May 27 04:24:20.322 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 May 27 04:24:20.331743 coreos-metadata[1560]: May 27 04:24:20.331 INFO Fetch failed with 404: resource not found May 27 04:24:20.331964 coreos-metadata[1560]: May 27 04:24:20.331 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 May 27 04:24:20.333460 coreos-metadata[1560]: May 27 04:24:20.333 INFO Fetch successful May 27 04:24:20.333647 coreos-metadata[1560]: May 27 04:24:20.333 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 May 27 04:24:20.345759 coreos-metadata[1560]: May 27 04:24:20.345 INFO Fetch successful May 27 04:24:20.345879 coreos-metadata[1560]: May 27 04:24:20.345 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 May 27 04:24:20.352596 coreos-metadata[1627]: May 27 04:24:20.352 INFO Fetch successful May 27 04:24:20.352756 coreos-metadata[1627]: May 27 04:24:20.352 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 May 27 04:24:20.365443 coreos-metadata[1560]: May 27 04:24:20.365 INFO Fetch successful May 27 04:24:20.365867 coreos-metadata[1560]: May 27 04:24:20.365 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 May 27 04:24:20.381016 coreos-metadata[1560]: May 27 04:24:20.380 INFO Fetch successful May 27 04:24:20.381498 coreos-metadata[1560]: May 27 04:24:20.381 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 May 27 04:24:20.383437 coreos-metadata[1627]: May 27 04:24:20.383 INFO Fetch successful May 27 04:24:20.385798 unknown[1627]: wrote ssh authorized keys file for user: core May 27 04:24:20.400475 coreos-metadata[1560]: May 27 04:24:20.400 INFO Fetch successful May 27 04:24:20.410876 update-ssh-keys[1794]: Updated "/home/core/.ssh/authorized_keys" May 27 04:24:20.420743 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). May 27 04:24:20.424184 systemd[1]: Finished sshkeys.service. May 27 04:24:20.442778 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 27 04:24:20.443601 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
May 27 04:24:20.443818 systemd[1]: Reached target multi-user.target - Multi-User System. May 27 04:24:20.444081 systemd[1]: Startup finished in 3.641s (kernel) + 17.603s (initrd) + 13.226s (userspace) = 34.471s. May 27 04:24:24.774336 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 27 04:24:24.776862 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 04:24:25.108188 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 04:24:25.117910 (kubelet)[1810]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 04:24:25.186532 kubelet[1810]: E0527 04:24:25.186438 1810 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 04:24:25.190687 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 04:24:25.190921 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 04:24:25.191374 systemd[1]: kubelet.service: Consumed 244ms CPU time, 107.9M memory peak. May 27 04:24:29.876713 systemd[1]: Started sshd@3-10.244.19.66:22-139.178.68.195:36466.service - OpenSSH per-connection server daemon (139.178.68.195:36466). May 27 04:24:30.790681 sshd[1817]: Accepted publickey for core from 139.178.68.195 port 36466 ssh2: RSA SHA256:eaUZQaqMkKPp5jWU0A069WbcP/hBT0dWaBlUqWT+u6Q May 27 04:24:30.792651 sshd-session[1817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:24:30.800151 systemd-logind[1573]: New session 6 of user core. May 27 04:24:30.809667 systemd[1]: Started session-6.scope - Session 6 of User core. May 27 04:24:31.412650 sshd[1819]: Connection closed by 139.178.68.195 port 36466 May 27 04:24:31.413584 sshd-session[1817]: pam_unix(sshd:session): session closed for user core May 27 04:24:31.419190 systemd[1]: sshd@3-10.244.19.66:22-139.178.68.195:36466.service: Deactivated successfully. May 27 04:24:31.421679 systemd[1]: session-6.scope: Deactivated successfully. May 27 04:24:31.424899 systemd-logind[1573]: Session 6 logged out. Waiting for processes to exit. May 27 04:24:31.427926 systemd-logind[1573]: Removed session 6. May 27 04:24:31.564127 systemd[1]: Started sshd@4-10.244.19.66:22-139.178.68.195:36470.service - OpenSSH per-connection server daemon (139.178.68.195:36470). May 27 04:24:32.471758 sshd[1825]: Accepted publickey for core from 139.178.68.195 port 36470 ssh2: RSA SHA256:eaUZQaqMkKPp5jWU0A069WbcP/hBT0dWaBlUqWT+u6Q May 27 04:24:32.473731 sshd-session[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:24:32.480113 systemd-logind[1573]: New session 7 of user core. May 27 04:24:32.490722 systemd[1]: Started session-7.scope - Session 7 of User core. May 27 04:24:33.085714 sshd[1827]: Connection closed by 139.178.68.195 port 36470 May 27 04:24:33.085570 sshd-session[1825]: pam_unix(sshd:session): session closed for user core May 27 04:24:33.090226 systemd-logind[1573]: Session 7 logged out. Waiting for processes to exit. May 27 04:24:33.090350 systemd[1]: sshd@4-10.244.19.66:22-139.178.68.195:36470.service: Deactivated successfully. May 27 04:24:33.092829 systemd[1]: session-7.scope: Deactivated successfully. 
May 27 04:24:33.095766 systemd-logind[1573]: Removed session 7. May 27 04:24:33.243865 systemd[1]: Started sshd@5-10.244.19.66:22-139.178.68.195:36478.service - OpenSSH per-connection server daemon (139.178.68.195:36478). May 27 04:24:34.158910 sshd[1833]: Accepted publickey for core from 139.178.68.195 port 36478 ssh2: RSA SHA256:eaUZQaqMkKPp5jWU0A069WbcP/hBT0dWaBlUqWT+u6Q May 27 04:24:34.160903 sshd-session[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:24:34.169158 systemd-logind[1573]: New session 8 of user core. May 27 04:24:34.176681 systemd[1]: Started session-8.scope - Session 8 of User core. May 27 04:24:34.779948 sshd[1835]: Connection closed by 139.178.68.195 port 36478 May 27 04:24:34.779130 sshd-session[1833]: pam_unix(sshd:session): session closed for user core May 27 04:24:34.783811 systemd[1]: sshd@5-10.244.19.66:22-139.178.68.195:36478.service: Deactivated successfully. May 27 04:24:34.785971 systemd[1]: session-8.scope: Deactivated successfully. May 27 04:24:34.787923 systemd-logind[1573]: Session 8 logged out. Waiting for processes to exit. May 27 04:24:34.798817 systemd-logind[1573]: Removed session 8. May 27 04:24:34.938862 systemd[1]: Started sshd@6-10.244.19.66:22-139.178.68.195:40006.service - OpenSSH per-connection server daemon (139.178.68.195:40006). May 27 04:24:35.409193 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 27 04:24:35.411449 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 04:24:35.648109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 04:24:35.661236 (kubelet)[1851]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 04:24:35.722430 kubelet[1851]: E0527 04:24:35.722292 1851 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 04:24:35.725615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 04:24:35.725865 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 04:24:35.726722 systemd[1]: kubelet.service: Consumed 205ms CPU time, 110.3M memory peak. May 27 04:24:35.862881 sshd[1841]: Accepted publickey for core from 139.178.68.195 port 40006 ssh2: RSA SHA256:eaUZQaqMkKPp5jWU0A069WbcP/hBT0dWaBlUqWT+u6Q May 27 04:24:35.864813 sshd-session[1841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:24:35.872384 systemd-logind[1573]: New session 9 of user core. May 27 04:24:35.883694 systemd[1]: Started session-9.scope - Session 9 of User core. May 27 04:24:36.353946 sudo[1859]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 27 04:24:36.354434 sudo[1859]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 04:24:36.369680 sudo[1859]: pam_unix(sudo:session): session closed for user root May 27 04:24:36.515430 sshd[1858]: Connection closed by 139.178.68.195 port 40006 May 27 04:24:36.514307 sshd-session[1841]: pam_unix(sshd:session): session closed for user core May 27 04:24:36.519216 systemd[1]: sshd@6-10.244.19.66:22-139.178.68.195:40006.service: Deactivated successfully. 
May 27 04:24:36.521595 systemd[1]: session-9.scope: Deactivated successfully. May 27 04:24:36.524234 systemd-logind[1573]: Session 9 logged out. Waiting for processes to exit. May 27 04:24:36.526330 systemd-logind[1573]: Removed session 9. May 27 04:24:36.674925 systemd[1]: Started sshd@7-10.244.19.66:22-139.178.68.195:40020.service - OpenSSH per-connection server daemon (139.178.68.195:40020). May 27 04:24:37.577004 sshd[1865]: Accepted publickey for core from 139.178.68.195 port 40020 ssh2: RSA SHA256:eaUZQaqMkKPp5jWU0A069WbcP/hBT0dWaBlUqWT+u6Q May 27 04:24:37.579480 sshd-session[1865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:24:37.588713 systemd-logind[1573]: New session 10 of user core. May 27 04:24:37.599711 systemd[1]: Started session-10.scope - Session 10 of User core. May 27 04:24:38.054411 sudo[1869]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 27 04:24:38.055599 sudo[1869]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 04:24:38.063499 sudo[1869]: pam_unix(sudo:session): session closed for user root May 27 04:24:38.071520 sudo[1868]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 27 04:24:38.071967 sudo[1868]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 04:24:38.084977 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 04:24:38.134927 augenrules[1891]: No rules May 27 04:24:38.135814 systemd[1]: audit-rules.service: Deactivated successfully. May 27 04:24:38.136169 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 04:24:38.137852 sudo[1868]: pam_unix(sudo:session): session closed for user root May 27 04:24:38.280156 sshd[1867]: Connection closed by 139.178.68.195 port 40020 May 27 04:24:38.281173 sshd-session[1865]: pam_unix(sshd:session): session closed for user core May 27 04:24:38.286495 systemd-logind[1573]: Session 10 logged out. Waiting for processes to exit. May 27 04:24:38.287579 systemd[1]: sshd@7-10.244.19.66:22-139.178.68.195:40020.service: Deactivated successfully. May 27 04:24:38.289912 systemd[1]: session-10.scope: Deactivated successfully. May 27 04:24:38.292490 systemd-logind[1573]: Removed session 10. May 27 04:24:38.439931 systemd[1]: Started sshd@8-10.244.19.66:22-139.178.68.195:40028.service - OpenSSH per-connection server daemon (139.178.68.195:40028). May 27 04:24:39.357191 sshd[1900]: Accepted publickey for core from 139.178.68.195 port 40028 ssh2: RSA SHA256:eaUZQaqMkKPp5jWU0A069WbcP/hBT0dWaBlUqWT+u6Q May 27 04:24:39.359063 sshd-session[1900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:24:39.366456 systemd-logind[1573]: New session 11 of user core. May 27 04:24:39.373625 systemd[1]: Started session-11.scope - Session 11 of User core. May 27 04:24:39.835373 sudo[1903]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 27 04:24:39.836564 sudo[1903]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 04:24:40.343701 systemd[1]: Starting docker.service - Docker Application Container Engine... 
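Session 10 above removes the two audit rule files and restarts audit-rules.service, after which augenrules reports "No rules", i.e. the node is left with an empty audit ruleset. A quick way to confirm that state on a similar host, as a sketch:

    sudo auditctl -l        # prints "No rules" when nothing is loaded
    ls /etc/audit/rules.d/  # 80-selinux.rules and 99-default.rules were deleted in the session above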
May 27 04:24:40.360095 (dockerd)[1920]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 27 04:24:40.719103 dockerd[1920]: time="2025-05-27T04:24:40.719012430Z" level=info msg="Starting up" May 27 04:24:40.721507 dockerd[1920]: time="2025-05-27T04:24:40.721429788Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 27 04:24:40.762954 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1024061684-merged.mount: Deactivated successfully. May 27 04:24:40.775557 systemd[1]: var-lib-docker-metacopy\x2dcheck2940579417-merged.mount: Deactivated successfully. May 27 04:24:40.808429 dockerd[1920]: time="2025-05-27T04:24:40.808273432Z" level=info msg="Loading containers: start." May 27 04:24:40.829081 kernel: Initializing XFRM netlink socket May 27 04:24:41.176447 systemd-networkd[1530]: docker0: Link UP May 27 04:24:41.181380 dockerd[1920]: time="2025-05-27T04:24:41.181139161Z" level=info msg="Loading containers: done." May 27 04:24:41.203179 dockerd[1920]: time="2025-05-27T04:24:41.203097797Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 27 04:24:41.203383 dockerd[1920]: time="2025-05-27T04:24:41.203276474Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 27 04:24:41.203627 dockerd[1920]: time="2025-05-27T04:24:41.203578131Z" level=info msg="Initializing buildkit" May 27 04:24:41.232082 dockerd[1920]: time="2025-05-27T04:24:41.232006759Z" level=info msg="Completed buildkit initialization" May 27 04:24:41.242632 dockerd[1920]: time="2025-05-27T04:24:41.242560576Z" level=info msg="Daemon has completed initialization" May 27 04:24:41.242930 dockerd[1920]: time="2025-05-27T04:24:41.242834957Z" level=info msg="API listen on /run/docker.sock" May 27 04:24:41.243067 systemd[1]: Started docker.service - Docker Application Container Engine. May 27 04:24:41.758761 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck756693214-merged.mount: Deactivated successfully. May 27 04:24:41.976685 containerd[1585]: time="2025-05-27T04:24:41.976589684Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 27 04:24:42.655059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1411592149.mount: Deactivated successfully. May 27 04:24:43.500962 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
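The dockerd warning about "Not using native diff for overlay2" is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled in the kernel, the daemon falls back to the slower naive diff when building images, while running containers are generally unaffected. The storage driver, the kernel knob behind the warning, and the API socket from the last daemon line can all be checked from a shell, as a sketch:

    docker info --format '{{.Driver}}'                # expected: overlay2
    cat /sys/module/overlay/parameters/redirect_dir   # "Y" here is what triggers the warning
    curl --unix-socket /run/docker.sock http://localhost/_ping   # the "API listen on /run/docker.sock" endpoint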
May 27 04:24:44.568894 containerd[1585]: time="2025-05-27T04:24:44.568827041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 04:24:44.570157 containerd[1585]: time="2025-05-27T04:24:44.570123981Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=28682887" May 27 04:24:44.570974 containerd[1585]: time="2025-05-27T04:24:44.570896631Z" level=info msg="ImageCreate event name:\"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 04:24:44.574307 containerd[1585]: time="2025-05-27T04:24:44.574231335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 04:24:44.575841 containerd[1585]: time="2025-05-27T04:24:44.575661535Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"28679679\" in 2.598963584s" May 27 04:24:44.576054 containerd[1585]: time="2025-05-27T04:24:44.576019636Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:1c20c8797e48698afa3380793df2f1fb260e3209df72d8e864e1bc73af8336e5\"" May 27 04:24:44.578629 containerd[1585]: time="2025-05-27T04:24:44.578595855Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 27 04:24:45.909713 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 27 04:24:45.913732 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 04:24:46.148208 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 04:24:46.170213 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 04:24:46.255346 kubelet[2192]: E0527 04:24:46.254963 2192 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 04:24:46.258185 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 04:24:46.258477 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 04:24:46.259305 systemd[1]: kubelet.service: Consumed 256ms CPU time, 110.3M memory peak. 
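The PullImage entries in this part of the log fetch the v1.32.4 control-plane images one at a time (kube-apiserver above; kube-controller-manager, kube-scheduler and kube-proxy below) as containerd resolves them. The same set can be pre-pulled in one step, which is a common way to shorten bootstrap; a sketch, assuming kubeadm and crictl are available on the node:

    kubeadm config images pull --kubernetes-version v1.32.4
    # or image by image through the CRI:
    crictl pull registry.k8s.io/kube-apiserver:v1.32.4
    crictl images | grep registry.k8s.io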
May 27 04:24:47.363778 containerd[1585]: time="2025-05-27T04:24:47.363709660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 04:24:47.365485 containerd[1585]: time="2025-05-27T04:24:47.365415254Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=24779597" May 27 04:24:47.366925 containerd[1585]: time="2025-05-27T04:24:47.366846365Z" level=info msg="ImageCreate event name:\"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 04:24:47.371996 containerd[1585]: time="2025-05-27T04:24:47.371908404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 04:24:47.374320 containerd[1585]: time="2025-05-27T04:24:47.374256671Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"26267962\" in 2.795521804s" May 27 04:24:47.374320 containerd[1585]: time="2025-05-27T04:24:47.374314747Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:4db5364cd5509e0fc8e9f821fbc4b31ed79d4c9ae21809d22030ad67d530a61a\"" May 27 04:24:47.375052 containerd[1585]: time="2025-05-27T04:24:47.375008194Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 27 04:24:50.051070 containerd[1585]: time="2025-05-27T04:24:50.049698849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 04:24:50.053222 containerd[1585]: time="2025-05-27T04:24:50.053002774Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=19169946" May 27 04:24:50.054692 containerd[1585]: time="2025-05-27T04:24:50.054648154Z" level=info msg="ImageCreate event name:\"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 04:24:50.064474 containerd[1585]: time="2025-05-27T04:24:50.064315952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 04:24:50.065794 containerd[1585]: time="2025-05-27T04:24:50.065636239Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"20658329\" in 2.690581312s" May 27 04:24:50.065794 containerd[1585]: time="2025-05-27T04:24:50.065680340Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:70a252485ed1f2e8332b6f0a5f8f57443bfbc3c480228f8dcd82ad5ab5cc4000\"" May 27 04:24:50.066319 
containerd[1585]: time="2025-05-27T04:24:50.066289838Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 27 04:24:51.931147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3181203505.mount: Deactivated successfully. May 27 04:24:52.962573 containerd[1585]: time="2025-05-27T04:24:52.962433730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 04:24:52.963798 containerd[1585]: time="2025-05-27T04:24:52.963384937Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=30917864" May 27 04:24:52.965294 containerd[1585]: time="2025-05-27T04:24:52.965212517Z" level=info msg="ImageCreate event name:\"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 04:24:52.968544 containerd[1585]: time="2025-05-27T04:24:52.968494132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 04:24:52.969503 containerd[1585]: time="2025-05-27T04:24:52.969470349Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"30916875\" in 2.903019633s" May 27 04:24:52.969699 containerd[1585]: time="2025-05-27T04:24:52.969634410Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:608f0c8bf7f9651ca79f170235ea5eefb978a0c1da132e7477a88ad37d171ad3\"" May 27 04:24:52.970501 containerd[1585]: time="2025-05-27T04:24:52.970449058Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 27 04:24:53.902366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2476895066.mount: Deactivated successfully. 
May 27 04:24:55.472442 containerd[1585]: time="2025-05-27T04:24:55.472358502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 04:24:55.474926 containerd[1585]: time="2025-05-27T04:24:55.474884911Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" May 27 04:24:55.476415 containerd[1585]: time="2025-05-27T04:24:55.476153319Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 04:24:55.480336 containerd[1585]: time="2025-05-27T04:24:55.480299670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 04:24:55.482620 containerd[1585]: time="2025-05-27T04:24:55.482582216Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.512091877s" May 27 04:24:55.482880 containerd[1585]: time="2025-05-27T04:24:55.482747913Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 27 04:24:55.483719 containerd[1585]: time="2025-05-27T04:24:55.483693982Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 27 04:24:55.975362 update_engine[1574]: I20250527 04:24:55.974299 1574 update_attempter.cc:509] Updating boot flags... May 27 04:24:56.409202 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 27 04:24:56.414971 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 04:24:56.630448 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 04:24:56.646076 (kubelet)[2292]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 04:24:56.734816 kubelet[2292]: E0527 04:24:56.733596 2292 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 04:24:56.737730 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 04:24:56.738811 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 04:24:56.740096 systemd[1]: kubelet.service: Consumed 216ms CPU time, 110.4M memory peak. May 27 04:24:56.958828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount596656504.mount: Deactivated successfully. 
May 27 04:24:56.966858 containerd[1585]: time="2025-05-27T04:24:56.965713306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 04:24:56.968735 containerd[1585]: time="2025-05-27T04:24:56.968611755Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" May 27 04:24:56.971426 containerd[1585]: time="2025-05-27T04:24:56.970034082Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 04:24:56.972670 containerd[1585]: time="2025-05-27T04:24:56.972632810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 04:24:56.973881 containerd[1585]: time="2025-05-27T04:24:56.973844604Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.489962701s" May 27 04:24:56.974000 containerd[1585]: time="2025-05-27T04:24:56.973974974Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 27 04:24:56.974827 containerd[1585]: time="2025-05-27T04:24:56.974779330Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 27 04:24:58.566852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount39080527.mount: Deactivated successfully. 
May 27 04:25:03.047870 containerd[1585]: time="2025-05-27T04:25:03.047796658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 04:25:03.049256 containerd[1585]: time="2025-05-27T04:25:03.049218351Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551368" May 27 04:25:03.050031 containerd[1585]: time="2025-05-27T04:25:03.049915147Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 04:25:03.053459 containerd[1585]: time="2025-05-27T04:25:03.053426571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 04:25:03.055244 containerd[1585]: time="2025-05-27T04:25:03.054988010Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 6.080164863s" May 27 04:25:03.055244 containerd[1585]: time="2025-05-27T04:25:03.055034381Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" May 27 04:25:06.170224 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 04:25:06.171272 systemd[1]: kubelet.service: Consumed 216ms CPU time, 110.4M memory peak. May 27 04:25:06.175450 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 04:25:06.212159 systemd[1]: Reload requested from client PID 2383 ('systemctl') (unit session-11.scope)... May 27 04:25:06.212210 systemd[1]: Reloading... May 27 04:25:06.371921 zram_generator::config[2428]: No configuration found. May 27 04:25:06.542034 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 04:25:06.719118 systemd[1]: Reloading finished in 506 ms. May 27 04:25:06.788005 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 04:25:06.792614 systemd[1]: kubelet.service: Deactivated successfully. May 27 04:25:06.792999 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 04:25:06.793085 systemd[1]: kubelet.service: Consumed 147ms CPU time, 97.9M memory peak. May 27 04:25:06.795637 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 04:25:06.972096 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 04:25:06.989045 (kubelet)[2497]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 04:25:07.056063 kubelet[2497]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 04:25:07.056063 kubelet[2497]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. May 27 04:25:07.056063 kubelet[2497]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 04:25:07.056063 kubelet[2497]: I0527 04:25:07.055745 2497 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 04:25:08.202680 kubelet[2497]: I0527 04:25:08.202599 2497 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 27 04:25:08.202680 kubelet[2497]: I0527 04:25:08.202672 2497 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 04:25:08.203315 kubelet[2497]: I0527 04:25:08.203174 2497 server.go:954] "Client rotation is on, will bootstrap in background" May 27 04:25:08.248659 kubelet[2497]: E0527 04:25:08.248603 2497 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.19.66:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.19.66:6443: connect: connection refused" logger="UnhandledError" May 27 04:25:08.251599 kubelet[2497]: I0527 04:25:08.251390 2497 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 04:25:08.277968 kubelet[2497]: I0527 04:25:08.277723 2497 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 04:25:08.286833 kubelet[2497]: I0527 04:25:08.286700 2497 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 04:25:08.289053 kubelet[2497]: I0527 04:25:08.288957 2497 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 04:25:08.289360 kubelet[2497]: I0527 04:25:08.289023 2497 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-g11ua.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 04:25:08.289689 kubelet[2497]: I0527 04:25:08.289377 2497 topology_manager.go:138] "Creating topology manager with none policy" May 27 04:25:08.289689 kubelet[2497]: I0527 04:25:08.289411 2497 container_manager_linux.go:304] "Creating device plugin manager" May 27 04:25:08.290826 kubelet[2497]: I0527 04:25:08.290781 2497 state_mem.go:36] "Initialized new in-memory state store" May 27 04:25:08.294791 kubelet[2497]: I0527 04:25:08.294626 2497 kubelet.go:446] "Attempting to sync node with API server" May 27 04:25:08.294791 kubelet[2497]: I0527 04:25:08.294698 2497 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 04:25:08.297642 kubelet[2497]: I0527 04:25:08.297583 2497 kubelet.go:352] "Adding apiserver pod source" May 27 04:25:08.298472 kubelet[2497]: I0527 04:25:08.297912 2497 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 04:25:08.305614 kubelet[2497]: W0527 04:25:08.305508 2497 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.19.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-g11ua.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.19.66:6443: connect: connection refused May 27 04:25:08.305812 kubelet[2497]: E0527 04:25:08.305645 2497 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.19.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-g11ua.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.19.66:6443: connect: connection refused" logger="UnhandledError" May 27 
04:25:08.306306 kubelet[2497]: W0527 04:25:08.306250 2497 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.19.66:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.19.66:6443: connect: connection refused May 27 04:25:08.306375 kubelet[2497]: E0527 04:25:08.306321 2497 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.19.66:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.19.66:6443: connect: connection refused" logger="UnhandledError" May 27 04:25:08.307515 kubelet[2497]: I0527 04:25:08.307307 2497 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 04:25:08.310780 kubelet[2497]: I0527 04:25:08.310747 2497 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 04:25:08.315592 kubelet[2497]: W0527 04:25:08.314499 2497 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 27 04:25:08.318227 kubelet[2497]: I0527 04:25:08.318197 2497 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 04:25:08.318683 kubelet[2497]: I0527 04:25:08.318653 2497 server.go:1287] "Started kubelet" May 27 04:25:08.324172 kubelet[2497]: I0527 04:25:08.323677 2497 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 04:25:08.330323 kubelet[2497]: I0527 04:25:08.330232 2497 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 27 04:25:08.336148 kubelet[2497]: E0527 04:25:08.335056 2497 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-g11ua.gb1.brightbox.com\" not found" May 27 04:25:08.343342 kubelet[2497]: I0527 04:25:08.331330 2497 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 04:25:08.343342 kubelet[2497]: I0527 04:25:08.333723 2497 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 04:25:08.343342 kubelet[2497]: I0527 04:25:08.333952 2497 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 04:25:08.343342 kubelet[2497]: I0527 04:25:08.343226 2497 reconciler.go:26] "Reconciler: start to sync state" May 27 04:25:08.347145 kubelet[2497]: I0527 04:25:08.343882 2497 server.go:479] "Adding debug handlers to kubelet server" May 27 04:25:08.348146 kubelet[2497]: I0527 04:25:08.330542 2497 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 04:25:08.348146 kubelet[2497]: W0527 04:25:08.347888 2497 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.19.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.19.66:6443: connect: connection refused May 27 04:25:08.348146 kubelet[2497]: E0527 04:25:08.347954 2497 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.19.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.19.66:6443: connect: connection refused" logger="UnhandledError" May 27 04:25:08.348146 kubelet[2497]: 
E0527 04:25:08.348069 2497 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.19.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-g11ua.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.19.66:6443: connect: connection refused" interval="200ms" May 27 04:25:08.356646 kubelet[2497]: I0527 04:25:08.356612 2497 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 04:25:08.360755 kubelet[2497]: I0527 04:25:08.359972 2497 factory.go:221] Registration of the systemd container factory successfully May 27 04:25:08.360755 kubelet[2497]: I0527 04:25:08.360107 2497 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 04:25:08.360755 kubelet[2497]: E0527 04:25:08.349469 2497 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.19.66:6443/api/v1/namespaces/default/events\": dial tcp 10.244.19.66:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-g11ua.gb1.brightbox.com.184347b19112b070 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-g11ua.gb1.brightbox.com,UID:srv-g11ua.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-g11ua.gb1.brightbox.com,},FirstTimestamp:2025-05-27 04:25:08.31858904 +0000 UTC m=+1.324398499,LastTimestamp:2025-05-27 04:25:08.31858904 +0000 UTC m=+1.324398499,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-g11ua.gb1.brightbox.com,}" May 27 04:25:08.364022 kubelet[2497]: I0527 04:25:08.363825 2497 factory.go:221] Registration of the containerd container factory successfully May 27 04:25:08.366619 kubelet[2497]: E0527 04:25:08.366589 2497 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 04:25:08.388296 kubelet[2497]: I0527 04:25:08.388237 2497 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 04:25:08.391856 kubelet[2497]: I0527 04:25:08.391826 2497 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 27 04:25:08.392035 kubelet[2497]: I0527 04:25:08.392014 2497 status_manager.go:227] "Starting to sync pod status with apiserver" May 27 04:25:08.392169 kubelet[2497]: I0527 04:25:08.392149 2497 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
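The repeated "dial tcp 10.244.19.66:6443: connect: connection refused" errors from the reflectors and the lease controller are expected at this point in the boot: the API server being contacted runs as a static pod that this same kubelet has only just been asked to create from /etc/kubernetes/manifests, so nothing is listening on port 6443 yet. Watching the endpoint come up, as a sketch:

    crictl ps --name kube-apiserver             # containers appear once the static pod below is started
    ss -ltnp | grep 6443                        # shows the listener once kube-apiserver binds
    curl -k https://10.244.19.66:6443/healthz   # refused now, returns "ok" once the apiserver is serving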
May 27 04:25:08.392256 kubelet[2497]: I0527 04:25:08.392240 2497 kubelet.go:2382] "Starting kubelet main sync loop" May 27 04:25:08.392458 kubelet[2497]: E0527 04:25:08.392423 2497 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 04:25:08.396481 kubelet[2497]: W0527 04:25:08.396113 2497 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.19.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.19.66:6443: connect: connection refused May 27 04:25:08.396481 kubelet[2497]: E0527 04:25:08.396163 2497 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.19.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.19.66:6443: connect: connection refused" logger="UnhandledError" May 27 04:25:08.397894 kubelet[2497]: I0527 04:25:08.397868 2497 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 04:25:08.397894 kubelet[2497]: I0527 04:25:08.397892 2497 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 04:25:08.398016 kubelet[2497]: I0527 04:25:08.397923 2497 state_mem.go:36] "Initialized new in-memory state store" May 27 04:25:08.399911 kubelet[2497]: I0527 04:25:08.399858 2497 policy_none.go:49] "None policy: Start" May 27 04:25:08.399911 kubelet[2497]: I0527 04:25:08.399911 2497 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 04:25:08.400031 kubelet[2497]: I0527 04:25:08.399962 2497 state_mem.go:35] "Initializing new in-memory state store" May 27 04:25:08.408536 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 27 04:25:08.427218 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 27 04:25:08.437224 kubelet[2497]: E0527 04:25:08.437133 2497 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-g11ua.gb1.brightbox.com\" not found" May 27 04:25:08.448128 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 27 04:25:08.451301 kubelet[2497]: I0527 04:25:08.450822 2497 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 04:25:08.451301 kubelet[2497]: I0527 04:25:08.451131 2497 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 04:25:08.451301 kubelet[2497]: I0527 04:25:08.451156 2497 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 04:25:08.454050 kubelet[2497]: I0527 04:25:08.453964 2497 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 04:25:08.455677 kubelet[2497]: E0527 04:25:08.455653 2497 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 27 04:25:08.455907 kubelet[2497]: E0527 04:25:08.455874 2497 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-g11ua.gb1.brightbox.com\" not found" May 27 04:25:08.514257 systemd[1]: Created slice kubepods-burstable-pod2009e5ebe02a57b7dd7bac592b2d3404.slice - libcontainer container kubepods-burstable-pod2009e5ebe02a57b7dd7bac592b2d3404.slice. May 27 04:25:08.533751 kubelet[2497]: E0527 04:25:08.533670 2497 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-g11ua.gb1.brightbox.com\" not found" node="srv-g11ua.gb1.brightbox.com" May 27 04:25:08.540068 systemd[1]: Created slice kubepods-burstable-pod0fb2c61812d19e7ddaa45e15d1a10775.slice - libcontainer container kubepods-burstable-pod0fb2c61812d19e7ddaa45e15d1a10775.slice. May 27 04:25:08.543842 kubelet[2497]: E0527 04:25:08.543775 2497 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-g11ua.gb1.brightbox.com\" not found" node="srv-g11ua.gb1.brightbox.com" May 27 04:25:08.549570 kubelet[2497]: E0527 04:25:08.549457 2497 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.19.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-g11ua.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.19.66:6443: connect: connection refused" interval="400ms" May 27 04:25:08.553995 systemd[1]: Created slice kubepods-burstable-pod4285f2bf344049d282081561a0ba84a2.slice - libcontainer container kubepods-burstable-pod4285f2bf344049d282081561a0ba84a2.slice. May 27 04:25:08.556337 kubelet[2497]: I0527 04:25:08.556298 2497 kubelet_node_status.go:75] "Attempting to register node" node="srv-g11ua.gb1.brightbox.com" May 27 04:25:08.557134 kubelet[2497]: E0527 04:25:08.557102 2497 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.19.66:6443/api/v1/nodes\": dial tcp 10.244.19.66:6443: connect: connection refused" node="srv-g11ua.gb1.brightbox.com" May 27 04:25:08.558272 kubelet[2497]: E0527 04:25:08.557992 2497 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-g11ua.gb1.brightbox.com\" not found" node="srv-g11ua.gb1.brightbox.com" May 27 04:25:08.645256 kubelet[2497]: I0527 04:25:08.645182 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2009e5ebe02a57b7dd7bac592b2d3404-k8s-certs\") pod \"kube-controller-manager-srv-g11ua.gb1.brightbox.com\" (UID: \"2009e5ebe02a57b7dd7bac592b2d3404\") " pod="kube-system/kube-controller-manager-srv-g11ua.gb1.brightbox.com" May 27 04:25:08.645776 kubelet[2497]: I0527 04:25:08.645653 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2009e5ebe02a57b7dd7bac592b2d3404-kubeconfig\") pod \"kube-controller-manager-srv-g11ua.gb1.brightbox.com\" (UID: \"2009e5ebe02a57b7dd7bac592b2d3404\") " pod="kube-system/kube-controller-manager-srv-g11ua.gb1.brightbox.com" May 27 04:25:08.645956 kubelet[2497]: I0527 04:25:08.645732 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0fb2c61812d19e7ddaa45e15d1a10775-ca-certs\") pod 
\"kube-apiserver-srv-g11ua.gb1.brightbox.com\" (UID: \"0fb2c61812d19e7ddaa45e15d1a10775\") " pod="kube-system/kube-apiserver-srv-g11ua.gb1.brightbox.com" May 27 04:25:08.646133 kubelet[2497]: I0527 04:25:08.646082 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2009e5ebe02a57b7dd7bac592b2d3404-ca-certs\") pod \"kube-controller-manager-srv-g11ua.gb1.brightbox.com\" (UID: \"2009e5ebe02a57b7dd7bac592b2d3404\") " pod="kube-system/kube-controller-manager-srv-g11ua.gb1.brightbox.com" May 27 04:25:08.646217 kubelet[2497]: I0527 04:25:08.646169 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2009e5ebe02a57b7dd7bac592b2d3404-flexvolume-dir\") pod \"kube-controller-manager-srv-g11ua.gb1.brightbox.com\" (UID: \"2009e5ebe02a57b7dd7bac592b2d3404\") " pod="kube-system/kube-controller-manager-srv-g11ua.gb1.brightbox.com" May 27 04:25:08.646292 kubelet[2497]: I0527 04:25:08.646245 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0fb2c61812d19e7ddaa45e15d1a10775-k8s-certs\") pod \"kube-apiserver-srv-g11ua.gb1.brightbox.com\" (UID: \"0fb2c61812d19e7ddaa45e15d1a10775\") " pod="kube-system/kube-apiserver-srv-g11ua.gb1.brightbox.com" May 27 04:25:08.646353 kubelet[2497]: I0527 04:25:08.646288 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0fb2c61812d19e7ddaa45e15d1a10775-usr-share-ca-certificates\") pod \"kube-apiserver-srv-g11ua.gb1.brightbox.com\" (UID: \"0fb2c61812d19e7ddaa45e15d1a10775\") " pod="kube-system/kube-apiserver-srv-g11ua.gb1.brightbox.com" May 27 04:25:08.646353 kubelet[2497]: I0527 04:25:08.646328 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2009e5ebe02a57b7dd7bac592b2d3404-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-g11ua.gb1.brightbox.com\" (UID: \"2009e5ebe02a57b7dd7bac592b2d3404\") " pod="kube-system/kube-controller-manager-srv-g11ua.gb1.brightbox.com" May 27 04:25:08.646482 kubelet[2497]: I0527 04:25:08.646359 2497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4285f2bf344049d282081561a0ba84a2-kubeconfig\") pod \"kube-scheduler-srv-g11ua.gb1.brightbox.com\" (UID: \"4285f2bf344049d282081561a0ba84a2\") " pod="kube-system/kube-scheduler-srv-g11ua.gb1.brightbox.com" May 27 04:25:08.760894 kubelet[2497]: I0527 04:25:08.760770 2497 kubelet_node_status.go:75] "Attempting to register node" node="srv-g11ua.gb1.brightbox.com" May 27 04:25:08.761682 kubelet[2497]: E0527 04:25:08.761644 2497 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.19.66:6443/api/v1/nodes\": dial tcp 10.244.19.66:6443: connect: connection refused" node="srv-g11ua.gb1.brightbox.com" May 27 04:25:08.838091 containerd[1585]: time="2025-05-27T04:25:08.838007864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-g11ua.gb1.brightbox.com,Uid:2009e5ebe02a57b7dd7bac592b2d3404,Namespace:kube-system,Attempt:0,}" May 27 04:25:08.845348 containerd[1585]: 
time="2025-05-27T04:25:08.845171395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-g11ua.gb1.brightbox.com,Uid:0fb2c61812d19e7ddaa45e15d1a10775,Namespace:kube-system,Attempt:0,}" May 27 04:25:08.860150 containerd[1585]: time="2025-05-27T04:25:08.859946340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-g11ua.gb1.brightbox.com,Uid:4285f2bf344049d282081561a0ba84a2,Namespace:kube-system,Attempt:0,}" May 27 04:25:08.951508 kubelet[2497]: E0527 04:25:08.951454 2497 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.19.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-g11ua.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.19.66:6443: connect: connection refused" interval="800ms" May 27 04:25:08.997665 containerd[1585]: time="2025-05-27T04:25:08.997471712Z" level=info msg="connecting to shim 28d710ccd30319aa25e4f169e158e5e0fe565f72aafaf60571d015c4eb43ffd4" address="unix:///run/containerd/s/278d60837b24515e2efc829d8817e535c143596804c020575762db1c03783a6c" namespace=k8s.io protocol=ttrpc version=3 May 27 04:25:08.998545 containerd[1585]: time="2025-05-27T04:25:08.998486475Z" level=info msg="connecting to shim d111473326d31cf3173635e5ba42040df3998f354c86c93a924f115b325c3375" address="unix:///run/containerd/s/b72ad555f720c1f89871be5cad016291fde5cef976c59ffc38ee57d22a9f6fab" namespace=k8s.io protocol=ttrpc version=3 May 27 04:25:09.003806 containerd[1585]: time="2025-05-27T04:25:09.003692500Z" level=info msg="connecting to shim a67d54635acac390b94079e63a567cc6af32723401d98623ab59ff80cb2ef109" address="unix:///run/containerd/s/33ae4e1d1b95e80f47f65c73c5deb3a06dcfd01f626305b7c3c901f3725e70bb" namespace=k8s.io protocol=ttrpc version=3 May 27 04:25:09.139894 systemd[1]: Started cri-containerd-a67d54635acac390b94079e63a567cc6af32723401d98623ab59ff80cb2ef109.scope - libcontainer container a67d54635acac390b94079e63a567cc6af32723401d98623ab59ff80cb2ef109. May 27 04:25:09.154986 systemd[1]: Started cri-containerd-28d710ccd30319aa25e4f169e158e5e0fe565f72aafaf60571d015c4eb43ffd4.scope - libcontainer container 28d710ccd30319aa25e4f169e158e5e0fe565f72aafaf60571d015c4eb43ffd4. May 27 04:25:09.165439 kubelet[2497]: I0527 04:25:09.165330 2497 kubelet_node_status.go:75] "Attempting to register node" node="srv-g11ua.gb1.brightbox.com" May 27 04:25:09.166331 kubelet[2497]: E0527 04:25:09.166215 2497 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.19.66:6443/api/v1/nodes\": dial tcp 10.244.19.66:6443: connect: connection refused" node="srv-g11ua.gb1.brightbox.com" May 27 04:25:09.167178 systemd[1]: Started cri-containerd-d111473326d31cf3173635e5ba42040df3998f354c86c93a924f115b325c3375.scope - libcontainer container d111473326d31cf3173635e5ba42040df3998f354c86c93a924f115b325c3375. 
May 27 04:25:09.288540 containerd[1585]: time="2025-05-27T04:25:09.288471015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-g11ua.gb1.brightbox.com,Uid:4285f2bf344049d282081561a0ba84a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"a67d54635acac390b94079e63a567cc6af32723401d98623ab59ff80cb2ef109\"" May 27 04:25:09.289157 containerd[1585]: time="2025-05-27T04:25:09.288917715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-g11ua.gb1.brightbox.com,Uid:0fb2c61812d19e7ddaa45e15d1a10775,Namespace:kube-system,Attempt:0,} returns sandbox id \"28d710ccd30319aa25e4f169e158e5e0fe565f72aafaf60571d015c4eb43ffd4\"" May 27 04:25:09.298999 kubelet[2497]: W0527 04:25:09.298925 2497 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.19.66:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.19.66:6443: connect: connection refused May 27 04:25:09.299919 kubelet[2497]: E0527 04:25:09.299013 2497 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.19.66:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.19.66:6443: connect: connection refused" logger="UnhandledError" May 27 04:25:09.303627 containerd[1585]: time="2025-05-27T04:25:09.303015978Z" level=info msg="CreateContainer within sandbox \"a67d54635acac390b94079e63a567cc6af32723401d98623ab59ff80cb2ef109\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 04:25:09.303627 containerd[1585]: time="2025-05-27T04:25:09.303367178Z" level=info msg="CreateContainer within sandbox \"28d710ccd30319aa25e4f169e158e5e0fe565f72aafaf60571d015c4eb43ffd4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 04:25:09.321594 containerd[1585]: time="2025-05-27T04:25:09.321542238Z" level=info msg="Container a36af6f3c90c6c7768257ab201d3165227679d476993218114a7e4802cbc30d0: CDI devices from CRI Config.CDIDevices: []" May 27 04:25:09.322601 containerd[1585]: time="2025-05-27T04:25:09.322536658Z" level=info msg="Container 2c00c12cc0d09a3c923e3f4ee4dddd84eb5c28cbdae2240726b7b90e6acefe3f: CDI devices from CRI Config.CDIDevices: []" May 27 04:25:09.325744 containerd[1585]: time="2025-05-27T04:25:09.325708542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-g11ua.gb1.brightbox.com,Uid:2009e5ebe02a57b7dd7bac592b2d3404,Namespace:kube-system,Attempt:0,} returns sandbox id \"d111473326d31cf3173635e5ba42040df3998f354c86c93a924f115b325c3375\"" May 27 04:25:09.329746 containerd[1585]: time="2025-05-27T04:25:09.329517881Z" level=info msg="CreateContainer within sandbox \"d111473326d31cf3173635e5ba42040df3998f354c86c93a924f115b325c3375\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 04:25:09.331475 containerd[1585]: time="2025-05-27T04:25:09.331440549Z" level=info msg="CreateContainer within sandbox \"28d710ccd30319aa25e4f169e158e5e0fe565f72aafaf60571d015c4eb43ffd4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a36af6f3c90c6c7768257ab201d3165227679d476993218114a7e4802cbc30d0\"" May 27 04:25:09.334079 containerd[1585]: time="2025-05-27T04:25:09.334046406Z" level=info msg="StartContainer for \"a36af6f3c90c6c7768257ab201d3165227679d476993218114a7e4802cbc30d0\"" May 27 04:25:09.335623 containerd[1585]: 
time="2025-05-27T04:25:09.335576027Z" level=info msg="connecting to shim a36af6f3c90c6c7768257ab201d3165227679d476993218114a7e4802cbc30d0" address="unix:///run/containerd/s/278d60837b24515e2efc829d8817e535c143596804c020575762db1c03783a6c" protocol=ttrpc version=3 May 27 04:25:09.336623 containerd[1585]: time="2025-05-27T04:25:09.336588721Z" level=info msg="CreateContainer within sandbox \"a67d54635acac390b94079e63a567cc6af32723401d98623ab59ff80cb2ef109\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2c00c12cc0d09a3c923e3f4ee4dddd84eb5c28cbdae2240726b7b90e6acefe3f\"" May 27 04:25:09.337273 containerd[1585]: time="2025-05-27T04:25:09.337242626Z" level=info msg="StartContainer for \"2c00c12cc0d09a3c923e3f4ee4dddd84eb5c28cbdae2240726b7b90e6acefe3f\"" May 27 04:25:09.340096 containerd[1585]: time="2025-05-27T04:25:09.340039418Z" level=info msg="connecting to shim 2c00c12cc0d09a3c923e3f4ee4dddd84eb5c28cbdae2240726b7b90e6acefe3f" address="unix:///run/containerd/s/33ae4e1d1b95e80f47f65c73c5deb3a06dcfd01f626305b7c3c901f3725e70bb" protocol=ttrpc version=3 May 27 04:25:09.346927 containerd[1585]: time="2025-05-27T04:25:09.346883219Z" level=info msg="Container 8b7ad50b42b42c11b80063ebb16a93ce0f81ecab3eebff7f31a25a180ddd2bb7: CDI devices from CRI Config.CDIDevices: []" May 27 04:25:09.357644 containerd[1585]: time="2025-05-27T04:25:09.357598380Z" level=info msg="CreateContainer within sandbox \"d111473326d31cf3173635e5ba42040df3998f354c86c93a924f115b325c3375\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8b7ad50b42b42c11b80063ebb16a93ce0f81ecab3eebff7f31a25a180ddd2bb7\"" May 27 04:25:09.360422 containerd[1585]: time="2025-05-27T04:25:09.359555639Z" level=info msg="StartContainer for \"8b7ad50b42b42c11b80063ebb16a93ce0f81ecab3eebff7f31a25a180ddd2bb7\"" May 27 04:25:09.361541 containerd[1585]: time="2025-05-27T04:25:09.361384286Z" level=info msg="connecting to shim 8b7ad50b42b42c11b80063ebb16a93ce0f81ecab3eebff7f31a25a180ddd2bb7" address="unix:///run/containerd/s/b72ad555f720c1f89871be5cad016291fde5cef976c59ffc38ee57d22a9f6fab" protocol=ttrpc version=3 May 27 04:25:09.372659 systemd[1]: Started cri-containerd-a36af6f3c90c6c7768257ab201d3165227679d476993218114a7e4802cbc30d0.scope - libcontainer container a36af6f3c90c6c7768257ab201d3165227679d476993218114a7e4802cbc30d0. May 27 04:25:09.384729 systemd[1]: Started cri-containerd-2c00c12cc0d09a3c923e3f4ee4dddd84eb5c28cbdae2240726b7b90e6acefe3f.scope - libcontainer container 2c00c12cc0d09a3c923e3f4ee4dddd84eb5c28cbdae2240726b7b90e6acefe3f. May 27 04:25:09.406791 systemd[1]: Started cri-containerd-8b7ad50b42b42c11b80063ebb16a93ce0f81ecab3eebff7f31a25a180ddd2bb7.scope - libcontainer container 8b7ad50b42b42c11b80063ebb16a93ce0f81ecab3eebff7f31a25a180ddd2bb7. 
May 27 04:25:09.505898 containerd[1585]: time="2025-05-27T04:25:09.505849461Z" level=info msg="StartContainer for \"a36af6f3c90c6c7768257ab201d3165227679d476993218114a7e4802cbc30d0\" returns successfully" May 27 04:25:09.545884 kubelet[2497]: W0527 04:25:09.545791 2497 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.19.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.19.66:6443: connect: connection refused May 27 04:25:09.545884 kubelet[2497]: E0527 04:25:09.545881 2497 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.19.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.19.66:6443: connect: connection refused" logger="UnhandledError" May 27 04:25:09.548510 containerd[1585]: time="2025-05-27T04:25:09.548132672Z" level=info msg="StartContainer for \"8b7ad50b42b42c11b80063ebb16a93ce0f81ecab3eebff7f31a25a180ddd2bb7\" returns successfully" May 27 04:25:09.563331 containerd[1585]: time="2025-05-27T04:25:09.563261607Z" level=info msg="StartContainer for \"2c00c12cc0d09a3c923e3f4ee4dddd84eb5c28cbdae2240726b7b90e6acefe3f\" returns successfully" May 27 04:25:09.753683 kubelet[2497]: E0527 04:25:09.753626 2497 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.19.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-g11ua.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.19.66:6443: connect: connection refused" interval="1.6s" May 27 04:25:09.898290 kubelet[2497]: W0527 04:25:09.898207 2497 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.19.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-g11ua.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.19.66:6443: connect: connection refused May 27 04:25:09.898559 kubelet[2497]: E0527 04:25:09.898304 2497 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.19.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-g11ua.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.19.66:6443: connect: connection refused" logger="UnhandledError" May 27 04:25:09.901991 kubelet[2497]: W0527 04:25:09.901943 2497 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.19.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.19.66:6443: connect: connection refused May 27 04:25:09.902093 kubelet[2497]: E0527 04:25:09.902004 2497 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.19.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.19.66:6443: connect: connection refused" logger="UnhandledError" May 27 04:25:09.968893 kubelet[2497]: I0527 04:25:09.968852 2497 kubelet_node_status.go:75] "Attempting to register node" node="srv-g11ua.gb1.brightbox.com" May 27 04:25:10.443172 kubelet[2497]: E0527 04:25:10.443134 2497 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-g11ua.gb1.brightbox.com\" not found" node="srv-g11ua.gb1.brightbox.com" May 27 04:25:10.452390 
kubelet[2497]: E0527 04:25:10.452359 2497 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-g11ua.gb1.brightbox.com\" not found" node="srv-g11ua.gb1.brightbox.com" May 27 04:25:10.454730 kubelet[2497]: E0527 04:25:10.454705 2497 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-g11ua.gb1.brightbox.com\" not found" node="srv-g11ua.gb1.brightbox.com" May 27 04:25:11.456838 kubelet[2497]: E0527 04:25:11.456792 2497 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-g11ua.gb1.brightbox.com\" not found" node="srv-g11ua.gb1.brightbox.com" May 27 04:25:11.457341 kubelet[2497]: E0527 04:25:11.457284 2497 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-g11ua.gb1.brightbox.com\" not found" node="srv-g11ua.gb1.brightbox.com" May 27 04:25:11.457700 kubelet[2497]: E0527 04:25:11.457670 2497 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-g11ua.gb1.brightbox.com\" not found" node="srv-g11ua.gb1.brightbox.com" May 27 04:25:11.917977 kubelet[2497]: E0527 04:25:11.917925 2497 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-g11ua.gb1.brightbox.com\" not found" node="srv-g11ua.gb1.brightbox.com" May 27 04:25:11.991275 kubelet[2497]: I0527 04:25:11.991212 2497 kubelet_node_status.go:78] "Successfully registered node" node="srv-g11ua.gb1.brightbox.com" May 27 04:25:12.043440 kubelet[2497]: I0527 04:25:12.043350 2497 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-g11ua.gb1.brightbox.com" May 27 04:25:12.052934 kubelet[2497]: E0527 04:25:12.052888 2497 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-g11ua.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-g11ua.gb1.brightbox.com" May 27 04:25:12.053647 kubelet[2497]: I0527 04:25:12.053450 2497 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-g11ua.gb1.brightbox.com" May 27 04:25:12.057074 kubelet[2497]: E0527 04:25:12.055822 2497 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-g11ua.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-g11ua.gb1.brightbox.com" May 27 04:25:12.057255 kubelet[2497]: I0527 04:25:12.057192 2497 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-g11ua.gb1.brightbox.com" May 27 04:25:12.059990 kubelet[2497]: E0527 04:25:12.059948 2497 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-g11ua.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-g11ua.gb1.brightbox.com" May 27 04:25:12.308809 kubelet[2497]: I0527 04:25:12.308159 2497 apiserver.go:52] "Watching apiserver" May 27 04:25:12.343856 kubelet[2497]: I0527 04:25:12.343779 2497 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 04:25:12.455900 kubelet[2497]: I0527 04:25:12.455833 2497 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-srv-g11ua.gb1.brightbox.com" May 27 04:25:12.456351 kubelet[2497]: I0527 04:25:12.455844 2497 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-g11ua.gb1.brightbox.com" May 27 04:25:12.456351 kubelet[2497]: I0527 04:25:12.456297 2497 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-g11ua.gb1.brightbox.com" May 27 04:25:12.459268 kubelet[2497]: E0527 04:25:12.459121 2497 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-g11ua.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-g11ua.gb1.brightbox.com" May 27 04:25:12.460542 kubelet[2497]: E0527 04:25:12.460480 2497 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-g11ua.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-g11ua.gb1.brightbox.com" May 27 04:25:12.462218 kubelet[2497]: E0527 04:25:12.462156 2497 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-g11ua.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-g11ua.gb1.brightbox.com" May 27 04:25:13.981964 systemd[1]: Reload requested from client PID 2774 ('systemctl') (unit session-11.scope)... May 27 04:25:13.982000 systemd[1]: Reloading... May 27 04:25:14.147475 zram_generator::config[2831]: No configuration found. May 27 04:25:14.324370 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 04:25:14.526294 systemd[1]: Reloading finished in 543 ms. May 27 04:25:14.562189 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 04:25:14.578128 systemd[1]: kubelet.service: Deactivated successfully. May 27 04:25:14.578784 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 04:25:14.579004 systemd[1]: kubelet.service: Consumed 1.866s CPU time, 128M memory peak. May 27 04:25:14.584740 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 04:25:14.872193 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 04:25:14.884717 (kubelet)[2884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 04:25:14.985148 kubelet[2884]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 04:25:14.985148 kubelet[2884]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 04:25:14.985148 kubelet[2884]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 27 04:25:14.985148 kubelet[2884]: I0527 04:25:14.984821 2884 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 04:25:15.001204 kubelet[2884]: I0527 04:25:15.001130 2884 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 27 04:25:15.001204 kubelet[2884]: I0527 04:25:15.002435 2884 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 04:25:15.001204 kubelet[2884]: I0527 04:25:15.002811 2884 server.go:954] "Client rotation is on, will bootstrap in background" May 27 04:25:15.009904 kubelet[2884]: I0527 04:25:15.009861 2884 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 27 04:25:15.014281 kubelet[2884]: I0527 04:25:15.014247 2884 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 04:25:15.025082 kubelet[2884]: I0527 04:25:15.025050 2884 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 04:25:15.032257 kubelet[2884]: I0527 04:25:15.032200 2884 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 27 04:25:15.035912 kubelet[2884]: I0527 04:25:15.035830 2884 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 04:25:15.036286 kubelet[2884]: I0527 04:25:15.036048 2884 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-g11ua.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 04:25:15.036683 kubelet[2884]: I0527 04:25:15.036547 2884 topology_manager.go:138] "Creating topology manager with none policy" May 27 04:25:15.036683 kubelet[2884]: I0527 04:25:15.036571 2884 container_manager_linux.go:304] "Creating device plugin manager" May 27 04:25:15.040508 kubelet[2884]: I0527 04:25:15.039493 2884 state_mem.go:36] "Initialized new in-memory state store" May 27 04:25:15.040932 
kubelet[2884]: I0527 04:25:15.040887 2884 kubelet.go:446] "Attempting to sync node with API server" May 27 04:25:15.041422 kubelet[2884]: I0527 04:25:15.041388 2884 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 04:25:15.041704 kubelet[2884]: I0527 04:25:15.041588 2884 kubelet.go:352] "Adding apiserver pod source" May 27 04:25:15.041704 kubelet[2884]: I0527 04:25:15.041613 2884 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 04:25:15.046300 kubelet[2884]: I0527 04:25:15.046088 2884 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 04:25:15.050435 kubelet[2884]: I0527 04:25:15.050183 2884 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 04:25:15.051206 kubelet[2884]: I0527 04:25:15.051173 2884 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 04:25:15.053177 kubelet[2884]: I0527 04:25:15.051957 2884 server.go:1287] "Started kubelet" May 27 04:25:15.060004 kubelet[2884]: I0527 04:25:15.059783 2884 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 04:25:15.077658 kubelet[2884]: I0527 04:25:15.077505 2884 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 27 04:25:15.080699 kubelet[2884]: I0527 04:25:15.080651 2884 server.go:479] "Adding debug handlers to kubelet server" May 27 04:25:15.085531 sudo[2899]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 27 04:25:15.086711 sudo[2899]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 27 04:25:15.087722 kubelet[2884]: I0527 04:25:15.087633 2884 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 04:25:15.090045 kubelet[2884]: I0527 04:25:15.089884 2884 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 04:25:15.093986 kubelet[2884]: I0527 04:25:15.093928 2884 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 04:25:15.097607 kubelet[2884]: I0527 04:25:15.097467 2884 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 04:25:15.098232 kubelet[2884]: I0527 04:25:15.098159 2884 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 04:25:15.104984 kubelet[2884]: I0527 04:25:15.104919 2884 reconciler.go:26] "Reconciler: start to sync state" May 27 04:25:15.107424 kubelet[2884]: I0527 04:25:15.107249 2884 factory.go:221] Registration of the systemd container factory successfully May 27 04:25:15.108317 kubelet[2884]: I0527 04:25:15.107387 2884 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 04:25:15.116841 kubelet[2884]: E0527 04:25:15.116736 2884 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 04:25:15.125614 kubelet[2884]: I0527 04:25:15.125330 2884 factory.go:221] Registration of the containerd container factory successfully May 27 04:25:15.141275 kubelet[2884]: I0527 04:25:15.141194 2884 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 27 04:25:15.143467 kubelet[2884]: I0527 04:25:15.143001 2884 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 27 04:25:15.143467 kubelet[2884]: I0527 04:25:15.143047 2884 status_manager.go:227] "Starting to sync pod status with apiserver" May 27 04:25:15.143467 kubelet[2884]: I0527 04:25:15.143082 2884 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 27 04:25:15.143467 kubelet[2884]: I0527 04:25:15.143094 2884 kubelet.go:2382] "Starting kubelet main sync loop" May 27 04:25:15.143467 kubelet[2884]: E0527 04:25:15.143157 2884 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 04:25:15.241445 kubelet[2884]: I0527 04:25:15.240820 2884 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 04:25:15.241445 kubelet[2884]: I0527 04:25:15.240848 2884 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 04:25:15.241445 kubelet[2884]: I0527 04:25:15.240877 2884 state_mem.go:36] "Initialized new in-memory state store" May 27 04:25:15.241445 kubelet[2884]: I0527 04:25:15.241113 2884 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 04:25:15.241445 kubelet[2884]: I0527 04:25:15.241131 2884 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 04:25:15.241445 kubelet[2884]: I0527 04:25:15.241157 2884 policy_none.go:49] "None policy: Start" May 27 04:25:15.241445 kubelet[2884]: I0527 04:25:15.241190 2884 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 04:25:15.241445 kubelet[2884]: I0527 04:25:15.241212 2884 state_mem.go:35] "Initializing new in-memory state store" May 27 04:25:15.241445 kubelet[2884]: I0527 04:25:15.241379 2884 state_mem.go:75] "Updated machine memory state" May 27 04:25:15.243232 kubelet[2884]: E0527 04:25:15.243208 2884 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 27 04:25:15.250055 kubelet[2884]: I0527 04:25:15.250020 2884 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 04:25:15.250706 kubelet[2884]: I0527 04:25:15.250684 2884 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 04:25:15.251309 kubelet[2884]: I0527 04:25:15.251249 2884 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 04:25:15.255119 kubelet[2884]: I0527 04:25:15.254967 2884 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 04:25:15.270455 kubelet[2884]: E0527 04:25:15.270333 2884 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 27 04:25:15.394494 kubelet[2884]: I0527 04:25:15.393871 2884 kubelet_node_status.go:75] "Attempting to register node" node="srv-g11ua.gb1.brightbox.com" May 27 04:25:15.410778 kubelet[2884]: I0527 04:25:15.410621 2884 kubelet_node_status.go:124] "Node was previously registered" node="srv-g11ua.gb1.brightbox.com" May 27 04:25:15.411820 kubelet[2884]: I0527 04:25:15.411800 2884 kubelet_node_status.go:78] "Successfully registered node" node="srv-g11ua.gb1.brightbox.com" May 27 04:25:15.445753 kubelet[2884]: I0527 04:25:15.444453 2884 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-g11ua.gb1.brightbox.com" May 27 04:25:15.448000 kubelet[2884]: I0527 04:25:15.447359 2884 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-g11ua.gb1.brightbox.com" May 27 04:25:15.448150 kubelet[2884]: I0527 04:25:15.448124 2884 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-g11ua.gb1.brightbox.com" May 27 04:25:15.456834 kubelet[2884]: W0527 04:25:15.456799 2884 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 27 04:25:15.457261 kubelet[2884]: W0527 04:25:15.457019 2884 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 27 04:25:15.460311 kubelet[2884]: W0527 04:25:15.460286 2884 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 27 04:25:15.509114 kubelet[2884]: I0527 04:25:15.509056 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0fb2c61812d19e7ddaa45e15d1a10775-ca-certs\") pod \"kube-apiserver-srv-g11ua.gb1.brightbox.com\" (UID: \"0fb2c61812d19e7ddaa45e15d1a10775\") " pod="kube-system/kube-apiserver-srv-g11ua.gb1.brightbox.com" May 27 04:25:15.509114 kubelet[2884]: I0527 04:25:15.509112 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0fb2c61812d19e7ddaa45e15d1a10775-usr-share-ca-certificates\") pod \"kube-apiserver-srv-g11ua.gb1.brightbox.com\" (UID: \"0fb2c61812d19e7ddaa45e15d1a10775\") " pod="kube-system/kube-apiserver-srv-g11ua.gb1.brightbox.com" May 27 04:25:15.509367 kubelet[2884]: I0527 04:25:15.509148 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2009e5ebe02a57b7dd7bac592b2d3404-k8s-certs\") pod \"kube-controller-manager-srv-g11ua.gb1.brightbox.com\" (UID: \"2009e5ebe02a57b7dd7bac592b2d3404\") " pod="kube-system/kube-controller-manager-srv-g11ua.gb1.brightbox.com" May 27 04:25:15.509367 kubelet[2884]: I0527 04:25:15.509175 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2009e5ebe02a57b7dd7bac592b2d3404-kubeconfig\") pod \"kube-controller-manager-srv-g11ua.gb1.brightbox.com\" (UID: \"2009e5ebe02a57b7dd7bac592b2d3404\") " pod="kube-system/kube-controller-manager-srv-g11ua.gb1.brightbox.com" May 27 04:25:15.509367 kubelet[2884]: I0527 04:25:15.509216 2884 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0fb2c61812d19e7ddaa45e15d1a10775-k8s-certs\") pod \"kube-apiserver-srv-g11ua.gb1.brightbox.com\" (UID: \"0fb2c61812d19e7ddaa45e15d1a10775\") " pod="kube-system/kube-apiserver-srv-g11ua.gb1.brightbox.com" May 27 04:25:15.509367 kubelet[2884]: I0527 04:25:15.509246 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2009e5ebe02a57b7dd7bac592b2d3404-ca-certs\") pod \"kube-controller-manager-srv-g11ua.gb1.brightbox.com\" (UID: \"2009e5ebe02a57b7dd7bac592b2d3404\") " pod="kube-system/kube-controller-manager-srv-g11ua.gb1.brightbox.com" May 27 04:25:15.509367 kubelet[2884]: I0527 04:25:15.509274 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2009e5ebe02a57b7dd7bac592b2d3404-flexvolume-dir\") pod \"kube-controller-manager-srv-g11ua.gb1.brightbox.com\" (UID: \"2009e5ebe02a57b7dd7bac592b2d3404\") " pod="kube-system/kube-controller-manager-srv-g11ua.gb1.brightbox.com" May 27 04:25:15.510097 kubelet[2884]: I0527 04:25:15.509300 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2009e5ebe02a57b7dd7bac592b2d3404-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-g11ua.gb1.brightbox.com\" (UID: \"2009e5ebe02a57b7dd7bac592b2d3404\") " pod="kube-system/kube-controller-manager-srv-g11ua.gb1.brightbox.com" May 27 04:25:15.510097 kubelet[2884]: I0527 04:25:15.509326 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4285f2bf344049d282081561a0ba84a2-kubeconfig\") pod \"kube-scheduler-srv-g11ua.gb1.brightbox.com\" (UID: \"4285f2bf344049d282081561a0ba84a2\") " pod="kube-system/kube-scheduler-srv-g11ua.gb1.brightbox.com" May 27 04:25:15.876851 sudo[2899]: pam_unix(sudo:session): session closed for user root May 27 04:25:16.044454 kubelet[2884]: I0527 04:25:16.043909 2884 apiserver.go:52] "Watching apiserver" May 27 04:25:16.098560 kubelet[2884]: I0527 04:25:16.098465 2884 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 04:25:16.202868 kubelet[2884]: I0527 04:25:16.202328 2884 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-g11ua.gb1.brightbox.com" May 27 04:25:16.202868 kubelet[2884]: I0527 04:25:16.202757 2884 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-g11ua.gb1.brightbox.com" May 27 04:25:16.216377 kubelet[2884]: W0527 04:25:16.216277 2884 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 27 04:25:16.216999 kubelet[2884]: E0527 04:25:16.216918 2884 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-g11ua.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-g11ua.gb1.brightbox.com" May 27 04:25:16.218350 kubelet[2884]: W0527 04:25:16.218328 2884 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 27 
04:25:16.218631 kubelet[2884]: E0527 04:25:16.218598 2884 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-g11ua.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-g11ua.gb1.brightbox.com" May 27 04:25:16.244716 kubelet[2884]: I0527 04:25:16.244533 2884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-g11ua.gb1.brightbox.com" podStartSLOduration=1.244500768 podStartE2EDuration="1.244500768s" podCreationTimestamp="2025-05-27 04:25:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 04:25:16.242927374 +0000 UTC m=+1.350708975" watchObservedRunningTime="2025-05-27 04:25:16.244500768 +0000 UTC m=+1.352282347" May 27 04:25:16.273563 kubelet[2884]: I0527 04:25:16.272389 2884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-g11ua.gb1.brightbox.com" podStartSLOduration=1.272249972 podStartE2EDuration="1.272249972s" podCreationTimestamp="2025-05-27 04:25:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 04:25:16.257164323 +0000 UTC m=+1.364945917" watchObservedRunningTime="2025-05-27 04:25:16.272249972 +0000 UTC m=+1.380031552" May 27 04:25:16.295733 kubelet[2884]: I0527 04:25:16.295274 2884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-g11ua.gb1.brightbox.com" podStartSLOduration=1.295250046 podStartE2EDuration="1.295250046s" podCreationTimestamp="2025-05-27 04:25:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 04:25:16.273415625 +0000 UTC m=+1.381197206" watchObservedRunningTime="2025-05-27 04:25:16.295250046 +0000 UTC m=+1.403031625" May 27 04:25:17.759156 sudo[1903]: pam_unix(sudo:session): session closed for user root May 27 04:25:17.903612 sshd[1902]: Connection closed by 139.178.68.195 port 40028 May 27 04:25:17.904625 sshd-session[1900]: pam_unix(sshd:session): session closed for user core May 27 04:25:17.912047 systemd[1]: sshd@8-10.244.19.66:22-139.178.68.195:40028.service: Deactivated successfully. May 27 04:25:17.916872 systemd[1]: session-11.scope: Deactivated successfully. May 27 04:25:17.917346 systemd[1]: session-11.scope: Consumed 5.822s CPU time, 212.7M memory peak. May 27 04:25:17.920781 systemd-logind[1573]: Session 11 logged out. Waiting for processes to exit. May 27 04:25:17.923045 systemd-logind[1573]: Removed session 11. May 27 04:25:18.812012 kubelet[2884]: I0527 04:25:18.811962 2884 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 04:25:18.813217 containerd[1585]: time="2025-05-27T04:25:18.812789626Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 27 04:25:18.814350 kubelet[2884]: I0527 04:25:18.813863 2884 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 04:25:19.784478 systemd[1]: Created slice kubepods-besteffort-poddedbfff6_61ad_45b5_a930_b7727c1da19a.slice - libcontainer container kubepods-besteffort-poddedbfff6_61ad_45b5_a930_b7727c1da19a.slice. 
May 27 04:25:19.804690 systemd[1]: Created slice kubepods-burstable-pod4c0f3c6f_4c94_4778_94de_7f5a7f1a3e42.slice - libcontainer container kubepods-burstable-pod4c0f3c6f_4c94_4778_94de_7f5a7f1a3e42.slice. May 27 04:25:19.834196 kubelet[2884]: I0527 04:25:19.834143 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-etc-cni-netd\") pod \"cilium-qq6x9\" (UID: \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\") " pod="kube-system/cilium-qq6x9" May 27 04:25:19.834821 kubelet[2884]: I0527 04:25:19.834376 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-clustermesh-secrets\") pod \"cilium-qq6x9\" (UID: \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\") " pod="kube-system/cilium-qq6x9" May 27 04:25:19.834821 kubelet[2884]: I0527 04:25:19.834531 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dedbfff6-61ad-45b5-a930-b7727c1da19a-kube-proxy\") pod \"kube-proxy-lpj8b\" (UID: \"dedbfff6-61ad-45b5-a930-b7727c1da19a\") " pod="kube-system/kube-proxy-lpj8b" May 27 04:25:19.834821 kubelet[2884]: I0527 04:25:19.834761 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-host-proc-sys-net\") pod \"cilium-qq6x9\" (UID: \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\") " pod="kube-system/cilium-qq6x9" May 27 04:25:19.834966 kubelet[2884]: I0527 04:25:19.834942 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-bpf-maps\") pod \"cilium-qq6x9\" (UID: \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\") " pod="kube-system/cilium-qq6x9" May 27 04:25:19.835059 kubelet[2884]: I0527 04:25:19.835024 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-cni-path\") pod \"cilium-qq6x9\" (UID: \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\") " pod="kube-system/cilium-qq6x9" May 27 04:25:19.835250 kubelet[2884]: I0527 04:25:19.835217 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-host-proc-sys-kernel\") pod \"cilium-qq6x9\" (UID: \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\") " pod="kube-system/cilium-qq6x9" May 27 04:25:19.835472 kubelet[2884]: I0527 04:25:19.835431 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-hubble-tls\") pod \"cilium-qq6x9\" (UID: \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\") " pod="kube-system/cilium-qq6x9" May 27 04:25:19.835612 kubelet[2884]: I0527 04:25:19.835574 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-cilium-cgroup\") pod \"cilium-qq6x9\" (UID: \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\") " 
pod="kube-system/cilium-qq6x9" May 27 04:25:19.836010 kubelet[2884]: I0527 04:25:19.835751 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn6g6\" (UniqueName: \"kubernetes.io/projected/dedbfff6-61ad-45b5-a930-b7727c1da19a-kube-api-access-mn6g6\") pod \"kube-proxy-lpj8b\" (UID: \"dedbfff6-61ad-45b5-a930-b7727c1da19a\") " pod="kube-system/kube-proxy-lpj8b" May 27 04:25:19.836010 kubelet[2884]: I0527 04:25:19.835910 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-cilium-run\") pod \"cilium-qq6x9\" (UID: \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\") " pod="kube-system/cilium-qq6x9" May 27 04:25:19.836010 kubelet[2884]: I0527 04:25:19.835967 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-hostproc\") pod \"cilium-qq6x9\" (UID: \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\") " pod="kube-system/cilium-qq6x9" May 27 04:25:19.836010 kubelet[2884]: I0527 04:25:19.835995 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-lib-modules\") pod \"cilium-qq6x9\" (UID: \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\") " pod="kube-system/cilium-qq6x9" May 27 04:25:19.836205 kubelet[2884]: I0527 04:25:19.836021 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dedbfff6-61ad-45b5-a930-b7727c1da19a-xtables-lock\") pod \"kube-proxy-lpj8b\" (UID: \"dedbfff6-61ad-45b5-a930-b7727c1da19a\") " pod="kube-system/kube-proxy-lpj8b" May 27 04:25:19.836205 kubelet[2884]: I0527 04:25:19.836048 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v664j\" (UniqueName: \"kubernetes.io/projected/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-kube-api-access-v664j\") pod \"cilium-qq6x9\" (UID: \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\") " pod="kube-system/cilium-qq6x9" May 27 04:25:19.836205 kubelet[2884]: I0527 04:25:19.836087 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-xtables-lock\") pod \"cilium-qq6x9\" (UID: \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\") " pod="kube-system/cilium-qq6x9" May 27 04:25:19.836205 kubelet[2884]: I0527 04:25:19.836123 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-cilium-config-path\") pod \"cilium-qq6x9\" (UID: \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\") " pod="kube-system/cilium-qq6x9" May 27 04:25:19.836205 kubelet[2884]: I0527 04:25:19.836178 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dedbfff6-61ad-45b5-a930-b7727c1da19a-lib-modules\") pod \"kube-proxy-lpj8b\" (UID: \"dedbfff6-61ad-45b5-a930-b7727c1da19a\") " pod="kube-system/kube-proxy-lpj8b" May 27 04:25:19.921467 systemd[1]: Created slice kubepods-besteffort-pod277c45c8_5acc_4d73_a9b9_4e89f7655c63.slice - 
libcontainer container kubepods-besteffort-pod277c45c8_5acc_4d73_a9b9_4e89f7655c63.slice. May 27 04:25:19.937592 kubelet[2884]: I0527 04:25:19.937380 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/277c45c8-5acc-4d73-a9b9-4e89f7655c63-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-dkmw2\" (UID: \"277c45c8-5acc-4d73-a9b9-4e89f7655c63\") " pod="kube-system/cilium-operator-6c4d7847fc-dkmw2" May 27 04:25:19.939162 kubelet[2884]: I0527 04:25:19.938851 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pqtd\" (UniqueName: \"kubernetes.io/projected/277c45c8-5acc-4d73-a9b9-4e89f7655c63-kube-api-access-5pqtd\") pod \"cilium-operator-6c4d7847fc-dkmw2\" (UID: \"277c45c8-5acc-4d73-a9b9-4e89f7655c63\") " pod="kube-system/cilium-operator-6c4d7847fc-dkmw2" May 27 04:25:20.097258 containerd[1585]: time="2025-05-27T04:25:20.097096846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lpj8b,Uid:dedbfff6-61ad-45b5-a930-b7727c1da19a,Namespace:kube-system,Attempt:0,}" May 27 04:25:20.111131 containerd[1585]: time="2025-05-27T04:25:20.110971444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qq6x9,Uid:4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42,Namespace:kube-system,Attempt:0,}" May 27 04:25:20.124639 containerd[1585]: time="2025-05-27T04:25:20.124581084Z" level=info msg="connecting to shim 08c2e5456ccbf1a527dbff45cc0e76c4b13acefdd4bcb618608f831a45cb1814" address="unix:///run/containerd/s/770208ccd76f85405fe685808eae88a1bbe3225950c541d25911a4978dc3b57b" namespace=k8s.io protocol=ttrpc version=3 May 27 04:25:20.136963 containerd[1585]: time="2025-05-27T04:25:20.136874004Z" level=info msg="connecting to shim 4e3fb1663af16a28d51e554ba0db4356d3d77d5691a48c284718e67a1c3dc686" address="unix:///run/containerd/s/e31e875f511ce2f5967bd19bb31a91fc263e3844e43b47ea13de3ec0191e5f18" namespace=k8s.io protocol=ttrpc version=3 May 27 04:25:20.166633 systemd[1]: Started cri-containerd-08c2e5456ccbf1a527dbff45cc0e76c4b13acefdd4bcb618608f831a45cb1814.scope - libcontainer container 08c2e5456ccbf1a527dbff45cc0e76c4b13acefdd4bcb618608f831a45cb1814. May 27 04:25:20.190635 systemd[1]: Started cri-containerd-4e3fb1663af16a28d51e554ba0db4356d3d77d5691a48c284718e67a1c3dc686.scope - libcontainer container 4e3fb1663af16a28d51e554ba0db4356d3d77d5691a48c284718e67a1c3dc686. 
May 27 04:25:20.228835 containerd[1585]: time="2025-05-27T04:25:20.227368559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dkmw2,Uid:277c45c8-5acc-4d73-a9b9-4e89f7655c63,Namespace:kube-system,Attempt:0,}" May 27 04:25:20.246591 containerd[1585]: time="2025-05-27T04:25:20.246515538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lpj8b,Uid:dedbfff6-61ad-45b5-a930-b7727c1da19a,Namespace:kube-system,Attempt:0,} returns sandbox id \"08c2e5456ccbf1a527dbff45cc0e76c4b13acefdd4bcb618608f831a45cb1814\"" May 27 04:25:20.260238 containerd[1585]: time="2025-05-27T04:25:20.259760224Z" level=info msg="CreateContainer within sandbox \"08c2e5456ccbf1a527dbff45cc0e76c4b13acefdd4bcb618608f831a45cb1814\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 04:25:20.271666 containerd[1585]: time="2025-05-27T04:25:20.271495542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qq6x9,Uid:4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e3fb1663af16a28d51e554ba0db4356d3d77d5691a48c284718e67a1c3dc686\"" May 27 04:25:20.276814 containerd[1585]: time="2025-05-27T04:25:20.276364680Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 27 04:25:20.287638 containerd[1585]: time="2025-05-27T04:25:20.287591336Z" level=info msg="connecting to shim c87ae85ccfa9b87bfacd830e4230dbd77d2319eff67819a99b82c1ad1231cc97" address="unix:///run/containerd/s/35ee8277c7ce7c32e8fb3adf4439914a051c3de0e99a6a6c140c83195b89431a" namespace=k8s.io protocol=ttrpc version=3 May 27 04:25:20.289819 containerd[1585]: time="2025-05-27T04:25:20.289788740Z" level=info msg="Container d624660857cbec6dbd9e566519e5ddc044986ca0b6d1b826157f2014d1faf282: CDI devices from CRI Config.CDIDevices: []" May 27 04:25:20.302878 containerd[1585]: time="2025-05-27T04:25:20.302826150Z" level=info msg="CreateContainer within sandbox \"08c2e5456ccbf1a527dbff45cc0e76c4b13acefdd4bcb618608f831a45cb1814\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d624660857cbec6dbd9e566519e5ddc044986ca0b6d1b826157f2014d1faf282\"" May 27 04:25:20.304446 containerd[1585]: time="2025-05-27T04:25:20.304179598Z" level=info msg="StartContainer for \"d624660857cbec6dbd9e566519e5ddc044986ca0b6d1b826157f2014d1faf282\"" May 27 04:25:20.309178 containerd[1585]: time="2025-05-27T04:25:20.309137780Z" level=info msg="connecting to shim d624660857cbec6dbd9e566519e5ddc044986ca0b6d1b826157f2014d1faf282" address="unix:///run/containerd/s/770208ccd76f85405fe685808eae88a1bbe3225950c541d25911a4978dc3b57b" protocol=ttrpc version=3 May 27 04:25:20.344666 systemd[1]: Started cri-containerd-c87ae85ccfa9b87bfacd830e4230dbd77d2319eff67819a99b82c1ad1231cc97.scope - libcontainer container c87ae85ccfa9b87bfacd830e4230dbd77d2319eff67819a99b82c1ad1231cc97. May 27 04:25:20.351132 systemd[1]: Started cri-containerd-d624660857cbec6dbd9e566519e5ddc044986ca0b6d1b826157f2014d1faf282.scope - libcontainer container d624660857cbec6dbd9e566519e5ddc044986ca0b6d1b826157f2014d1faf282. 
May 27 04:25:20.445616 containerd[1585]: time="2025-05-27T04:25:20.445552800Z" level=info msg="StartContainer for \"d624660857cbec6dbd9e566519e5ddc044986ca0b6d1b826157f2014d1faf282\" returns successfully" May 27 04:25:20.446947 containerd[1585]: time="2025-05-27T04:25:20.446908684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dkmw2,Uid:277c45c8-5acc-4d73-a9b9-4e89f7655c63,Namespace:kube-system,Attempt:0,} returns sandbox id \"c87ae85ccfa9b87bfacd830e4230dbd77d2319eff67819a99b82c1ad1231cc97\"" May 27 04:25:21.714627 kubelet[2884]: I0527 04:25:21.713883 2884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lpj8b" podStartSLOduration=2.713860575 podStartE2EDuration="2.713860575s" podCreationTimestamp="2025-05-27 04:25:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 04:25:21.281337491 +0000 UTC m=+6.389119083" watchObservedRunningTime="2025-05-27 04:25:21.713860575 +0000 UTC m=+6.821642142" May 27 04:25:27.726337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount523786505.mount: Deactivated successfully. May 27 04:25:31.111340 containerd[1585]: time="2025-05-27T04:25:31.110874711Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 04:25:31.112370 containerd[1585]: time="2025-05-27T04:25:31.112317313Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" May 27 04:25:31.115536 containerd[1585]: time="2025-05-27T04:25:31.115465595Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 04:25:31.116477 containerd[1585]: time="2025-05-27T04:25:31.116440022Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.839999015s" May 27 04:25:31.116662 containerd[1585]: time="2025-05-27T04:25:31.116595704Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 27 04:25:31.119999 containerd[1585]: time="2025-05-27T04:25:31.119904717Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 27 04:25:31.122218 containerd[1585]: time="2025-05-27T04:25:31.122183917Z" level=info msg="CreateContainer within sandbox \"4e3fb1663af16a28d51e554ba0db4356d3d77d5691a48c284718e67a1c3dc686\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 27 04:25:31.145057 containerd[1585]: time="2025-05-27T04:25:31.144996530Z" level=info msg="Container 6a2263ba4ef16f0c0df9b21fcd828df853eb26e2b78826178d4d50ead191e480: CDI devices from CRI Config.CDIDevices: []" May 27 04:25:31.145961 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount854064351.mount: Deactivated successfully. May 27 04:25:31.149697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3737567152.mount: Deactivated successfully. May 27 04:25:31.162565 containerd[1585]: time="2025-05-27T04:25:31.162477921Z" level=info msg="CreateContainer within sandbox \"4e3fb1663af16a28d51e554ba0db4356d3d77d5691a48c284718e67a1c3dc686\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6a2263ba4ef16f0c0df9b21fcd828df853eb26e2b78826178d4d50ead191e480\"" May 27 04:25:31.163619 containerd[1585]: time="2025-05-27T04:25:31.163587455Z" level=info msg="StartContainer for \"6a2263ba4ef16f0c0df9b21fcd828df853eb26e2b78826178d4d50ead191e480\"" May 27 04:25:31.168121 containerd[1585]: time="2025-05-27T04:25:31.167954561Z" level=info msg="connecting to shim 6a2263ba4ef16f0c0df9b21fcd828df853eb26e2b78826178d4d50ead191e480" address="unix:///run/containerd/s/e31e875f511ce2f5967bd19bb31a91fc263e3844e43b47ea13de3ec0191e5f18" protocol=ttrpc version=3 May 27 04:25:31.207467 systemd[1]: Started cri-containerd-6a2263ba4ef16f0c0df9b21fcd828df853eb26e2b78826178d4d50ead191e480.scope - libcontainer container 6a2263ba4ef16f0c0df9b21fcd828df853eb26e2b78826178d4d50ead191e480. May 27 04:25:31.259423 containerd[1585]: time="2025-05-27T04:25:31.259289622Z" level=info msg="StartContainer for \"6a2263ba4ef16f0c0df9b21fcd828df853eb26e2b78826178d4d50ead191e480\" returns successfully" May 27 04:25:31.291850 systemd[1]: cri-containerd-6a2263ba4ef16f0c0df9b21fcd828df853eb26e2b78826178d4d50ead191e480.scope: Deactivated successfully. May 27 04:25:31.360167 containerd[1585]: time="2025-05-27T04:25:31.360088917Z" level=info msg="received exit event container_id:\"6a2263ba4ef16f0c0df9b21fcd828df853eb26e2b78826178d4d50ead191e480\" id:\"6a2263ba4ef16f0c0df9b21fcd828df853eb26e2b78826178d4d50ead191e480\" pid:3298 exited_at:{seconds:1748319931 nanos:293792607}" May 27 04:25:31.361074 containerd[1585]: time="2025-05-27T04:25:31.360914287Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a2263ba4ef16f0c0df9b21fcd828df853eb26e2b78826178d4d50ead191e480\" id:\"6a2263ba4ef16f0c0df9b21fcd828df853eb26e2b78826178d4d50ead191e480\" pid:3298 exited_at:{seconds:1748319931 nanos:293792607}" May 27 04:25:32.141801 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a2263ba4ef16f0c0df9b21fcd828df853eb26e2b78826178d4d50ead191e480-rootfs.mount: Deactivated successfully. May 27 04:25:32.284681 containerd[1585]: time="2025-05-27T04:25:32.283825941Z" level=info msg="CreateContainer within sandbox \"4e3fb1663af16a28d51e554ba0db4356d3d77d5691a48c284718e67a1c3dc686\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 27 04:25:32.299376 containerd[1585]: time="2025-05-27T04:25:32.299203035Z" level=info msg="Container ed1c2d18cfcc5c178d069f019fdd97a5e1857d86221caeef0b1a59097cd4876e: CDI devices from CRI Config.CDIDevices: []" May 27 04:25:32.310325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1529226033.mount: Deactivated successfully. 
May 27 04:25:32.314828 containerd[1585]: time="2025-05-27T04:25:32.314776819Z" level=info msg="CreateContainer within sandbox \"4e3fb1663af16a28d51e554ba0db4356d3d77d5691a48c284718e67a1c3dc686\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ed1c2d18cfcc5c178d069f019fdd97a5e1857d86221caeef0b1a59097cd4876e\"" May 27 04:25:32.316863 containerd[1585]: time="2025-05-27T04:25:32.316794545Z" level=info msg="StartContainer for \"ed1c2d18cfcc5c178d069f019fdd97a5e1857d86221caeef0b1a59097cd4876e\"" May 27 04:25:32.321188 containerd[1585]: time="2025-05-27T04:25:32.320103426Z" level=info msg="connecting to shim ed1c2d18cfcc5c178d069f019fdd97a5e1857d86221caeef0b1a59097cd4876e" address="unix:///run/containerd/s/e31e875f511ce2f5967bd19bb31a91fc263e3844e43b47ea13de3ec0191e5f18" protocol=ttrpc version=3 May 27 04:25:32.361711 systemd[1]: Started cri-containerd-ed1c2d18cfcc5c178d069f019fdd97a5e1857d86221caeef0b1a59097cd4876e.scope - libcontainer container ed1c2d18cfcc5c178d069f019fdd97a5e1857d86221caeef0b1a59097cd4876e. May 27 04:25:32.406265 containerd[1585]: time="2025-05-27T04:25:32.406097938Z" level=info msg="StartContainer for \"ed1c2d18cfcc5c178d069f019fdd97a5e1857d86221caeef0b1a59097cd4876e\" returns successfully" May 27 04:25:32.427324 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 27 04:25:32.427727 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 27 04:25:32.429832 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 27 04:25:32.433416 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 04:25:32.437332 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 27 04:25:32.437973 systemd[1]: cri-containerd-ed1c2d18cfcc5c178d069f019fdd97a5e1857d86221caeef0b1a59097cd4876e.scope: Deactivated successfully. May 27 04:25:32.456531 containerd[1585]: time="2025-05-27T04:25:32.455886352Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ed1c2d18cfcc5c178d069f019fdd97a5e1857d86221caeef0b1a59097cd4876e\" id:\"ed1c2d18cfcc5c178d069f019fdd97a5e1857d86221caeef0b1a59097cd4876e\" pid:3343 exited_at:{seconds:1748319932 nanos:443532040}" May 27 04:25:32.457163 containerd[1585]: time="2025-05-27T04:25:32.457124045Z" level=info msg="received exit event container_id:\"ed1c2d18cfcc5c178d069f019fdd97a5e1857d86221caeef0b1a59097cd4876e\" id:\"ed1c2d18cfcc5c178d069f019fdd97a5e1857d86221caeef0b1a59097cd4876e\" pid:3343 exited_at:{seconds:1748319932 nanos:443532040}" May 27 04:25:32.481733 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 04:25:33.142803 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed1c2d18cfcc5c178d069f019fdd97a5e1857d86221caeef0b1a59097cd4876e-rootfs.mount: Deactivated successfully. May 27 04:25:33.238819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1872512902.mount: Deactivated successfully. 
May 27 04:25:33.301870 containerd[1585]: time="2025-05-27T04:25:33.300517670Z" level=info msg="CreateContainer within sandbox \"4e3fb1663af16a28d51e554ba0db4356d3d77d5691a48c284718e67a1c3dc686\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 27 04:25:33.363764 containerd[1585]: time="2025-05-27T04:25:33.363697932Z" level=info msg="Container 632e6b8ef7eaa8d4d90f0f7950393a3eb8cef1534e06a03f955103e34238834f: CDI devices from CRI Config.CDIDevices: []" May 27 04:25:33.384830 containerd[1585]: time="2025-05-27T04:25:33.384759086Z" level=info msg="CreateContainer within sandbox \"4e3fb1663af16a28d51e554ba0db4356d3d77d5691a48c284718e67a1c3dc686\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"632e6b8ef7eaa8d4d90f0f7950393a3eb8cef1534e06a03f955103e34238834f\"" May 27 04:25:33.388388 containerd[1585]: time="2025-05-27T04:25:33.388319267Z" level=info msg="StartContainer for \"632e6b8ef7eaa8d4d90f0f7950393a3eb8cef1534e06a03f955103e34238834f\"" May 27 04:25:33.397909 containerd[1585]: time="2025-05-27T04:25:33.397414736Z" level=info msg="connecting to shim 632e6b8ef7eaa8d4d90f0f7950393a3eb8cef1534e06a03f955103e34238834f" address="unix:///run/containerd/s/e31e875f511ce2f5967bd19bb31a91fc263e3844e43b47ea13de3ec0191e5f18" protocol=ttrpc version=3 May 27 04:25:33.444746 systemd[1]: Started cri-containerd-632e6b8ef7eaa8d4d90f0f7950393a3eb8cef1534e06a03f955103e34238834f.scope - libcontainer container 632e6b8ef7eaa8d4d90f0f7950393a3eb8cef1534e06a03f955103e34238834f. May 27 04:25:33.547723 systemd[1]: cri-containerd-632e6b8ef7eaa8d4d90f0f7950393a3eb8cef1534e06a03f955103e34238834f.scope: Deactivated successfully. May 27 04:25:33.548196 systemd[1]: cri-containerd-632e6b8ef7eaa8d4d90f0f7950393a3eb8cef1534e06a03f955103e34238834f.scope: Consumed 45ms CPU time, 4.4M memory peak, 1M read from disk. 
May 27 04:25:33.549697 containerd[1585]: time="2025-05-27T04:25:33.549645598Z" level=info msg="StartContainer for \"632e6b8ef7eaa8d4d90f0f7950393a3eb8cef1534e06a03f955103e34238834f\" returns successfully" May 27 04:25:33.551924 containerd[1585]: time="2025-05-27T04:25:33.551881039Z" level=info msg="received exit event container_id:\"632e6b8ef7eaa8d4d90f0f7950393a3eb8cef1534e06a03f955103e34238834f\" id:\"632e6b8ef7eaa8d4d90f0f7950393a3eb8cef1534e06a03f955103e34238834f\" pid:3404 exited_at:{seconds:1748319933 nanos:551204678}" May 27 04:25:33.554239 containerd[1585]: time="2025-05-27T04:25:33.552237063Z" level=info msg="TaskExit event in podsandbox handler container_id:\"632e6b8ef7eaa8d4d90f0f7950393a3eb8cef1534e06a03f955103e34238834f\" id:\"632e6b8ef7eaa8d4d90f0f7950393a3eb8cef1534e06a03f955103e34238834f\" pid:3404 exited_at:{seconds:1748319933 nanos:551204678}" May 27 04:25:34.294242 containerd[1585]: time="2025-05-27T04:25:34.294191302Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 04:25:34.299137 containerd[1585]: time="2025-05-27T04:25:34.298288528Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" May 27 04:25:34.300015 containerd[1585]: time="2025-05-27T04:25:34.299733655Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 04:25:34.304289 containerd[1585]: time="2025-05-27T04:25:34.304224563Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.184274214s" May 27 04:25:34.304289 containerd[1585]: time="2025-05-27T04:25:34.304281790Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 27 04:25:34.311554 containerd[1585]: time="2025-05-27T04:25:34.311509273Z" level=info msg="CreateContainer within sandbox \"c87ae85ccfa9b87bfacd830e4230dbd77d2319eff67819a99b82c1ad1231cc97\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 27 04:25:34.313541 containerd[1585]: time="2025-05-27T04:25:34.313237699Z" level=info msg="CreateContainer within sandbox \"4e3fb1663af16a28d51e554ba0db4356d3d77d5691a48c284718e67a1c3dc686\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 27 04:25:34.334449 containerd[1585]: time="2025-05-27T04:25:34.333011743Z" level=info msg="Container 95a4aafd40a5daefef8f3f7fd5559cd3a0854598dfd5a13b2acd112ad348ee1f: CDI devices from CRI Config.CDIDevices: []" May 27 04:25:34.353908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount91822665.mount: Deactivated successfully. 
May 27 04:25:34.359748 containerd[1585]: time="2025-05-27T04:25:34.359097840Z" level=info msg="Container d04aeb6f3c92d8f7034988ab2a702ad734fdd5a3bef3f1b4b7ed236d12f143e9: CDI devices from CRI Config.CDIDevices: []" May 27 04:25:34.364851 containerd[1585]: time="2025-05-27T04:25:34.364794542Z" level=info msg="CreateContainer within sandbox \"4e3fb1663af16a28d51e554ba0db4356d3d77d5691a48c284718e67a1c3dc686\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"95a4aafd40a5daefef8f3f7fd5559cd3a0854598dfd5a13b2acd112ad348ee1f\"" May 27 04:25:34.366233 containerd[1585]: time="2025-05-27T04:25:34.366199782Z" level=info msg="StartContainer for \"95a4aafd40a5daefef8f3f7fd5559cd3a0854598dfd5a13b2acd112ad348ee1f\"" May 27 04:25:34.368521 containerd[1585]: time="2025-05-27T04:25:34.368461721Z" level=info msg="connecting to shim 95a4aafd40a5daefef8f3f7fd5559cd3a0854598dfd5a13b2acd112ad348ee1f" address="unix:///run/containerd/s/e31e875f511ce2f5967bd19bb31a91fc263e3844e43b47ea13de3ec0191e5f18" protocol=ttrpc version=3 May 27 04:25:34.379834 containerd[1585]: time="2025-05-27T04:25:34.379538298Z" level=info msg="CreateContainer within sandbox \"c87ae85ccfa9b87bfacd830e4230dbd77d2319eff67819a99b82c1ad1231cc97\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d04aeb6f3c92d8f7034988ab2a702ad734fdd5a3bef3f1b4b7ed236d12f143e9\"" May 27 04:25:34.381220 containerd[1585]: time="2025-05-27T04:25:34.381167836Z" level=info msg="StartContainer for \"d04aeb6f3c92d8f7034988ab2a702ad734fdd5a3bef3f1b4b7ed236d12f143e9\"" May 27 04:25:34.384703 containerd[1585]: time="2025-05-27T04:25:34.384598007Z" level=info msg="connecting to shim d04aeb6f3c92d8f7034988ab2a702ad734fdd5a3bef3f1b4b7ed236d12f143e9" address="unix:///run/containerd/s/35ee8277c7ce7c32e8fb3adf4439914a051c3de0e99a6a6c140c83195b89431a" protocol=ttrpc version=3 May 27 04:25:34.410606 systemd[1]: Started cri-containerd-95a4aafd40a5daefef8f3f7fd5559cd3a0854598dfd5a13b2acd112ad348ee1f.scope - libcontainer container 95a4aafd40a5daefef8f3f7fd5559cd3a0854598dfd5a13b2acd112ad348ee1f. May 27 04:25:34.430641 systemd[1]: Started cri-containerd-d04aeb6f3c92d8f7034988ab2a702ad734fdd5a3bef3f1b4b7ed236d12f143e9.scope - libcontainer container d04aeb6f3c92d8f7034988ab2a702ad734fdd5a3bef3f1b4b7ed236d12f143e9. May 27 04:25:34.471796 systemd[1]: cri-containerd-95a4aafd40a5daefef8f3f7fd5559cd3a0854598dfd5a13b2acd112ad348ee1f.scope: Deactivated successfully. 
May 27 04:25:34.475866 containerd[1585]: time="2025-05-27T04:25:34.474253910Z" level=info msg="TaskExit event in podsandbox handler container_id:\"95a4aafd40a5daefef8f3f7fd5559cd3a0854598dfd5a13b2acd112ad348ee1f\" id:\"95a4aafd40a5daefef8f3f7fd5559cd3a0854598dfd5a13b2acd112ad348ee1f\" pid:3460 exited_at:{seconds:1748319934 nanos:471236516}" May 27 04:25:34.476333 containerd[1585]: time="2025-05-27T04:25:34.474863062Z" level=info msg="received exit event container_id:\"95a4aafd40a5daefef8f3f7fd5559cd3a0854598dfd5a13b2acd112ad348ee1f\" id:\"95a4aafd40a5daefef8f3f7fd5559cd3a0854598dfd5a13b2acd112ad348ee1f\" pid:3460 exited_at:{seconds:1748319934 nanos:471236516}" May 27 04:25:34.479657 containerd[1585]: time="2025-05-27T04:25:34.479604229Z" level=info msg="StartContainer for \"95a4aafd40a5daefef8f3f7fd5559cd3a0854598dfd5a13b2acd112ad348ee1f\" returns successfully" May 27 04:25:34.512507 containerd[1585]: time="2025-05-27T04:25:34.512060750Z" level=info msg="StartContainer for \"d04aeb6f3c92d8f7034988ab2a702ad734fdd5a3bef3f1b4b7ed236d12f143e9\" returns successfully" May 27 04:25:35.330944 containerd[1585]: time="2025-05-27T04:25:35.330836391Z" level=info msg="CreateContainer within sandbox \"4e3fb1663af16a28d51e554ba0db4356d3d77d5691a48c284718e67a1c3dc686\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 27 04:25:35.349717 containerd[1585]: time="2025-05-27T04:25:35.349134746Z" level=info msg="Container f2edfdf3d3467864fbdf7d03369d59d312c0f0fe1796c4a9f2bd2cc37078d57c: CDI devices from CRI Config.CDIDevices: []" May 27 04:25:35.405380 containerd[1585]: time="2025-05-27T04:25:35.405021776Z" level=info msg="CreateContainer within sandbox \"4e3fb1663af16a28d51e554ba0db4356d3d77d5691a48c284718e67a1c3dc686\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f2edfdf3d3467864fbdf7d03369d59d312c0f0fe1796c4a9f2bd2cc37078d57c\"" May 27 04:25:35.407273 kubelet[2884]: I0527 04:25:35.406948 2884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-dkmw2" podStartSLOduration=2.54774323 podStartE2EDuration="16.406887156s" podCreationTimestamp="2025-05-27 04:25:19 +0000 UTC" firstStartedPulling="2025-05-27 04:25:20.450077477 +0000 UTC m=+5.557859042" lastFinishedPulling="2025-05-27 04:25:34.309221403 +0000 UTC m=+19.417002968" observedRunningTime="2025-05-27 04:25:35.406141183 +0000 UTC m=+20.513922787" watchObservedRunningTime="2025-05-27 04:25:35.406887156 +0000 UTC m=+20.514668729" May 27 04:25:35.411440 containerd[1585]: time="2025-05-27T04:25:35.409478387Z" level=info msg="StartContainer for \"f2edfdf3d3467864fbdf7d03369d59d312c0f0fe1796c4a9f2bd2cc37078d57c\"" May 27 04:25:35.412145 containerd[1585]: time="2025-05-27T04:25:35.412078391Z" level=info msg="connecting to shim f2edfdf3d3467864fbdf7d03369d59d312c0f0fe1796c4a9f2bd2cc37078d57c" address="unix:///run/containerd/s/e31e875f511ce2f5967bd19bb31a91fc263e3844e43b47ea13de3ec0191e5f18" protocol=ttrpc version=3 May 27 04:25:35.468639 systemd[1]: Started cri-containerd-f2edfdf3d3467864fbdf7d03369d59d312c0f0fe1796c4a9f2bd2cc37078d57c.scope - libcontainer container f2edfdf3d3467864fbdf7d03369d59d312c0f0fe1796c4a9f2bd2cc37078d57c. 
May 27 04:25:35.617953 containerd[1585]: time="2025-05-27T04:25:35.617441973Z" level=info msg="StartContainer for \"f2edfdf3d3467864fbdf7d03369d59d312c0f0fe1796c4a9f2bd2cc37078d57c\" returns successfully" May 27 04:25:35.895306 containerd[1585]: time="2025-05-27T04:25:35.895005355Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2edfdf3d3467864fbdf7d03369d59d312c0f0fe1796c4a9f2bd2cc37078d57c\" id:\"f903a700e699a2ce5691968f535ce6da7326d8af87528d0cbffdd1d82a942b70\" pid:3548 exited_at:{seconds:1748319935 nanos:894605612}" May 27 04:25:35.920698 kubelet[2884]: I0527 04:25:35.920658 2884 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 27 04:25:36.016915 systemd[1]: Created slice kubepods-burstable-pod46a73350_ea47_4666_ba31_b64ad4cfdfaa.slice - libcontainer container kubepods-burstable-pod46a73350_ea47_4666_ba31_b64ad4cfdfaa.slice. May 27 04:25:36.030886 systemd[1]: Created slice kubepods-burstable-pod3f598ecf_20f4_49a3_be4b_f1046457ac3c.slice - libcontainer container kubepods-burstable-pod3f598ecf_20f4_49a3_be4b_f1046457ac3c.slice. May 27 04:25:36.077645 kubelet[2884]: I0527 04:25:36.077587 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9mxk\" (UniqueName: \"kubernetes.io/projected/3f598ecf-20f4-49a3-be4b-f1046457ac3c-kube-api-access-z9mxk\") pod \"coredns-668d6bf9bc-rnb6r\" (UID: \"3f598ecf-20f4-49a3-be4b-f1046457ac3c\") " pod="kube-system/coredns-668d6bf9bc-rnb6r" May 27 04:25:36.077645 kubelet[2884]: I0527 04:25:36.077655 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f598ecf-20f4-49a3-be4b-f1046457ac3c-config-volume\") pod \"coredns-668d6bf9bc-rnb6r\" (UID: \"3f598ecf-20f4-49a3-be4b-f1046457ac3c\") " pod="kube-system/coredns-668d6bf9bc-rnb6r" May 27 04:25:36.077921 kubelet[2884]: I0527 04:25:36.077692 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46a73350-ea47-4666-ba31-b64ad4cfdfaa-config-volume\") pod \"coredns-668d6bf9bc-6r6tc\" (UID: \"46a73350-ea47-4666-ba31-b64ad4cfdfaa\") " pod="kube-system/coredns-668d6bf9bc-6r6tc" May 27 04:25:36.077921 kubelet[2884]: I0527 04:25:36.077718 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlvqr\" (UniqueName: \"kubernetes.io/projected/46a73350-ea47-4666-ba31-b64ad4cfdfaa-kube-api-access-hlvqr\") pod \"coredns-668d6bf9bc-6r6tc\" (UID: \"46a73350-ea47-4666-ba31-b64ad4cfdfaa\") " pod="kube-system/coredns-668d6bf9bc-6r6tc" May 27 04:25:36.329776 containerd[1585]: time="2025-05-27T04:25:36.329089429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6r6tc,Uid:46a73350-ea47-4666-ba31-b64ad4cfdfaa,Namespace:kube-system,Attempt:0,}" May 27 04:25:36.340178 containerd[1585]: time="2025-05-27T04:25:36.339992495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rnb6r,Uid:3f598ecf-20f4-49a3-be4b-f1046457ac3c,Namespace:kube-system,Attempt:0,}" May 27 04:25:38.688148 systemd-networkd[1530]: cilium_host: Link UP May 27 04:25:38.688476 systemd-networkd[1530]: cilium_net: Link UP May 27 04:25:38.688802 systemd-networkd[1530]: cilium_net: Gained carrier May 27 04:25:38.689082 systemd-networkd[1530]: cilium_host: Gained carrier May 27 04:25:38.738661 systemd-networkd[1530]: cilium_net: Gained 
IPv6LL May 27 04:25:38.860024 systemd-networkd[1530]: cilium_vxlan: Link UP May 27 04:25:38.860038 systemd-networkd[1530]: cilium_vxlan: Gained carrier May 27 04:25:39.387619 kernel: NET: Registered PF_ALG protocol family May 27 04:25:39.404759 systemd-networkd[1530]: cilium_host: Gained IPv6LL May 27 04:25:40.463441 systemd-networkd[1530]: lxc_health: Link UP May 27 04:25:40.474356 systemd-networkd[1530]: lxc_health: Gained carrier May 27 04:25:40.812621 systemd-networkd[1530]: cilium_vxlan: Gained IPv6LL May 27 04:25:40.969634 kernel: eth0: renamed from tmp63647 May 27 04:25:40.981051 systemd-networkd[1530]: lxc34c1a1d61961: Link UP May 27 04:25:40.984075 systemd-networkd[1530]: lxc34c1a1d61961: Gained carrier May 27 04:25:40.985290 systemd-networkd[1530]: lxc9cd29108b8a5: Link UP May 27 04:25:40.994301 kernel: eth0: renamed from tmp2768c May 27 04:25:40.997647 systemd-networkd[1530]: lxc9cd29108b8a5: Gained carrier May 27 04:25:42.149756 kubelet[2884]: I0527 04:25:42.148881 2884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qq6x9" podStartSLOduration=12.305888822 podStartE2EDuration="23.148844485s" podCreationTimestamp="2025-05-27 04:25:19 +0000 UTC" firstStartedPulling="2025-05-27 04:25:20.275868404 +0000 UTC m=+5.383649964" lastFinishedPulling="2025-05-27 04:25:31.118824061 +0000 UTC m=+16.226605627" observedRunningTime="2025-05-27 04:25:36.532787734 +0000 UTC m=+21.640569344" watchObservedRunningTime="2025-05-27 04:25:42.148844485 +0000 UTC m=+27.256626066" May 27 04:25:42.348669 systemd-networkd[1530]: lxc34c1a1d61961: Gained IPv6LL May 27 04:25:42.415758 systemd-networkd[1530]: lxc_health: Gained IPv6LL May 27 04:25:42.732823 systemd-networkd[1530]: lxc9cd29108b8a5: Gained IPv6LL May 27 04:25:46.782505 containerd[1585]: time="2025-05-27T04:25:46.782357731Z" level=info msg="connecting to shim 63647eb3636b5ecf53acc95bf8f5094800fd621d79676e6ca80e075e1b8f88ed" address="unix:///run/containerd/s/feb07301cabb94983431fe73c2f0f02700dc4df7ef2abcfbb68c7fd326ead172" namespace=k8s.io protocol=ttrpc version=3 May 27 04:25:46.783528 containerd[1585]: time="2025-05-27T04:25:46.783489821Z" level=info msg="connecting to shim 2768cb797bbe2a543c7fcf8426437da6224583a0966afcf647536327333382ff" address="unix:///run/containerd/s/c84df1339448f2069bce6d8faa0a3e4057c1e668bd54c1333aa0f5d0b83967ee" namespace=k8s.io protocol=ttrpc version=3 May 27 04:25:46.854883 systemd[1]: Started cri-containerd-2768cb797bbe2a543c7fcf8426437da6224583a0966afcf647536327333382ff.scope - libcontainer container 2768cb797bbe2a543c7fcf8426437da6224583a0966afcf647536327333382ff. May 27 04:25:46.863709 systemd[1]: Started cri-containerd-63647eb3636b5ecf53acc95bf8f5094800fd621d79676e6ca80e075e1b8f88ed.scope - libcontainer container 63647eb3636b5ecf53acc95bf8f5094800fd621d79676e6ca80e075e1b8f88ed. 
May 27 04:25:46.980238 containerd[1585]: time="2025-05-27T04:25:46.979942824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rnb6r,Uid:3f598ecf-20f4-49a3-be4b-f1046457ac3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"63647eb3636b5ecf53acc95bf8f5094800fd621d79676e6ca80e075e1b8f88ed\"" May 27 04:25:46.988049 containerd[1585]: time="2025-05-27T04:25:46.988003236Z" level=info msg="CreateContainer within sandbox \"63647eb3636b5ecf53acc95bf8f5094800fd621d79676e6ca80e075e1b8f88ed\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 04:25:46.990340 containerd[1585]: time="2025-05-27T04:25:46.989885942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6r6tc,Uid:46a73350-ea47-4666-ba31-b64ad4cfdfaa,Namespace:kube-system,Attempt:0,} returns sandbox id \"2768cb797bbe2a543c7fcf8426437da6224583a0966afcf647536327333382ff\"" May 27 04:25:46.996129 containerd[1585]: time="2025-05-27T04:25:46.995556414Z" level=info msg="CreateContainer within sandbox \"2768cb797bbe2a543c7fcf8426437da6224583a0966afcf647536327333382ff\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 04:25:47.010146 containerd[1585]: time="2025-05-27T04:25:47.010065728Z" level=info msg="Container 3e9117105ecb2d73cbee62e887cd3cf7d2dc89b7b16343654010613f7299d515: CDI devices from CRI Config.CDIDevices: []" May 27 04:25:47.016050 containerd[1585]: time="2025-05-27T04:25:47.015957919Z" level=info msg="Container 695f3f7b66684f8009fbe938261ef849f9f3070c7abf2347c41b869e0892391f: CDI devices from CRI Config.CDIDevices: []" May 27 04:25:47.022090 containerd[1585]: time="2025-05-27T04:25:47.022033873Z" level=info msg="CreateContainer within sandbox \"63647eb3636b5ecf53acc95bf8f5094800fd621d79676e6ca80e075e1b8f88ed\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3e9117105ecb2d73cbee62e887cd3cf7d2dc89b7b16343654010613f7299d515\"" May 27 04:25:47.023249 containerd[1585]: time="2025-05-27T04:25:47.023106729Z" level=info msg="StartContainer for \"3e9117105ecb2d73cbee62e887cd3cf7d2dc89b7b16343654010613f7299d515\"" May 27 04:25:47.026210 containerd[1585]: time="2025-05-27T04:25:47.026170832Z" level=info msg="connecting to shim 3e9117105ecb2d73cbee62e887cd3cf7d2dc89b7b16343654010613f7299d515" address="unix:///run/containerd/s/feb07301cabb94983431fe73c2f0f02700dc4df7ef2abcfbb68c7fd326ead172" protocol=ttrpc version=3 May 27 04:25:47.029662 containerd[1585]: time="2025-05-27T04:25:47.029609232Z" level=info msg="CreateContainer within sandbox \"2768cb797bbe2a543c7fcf8426437da6224583a0966afcf647536327333382ff\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"695f3f7b66684f8009fbe938261ef849f9f3070c7abf2347c41b869e0892391f\"" May 27 04:25:47.046253 containerd[1585]: time="2025-05-27T04:25:47.045171076Z" level=info msg="StartContainer for \"695f3f7b66684f8009fbe938261ef849f9f3070c7abf2347c41b869e0892391f\"" May 27 04:25:47.050964 containerd[1585]: time="2025-05-27T04:25:47.050905593Z" level=info msg="connecting to shim 695f3f7b66684f8009fbe938261ef849f9f3070c7abf2347c41b869e0892391f" address="unix:///run/containerd/s/c84df1339448f2069bce6d8faa0a3e4057c1e668bd54c1333aa0f5d0b83967ee" protocol=ttrpc version=3 May 27 04:25:47.073795 systemd[1]: Started cri-containerd-3e9117105ecb2d73cbee62e887cd3cf7d2dc89b7b16343654010613f7299d515.scope - libcontainer container 3e9117105ecb2d73cbee62e887cd3cf7d2dc89b7b16343654010613f7299d515. 
May 27 04:25:47.094845 systemd[1]: Started cri-containerd-695f3f7b66684f8009fbe938261ef849f9f3070c7abf2347c41b869e0892391f.scope - libcontainer container 695f3f7b66684f8009fbe938261ef849f9f3070c7abf2347c41b869e0892391f. May 27 04:25:47.160427 containerd[1585]: time="2025-05-27T04:25:47.160360190Z" level=info msg="StartContainer for \"695f3f7b66684f8009fbe938261ef849f9f3070c7abf2347c41b869e0892391f\" returns successfully" May 27 04:25:47.160821 containerd[1585]: time="2025-05-27T04:25:47.160731285Z" level=info msg="StartContainer for \"3e9117105ecb2d73cbee62e887cd3cf7d2dc89b7b16343654010613f7299d515\" returns successfully" May 27 04:25:47.422198 kubelet[2884]: I0527 04:25:47.422003 2884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rnb6r" podStartSLOduration=28.421969155 podStartE2EDuration="28.421969155s" podCreationTimestamp="2025-05-27 04:25:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 04:25:47.417525874 +0000 UTC m=+32.525307481" watchObservedRunningTime="2025-05-27 04:25:47.421969155 +0000 UTC m=+32.529750730" May 27 04:25:47.436579 kubelet[2884]: I0527 04:25:47.436047 2884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6r6tc" podStartSLOduration=28.436024333 podStartE2EDuration="28.436024333s" podCreationTimestamp="2025-05-27 04:25:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 04:25:47.435277915 +0000 UTC m=+32.543059520" watchObservedRunningTime="2025-05-27 04:25:47.436024333 +0000 UTC m=+32.543805920" May 27 04:25:47.754888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4202100394.mount: Deactivated successfully. May 27 04:26:32.117370 systemd[1]: Started sshd@9-10.244.19.66:22-139.178.68.195:34696.service - OpenSSH per-connection server daemon (139.178.68.195:34696). May 27 04:26:33.103585 sshd[4196]: Accepted publickey for core from 139.178.68.195 port 34696 ssh2: RSA SHA256:eaUZQaqMkKPp5jWU0A069WbcP/hBT0dWaBlUqWT+u6Q May 27 04:26:33.108959 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:26:33.121824 systemd-logind[1573]: New session 12 of user core. May 27 04:26:33.125765 systemd[1]: Started session-12.scope - Session 12 of User core. May 27 04:26:34.286296 sshd[4200]: Connection closed by 139.178.68.195 port 34696 May 27 04:26:34.287578 sshd-session[4196]: pam_unix(sshd:session): session closed for user core May 27 04:26:34.292258 systemd[1]: sshd@9-10.244.19.66:22-139.178.68.195:34696.service: Deactivated successfully. May 27 04:26:34.295775 systemd[1]: session-12.scope: Deactivated successfully. May 27 04:26:34.299261 systemd-logind[1573]: Session 12 logged out. Waiting for processes to exit. May 27 04:26:34.300980 systemd-logind[1573]: Removed session 12. May 27 04:26:39.505616 systemd[1]: Started sshd@10-10.244.19.66:22-139.178.68.195:58918.service - OpenSSH per-connection server daemon (139.178.68.195:58918). May 27 04:26:40.450524 sshd[4213]: Accepted publickey for core from 139.178.68.195 port 58918 ssh2: RSA SHA256:eaUZQaqMkKPp5jWU0A069WbcP/hBT0dWaBlUqWT+u6Q May 27 04:26:40.453500 sshd-session[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:26:40.464207 systemd-logind[1573]: New session 13 of user core. 
May 27 04:26:40.469633 systemd[1]: Started session-13.scope - Session 13 of User core. May 27 04:26:41.199716 sshd[4215]: Connection closed by 139.178.68.195 port 58918 May 27 04:26:41.200617 sshd-session[4213]: pam_unix(sshd:session): session closed for user core May 27 04:26:41.205896 systemd[1]: sshd@10-10.244.19.66:22-139.178.68.195:58918.service: Deactivated successfully. May 27 04:26:41.209751 systemd[1]: session-13.scope: Deactivated successfully. May 27 04:26:41.211515 systemd-logind[1573]: Session 13 logged out. Waiting for processes to exit. May 27 04:26:41.214287 systemd-logind[1573]: Removed session 13. May 27 04:26:46.349531 systemd[1]: Started sshd@11-10.244.19.66:22-139.178.68.195:56540.service - OpenSSH per-connection server daemon (139.178.68.195:56540). May 27 04:26:47.286552 sshd[4229]: Accepted publickey for core from 139.178.68.195 port 56540 ssh2: RSA SHA256:eaUZQaqMkKPp5jWU0A069WbcP/hBT0dWaBlUqWT+u6Q May 27 04:26:47.289660 sshd-session[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:26:47.308215 systemd-logind[1573]: New session 14 of user core. May 27 04:26:47.310633 systemd[1]: Started session-14.scope - Session 14 of User core. May 27 04:26:48.008092 sshd[4232]: Connection closed by 139.178.68.195 port 56540 May 27 04:26:48.009066 sshd-session[4229]: pam_unix(sshd:session): session closed for user core May 27 04:26:48.014354 systemd[1]: sshd@11-10.244.19.66:22-139.178.68.195:56540.service: Deactivated successfully. May 27 04:26:48.016648 systemd[1]: session-14.scope: Deactivated successfully. May 27 04:26:48.018119 systemd-logind[1573]: Session 14 logged out. Waiting for processes to exit. May 27 04:26:48.020367 systemd-logind[1573]: Removed session 14. May 27 04:26:53.171028 systemd[1]: Started sshd@12-10.244.19.66:22-139.178.68.195:56544.service - OpenSSH per-connection server daemon (139.178.68.195:56544). May 27 04:26:54.115554 sshd[4247]: Accepted publickey for core from 139.178.68.195 port 56544 ssh2: RSA SHA256:eaUZQaqMkKPp5jWU0A069WbcP/hBT0dWaBlUqWT+u6Q May 27 04:26:54.117983 sshd-session[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:26:54.128016 systemd-logind[1573]: New session 15 of user core. May 27 04:26:54.138118 systemd[1]: Started session-15.scope - Session 15 of User core. May 27 04:26:54.838363 sshd[4249]: Connection closed by 139.178.68.195 port 56544 May 27 04:26:54.839573 sshd-session[4247]: pam_unix(sshd:session): session closed for user core May 27 04:26:54.845793 systemd[1]: sshd@12-10.244.19.66:22-139.178.68.195:56544.service: Deactivated successfully. May 27 04:26:54.848876 systemd[1]: session-15.scope: Deactivated successfully. May 27 04:26:54.850803 systemd-logind[1573]: Session 15 logged out. Waiting for processes to exit. May 27 04:26:54.853153 systemd-logind[1573]: Removed session 15. May 27 04:26:54.999592 systemd[1]: Started sshd@13-10.244.19.66:22-139.178.68.195:36162.service - OpenSSH per-connection server daemon (139.178.68.195:36162). May 27 04:26:55.912960 sshd[4262]: Accepted publickey for core from 139.178.68.195 port 36162 ssh2: RSA SHA256:eaUZQaqMkKPp5jWU0A069WbcP/hBT0dWaBlUqWT+u6Q May 27 04:26:55.915589 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:26:55.922887 systemd-logind[1573]: New session 16 of user core. May 27 04:26:55.929643 systemd[1]: Started session-16.scope - Session 16 of User core. 
May 27 04:26:56.715489 sshd[4264]: Connection closed by 139.178.68.195 port 36162 May 27 04:26:56.717012 sshd-session[4262]: pam_unix(sshd:session): session closed for user core May 27 04:26:56.730007 systemd[1]: sshd@13-10.244.19.66:22-139.178.68.195:36162.service: Deactivated successfully. May 27 04:26:56.733270 systemd[1]: session-16.scope: Deactivated successfully. May 27 04:26:56.735174 systemd-logind[1573]: Session 16 logged out. Waiting for processes to exit. May 27 04:26:56.739035 systemd-logind[1573]: Removed session 16. May 27 04:26:56.874226 systemd[1]: Started sshd@14-10.244.19.66:22-139.178.68.195:36164.service - OpenSSH per-connection server daemon (139.178.68.195:36164). May 27 04:26:57.832989 sshd[4274]: Accepted publickey for core from 139.178.68.195 port 36164 ssh2: RSA SHA256:eaUZQaqMkKPp5jWU0A069WbcP/hBT0dWaBlUqWT+u6Q May 27 04:26:57.835094 sshd-session[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:26:57.842796 systemd-logind[1573]: New session 17 of user core. May 27 04:26:57.851678 systemd[1]: Started session-17.scope - Session 17 of User core. May 27 04:26:58.568605 sshd[4276]: Connection closed by 139.178.68.195 port 36164 May 27 04:26:58.569556 sshd-session[4274]: pam_unix(sshd:session): session closed for user core May 27 04:26:58.575144 systemd[1]: sshd@14-10.244.19.66:22-139.178.68.195:36164.service: Deactivated successfully. May 27 04:26:58.578363 systemd[1]: session-17.scope: Deactivated successfully. May 27 04:26:58.580026 systemd-logind[1573]: Session 17 logged out. Waiting for processes to exit. May 27 04:26:58.582458 systemd-logind[1573]: Removed session 17. May 27 04:27:03.724544 systemd[1]: Started sshd@15-10.244.19.66:22-139.178.68.195:41730.service - OpenSSH per-connection server daemon (139.178.68.195:41730). May 27 04:27:04.641677 sshd[4288]: Accepted publickey for core from 139.178.68.195 port 41730 ssh2: RSA SHA256:eaUZQaqMkKPp5jWU0A069WbcP/hBT0dWaBlUqWT+u6Q May 27 04:27:04.643752 sshd-session[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:27:04.650902 systemd-logind[1573]: New session 18 of user core. May 27 04:27:04.664849 systemd[1]: Started session-18.scope - Session 18 of User core. May 27 04:27:05.361886 sshd[4290]: Connection closed by 139.178.68.195 port 41730 May 27 04:27:05.361702 sshd-session[4288]: pam_unix(sshd:session): session closed for user core May 27 04:27:05.367548 systemd[1]: sshd@15-10.244.19.66:22-139.178.68.195:41730.service: Deactivated successfully. May 27 04:27:05.372560 systemd[1]: session-18.scope: Deactivated successfully. May 27 04:27:05.375611 systemd-logind[1573]: Session 18 logged out. Waiting for processes to exit. May 27 04:27:05.378577 systemd-logind[1573]: Removed session 18. May 27 04:27:10.521007 systemd[1]: Started sshd@16-10.244.19.66:22-139.178.68.195:41738.service - OpenSSH per-connection server daemon (139.178.68.195:41738). May 27 04:27:11.438897 sshd[4302]: Accepted publickey for core from 139.178.68.195 port 41738 ssh2: RSA SHA256:eaUZQaqMkKPp5jWU0A069WbcP/hBT0dWaBlUqWT+u6Q May 27 04:27:11.440671 sshd-session[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:27:11.448708 systemd-logind[1573]: New session 19 of user core. May 27 04:27:11.453727 systemd[1]: Started session-19.scope - Session 19 of User core. 
May 27 04:27:12.148352 sshd[4304]: Connection closed by 139.178.68.195 port 41738 May 27 04:27:12.148590 sshd-session[4302]: pam_unix(sshd:session): session closed for user core May 27 04:27:12.153716 systemd[1]: sshd@16-10.244.19.66:22-139.178.68.195:41738.service: Deactivated successfully. May 27 04:27:12.155928 systemd[1]: session-19.scope: Deactivated successfully. May 27 04:27:12.157332 systemd-logind[1573]: Session 19 logged out. Waiting for processes to exit. May 27 04:27:12.159615 systemd-logind[1573]: Removed session 19. May 27 04:27:12.313014 systemd[1]: Started sshd@17-10.244.19.66:22-139.178.68.195:41746.service - OpenSSH per-connection server daemon (139.178.68.195:41746). May 27 04:27:13.235354 sshd[4316]: Accepted publickey for core from 139.178.68.195 port 41746 ssh2: RSA SHA256:eaUZQaqMkKPp5jWU0A069WbcP/hBT0dWaBlUqWT+u6Q May 27 04:27:13.237373 sshd-session[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:27:13.244117 systemd-logind[1573]: New session 20 of user core. May 27 04:27:13.254662 systemd[1]: Started session-20.scope - Session 20 of User core. May 27 04:27:14.219257 sshd[4318]: Connection closed by 139.178.68.195 port 41746 May 27 04:27:14.220051 sshd-session[4316]: pam_unix(sshd:session): session closed for user core May 27 04:27:14.228426 systemd[1]: sshd@17-10.244.19.66:22-139.178.68.195:41746.service: Deactivated successfully. May 27 04:27:14.231814 systemd[1]: session-20.scope: Deactivated successfully. May 27 04:27:14.233702 systemd-logind[1573]: Session 20 logged out. Waiting for processes to exit. May 27 04:27:14.237114 systemd-logind[1573]: Removed session 20. May 27 04:27:14.376681 systemd[1]: Started sshd@18-10.244.19.66:22-139.178.68.195:59380.service - OpenSSH per-connection server daemon (139.178.68.195:59380). May 27 04:27:15.324801 sshd[4328]: Accepted publickey for core from 139.178.68.195 port 59380 ssh2: RSA SHA256:eaUZQaqMkKPp5jWU0A069WbcP/hBT0dWaBlUqWT+u6Q May 27 04:27:15.326995 sshd-session[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:27:15.335141 systemd-logind[1573]: New session 21 of user core. May 27 04:27:15.339657 systemd[1]: Started session-21.scope - Session 21 of User core. May 27 04:27:17.180588 sshd[4332]: Connection closed by 139.178.68.195 port 59380 May 27 04:27:17.181848 sshd-session[4328]: pam_unix(sshd:session): session closed for user core May 27 04:27:17.192533 systemd[1]: sshd@18-10.244.19.66:22-139.178.68.195:59380.service: Deactivated successfully. May 27 04:27:17.195310 systemd[1]: session-21.scope: Deactivated successfully. May 27 04:27:17.198046 systemd-logind[1573]: Session 21 logged out. Waiting for processes to exit. May 27 04:27:17.200593 systemd-logind[1573]: Removed session 21. May 27 04:27:17.376783 systemd[1]: Started sshd@19-10.244.19.66:22-139.178.68.195:59386.service - OpenSSH per-connection server daemon (139.178.68.195:59386). May 27 04:27:18.301468 sshd[4350]: Accepted publickey for core from 139.178.68.195 port 59386 ssh2: RSA SHA256:eaUZQaqMkKPp5jWU0A069WbcP/hBT0dWaBlUqWT+u6Q May 27 04:27:18.304552 sshd-session[4350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:27:18.314886 systemd-logind[1573]: New session 22 of user core. May 27 04:27:18.323723 systemd[1]: Started session-22.scope - Session 22 of User core. 
May 27 04:27:19.268707 sshd[4352]: Connection closed by 139.178.68.195 port 59386 May 27 04:27:19.269718 sshd-session[4350]: pam_unix(sshd:session): session closed for user core May 27 04:27:19.277032 systemd[1]: sshd@19-10.244.19.66:22-139.178.68.195:59386.service: Deactivated successfully. May 27 04:27:19.280482 systemd[1]: session-22.scope: Deactivated successfully. May 27 04:27:19.282465 systemd-logind[1573]: Session 22 logged out. Waiting for processes to exit. May 27 04:27:19.285322 systemd-logind[1573]: Removed session 22. May 27 04:27:19.425288 systemd[1]: Started sshd@20-10.244.19.66:22-139.178.68.195:59394.service - OpenSSH per-connection server daemon (139.178.68.195:59394). May 27 04:27:20.332303 sshd[4362]: Accepted publickey for core from 139.178.68.195 port 59394 ssh2: RSA SHA256:eaUZQaqMkKPp5jWU0A069WbcP/hBT0dWaBlUqWT+u6Q May 27 04:27:20.334233 sshd-session[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:27:20.341345 systemd-logind[1573]: New session 23 of user core. May 27 04:27:20.347613 systemd[1]: Started session-23.scope - Session 23 of User core. May 27 04:27:21.046068 sshd[4364]: Connection closed by 139.178.68.195 port 59394 May 27 04:27:21.047126 sshd-session[4362]: pam_unix(sshd:session): session closed for user core May 27 04:27:21.053050 systemd[1]: sshd@20-10.244.19.66:22-139.178.68.195:59394.service: Deactivated successfully. May 27 04:27:21.056226 systemd[1]: session-23.scope: Deactivated successfully. May 27 04:27:21.057928 systemd-logind[1573]: Session 23 logged out. Waiting for processes to exit. May 27 04:27:21.060171 systemd-logind[1573]: Removed session 23. May 27 04:27:26.203037 systemd[1]: Started sshd@21-10.244.19.66:22-139.178.68.195:53758.service - OpenSSH per-connection server daemon (139.178.68.195:53758). May 27 04:27:27.118917 sshd[4380]: Accepted publickey for core from 139.178.68.195 port 53758 ssh2: RSA SHA256:eaUZQaqMkKPp5jWU0A069WbcP/hBT0dWaBlUqWT+u6Q May 27 04:27:27.120985 sshd-session[4380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:27:27.129037 systemd-logind[1573]: New session 24 of user core. May 27 04:27:27.136610 systemd[1]: Started session-24.scope - Session 24 of User core. May 27 04:27:27.825547 sshd[4382]: Connection closed by 139.178.68.195 port 53758 May 27 04:27:27.824065 sshd-session[4380]: pam_unix(sshd:session): session closed for user core May 27 04:27:27.833170 systemd-logind[1573]: Session 24 logged out. Waiting for processes to exit. May 27 04:27:27.835275 systemd[1]: sshd@21-10.244.19.66:22-139.178.68.195:53758.service: Deactivated successfully. May 27 04:27:27.839801 systemd[1]: session-24.scope: Deactivated successfully. May 27 04:27:27.843145 systemd-logind[1573]: Removed session 24. May 27 04:27:32.981968 systemd[1]: Started sshd@22-10.244.19.66:22-139.178.68.195:53764.service - OpenSSH per-connection server daemon (139.178.68.195:53764). May 27 04:27:33.912039 sshd[4393]: Accepted publickey for core from 139.178.68.195 port 53764 ssh2: RSA SHA256:eaUZQaqMkKPp5jWU0A069WbcP/hBT0dWaBlUqWT+u6Q May 27 04:27:33.913995 sshd-session[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:27:33.922353 systemd-logind[1573]: New session 25 of user core. May 27 04:27:33.927728 systemd[1]: Started session-25.scope - Session 25 of User core. 
May 27 04:27:34.634424 sshd[4395]: Connection closed by 139.178.68.195 port 53764 May 27 04:27:34.635857 sshd-session[4393]: pam_unix(sshd:session): session closed for user core May 27 04:27:34.644212 systemd[1]: sshd@22-10.244.19.66:22-139.178.68.195:53764.service: Deactivated successfully. May 27 04:27:34.647793 systemd[1]: session-25.scope: Deactivated successfully. May 27 04:27:34.651244 systemd-logind[1573]: Session 25 logged out. Waiting for processes to exit. May 27 04:27:34.652842 systemd-logind[1573]: Removed session 25. May 27 04:27:39.796116 systemd[1]: Started sshd@23-10.244.19.66:22-139.178.68.195:54314.service - OpenSSH per-connection server daemon (139.178.68.195:54314). May 27 04:27:40.722970 sshd[4407]: Accepted publickey for core from 139.178.68.195 port 54314 ssh2: RSA SHA256:eaUZQaqMkKPp5jWU0A069WbcP/hBT0dWaBlUqWT+u6Q May 27 04:27:40.725515 sshd-session[4407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:27:40.732821 systemd-logind[1573]: New session 26 of user core. May 27 04:27:40.739615 systemd[1]: Started session-26.scope - Session 26 of User core. May 27 04:27:41.429125 sshd[4409]: Connection closed by 139.178.68.195 port 54314 May 27 04:27:41.430083 sshd-session[4407]: pam_unix(sshd:session): session closed for user core May 27 04:27:41.435640 systemd[1]: sshd@23-10.244.19.66:22-139.178.68.195:54314.service: Deactivated successfully. May 27 04:27:41.437959 systemd[1]: session-26.scope: Deactivated successfully. May 27 04:27:41.439247 systemd-logind[1573]: Session 26 logged out. Waiting for processes to exit. May 27 04:27:41.441738 systemd-logind[1573]: Removed session 26. May 27 04:27:41.587518 systemd[1]: Started sshd@24-10.244.19.66:22-139.178.68.195:54322.service - OpenSSH per-connection server daemon (139.178.68.195:54322). May 27 04:27:42.498826 sshd[4421]: Accepted publickey for core from 139.178.68.195 port 54322 ssh2: RSA SHA256:eaUZQaqMkKPp5jWU0A069WbcP/hBT0dWaBlUqWT+u6Q May 27 04:27:42.501473 sshd-session[4421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:27:42.509796 systemd-logind[1573]: New session 27 of user core. May 27 04:27:42.514648 systemd[1]: Started session-27.scope - Session 27 of User core. May 27 04:27:44.612332 containerd[1585]: time="2025-05-27T04:27:44.612206309Z" level=info msg="StopContainer for \"d04aeb6f3c92d8f7034988ab2a702ad734fdd5a3bef3f1b4b7ed236d12f143e9\" with timeout 30 (s)" May 27 04:27:44.616480 containerd[1585]: time="2025-05-27T04:27:44.615167621Z" level=info msg="Stop container \"d04aeb6f3c92d8f7034988ab2a702ad734fdd5a3bef3f1b4b7ed236d12f143e9\" with signal terminated" May 27 04:27:44.691161 systemd[1]: cri-containerd-d04aeb6f3c92d8f7034988ab2a702ad734fdd5a3bef3f1b4b7ed236d12f143e9.scope: Deactivated successfully. 
May 27 04:27:44.699083 containerd[1585]: time="2025-05-27T04:27:44.698752003Z" level=info msg="received exit event container_id:\"d04aeb6f3c92d8f7034988ab2a702ad734fdd5a3bef3f1b4b7ed236d12f143e9\" id:\"d04aeb6f3c92d8f7034988ab2a702ad734fdd5a3bef3f1b4b7ed236d12f143e9\" pid:3467 exited_at:{seconds:1748320064 nanos:697229286}" May 27 04:27:44.699920 containerd[1585]: time="2025-05-27T04:27:44.699888609Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d04aeb6f3c92d8f7034988ab2a702ad734fdd5a3bef3f1b4b7ed236d12f143e9\" id:\"d04aeb6f3c92d8f7034988ab2a702ad734fdd5a3bef3f1b4b7ed236d12f143e9\" pid:3467 exited_at:{seconds:1748320064 nanos:697229286}" May 27 04:27:44.707192 containerd[1585]: time="2025-05-27T04:27:44.707132355Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 04:27:44.725047 containerd[1585]: time="2025-05-27T04:27:44.724977506Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2edfdf3d3467864fbdf7d03369d59d312c0f0fe1796c4a9f2bd2cc37078d57c\" id:\"712d852c65c683a30bf939f67b792e15dedde45e565bbaffdc99c6b3e0eb9b2c\" pid:4449 exited_at:{seconds:1748320064 nanos:722925151}" May 27 04:27:44.730585 containerd[1585]: time="2025-05-27T04:27:44.730425554Z" level=info msg="StopContainer for \"f2edfdf3d3467864fbdf7d03369d59d312c0f0fe1796c4a9f2bd2cc37078d57c\" with timeout 2 (s)" May 27 04:27:44.731205 containerd[1585]: time="2025-05-27T04:27:44.731174857Z" level=info msg="Stop container \"f2edfdf3d3467864fbdf7d03369d59d312c0f0fe1796c4a9f2bd2cc37078d57c\" with signal terminated" May 27 04:27:44.750029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d04aeb6f3c92d8f7034988ab2a702ad734fdd5a3bef3f1b4b7ed236d12f143e9-rootfs.mount: Deactivated successfully. May 27 04:27:44.752654 systemd-networkd[1530]: lxc_health: Link DOWN May 27 04:27:44.753142 systemd-networkd[1530]: lxc_health: Lost carrier May 27 04:27:44.771444 containerd[1585]: time="2025-05-27T04:27:44.771066908Z" level=info msg="StopContainer for \"d04aeb6f3c92d8f7034988ab2a702ad734fdd5a3bef3f1b4b7ed236d12f143e9\" returns successfully" May 27 04:27:44.772335 systemd[1]: cri-containerd-f2edfdf3d3467864fbdf7d03369d59d312c0f0fe1796c4a9f2bd2cc37078d57c.scope: Deactivated successfully. May 27 04:27:44.773510 systemd[1]: cri-containerd-f2edfdf3d3467864fbdf7d03369d59d312c0f0fe1796c4a9f2bd2cc37078d57c.scope: Consumed 10.300s CPU time, 198.8M memory peak, 76.2M read from disk, 13.3M written to disk. 
May 27 04:27:44.774536 containerd[1585]: time="2025-05-27T04:27:44.774501558Z" level=info msg="StopPodSandbox for \"c87ae85ccfa9b87bfacd830e4230dbd77d2319eff67819a99b82c1ad1231cc97\"" May 27 04:27:44.774850 containerd[1585]: time="2025-05-27T04:27:44.774818457Z" level=info msg="Container to stop \"d04aeb6f3c92d8f7034988ab2a702ad734fdd5a3bef3f1b4b7ed236d12f143e9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 04:27:44.775940 containerd[1585]: time="2025-05-27T04:27:44.775811814Z" level=info msg="received exit event container_id:\"f2edfdf3d3467864fbdf7d03369d59d312c0f0fe1796c4a9f2bd2cc37078d57c\" id:\"f2edfdf3d3467864fbdf7d03369d59d312c0f0fe1796c4a9f2bd2cc37078d57c\" pid:3518 exited_at:{seconds:1748320064 nanos:775458035}" May 27 04:27:44.776516 containerd[1585]: time="2025-05-27T04:27:44.775909537Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2edfdf3d3467864fbdf7d03369d59d312c0f0fe1796c4a9f2bd2cc37078d57c\" id:\"f2edfdf3d3467864fbdf7d03369d59d312c0f0fe1796c4a9f2bd2cc37078d57c\" pid:3518 exited_at:{seconds:1748320064 nanos:775458035}" May 27 04:27:44.801086 systemd[1]: cri-containerd-c87ae85ccfa9b87bfacd830e4230dbd77d2319eff67819a99b82c1ad1231cc97.scope: Deactivated successfully. May 27 04:27:44.804648 containerd[1585]: time="2025-05-27T04:27:44.804532159Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c87ae85ccfa9b87bfacd830e4230dbd77d2319eff67819a99b82c1ad1231cc97\" id:\"c87ae85ccfa9b87bfacd830e4230dbd77d2319eff67819a99b82c1ad1231cc97\" pid:3097 exit_status:137 exited_at:{seconds:1748320064 nanos:803878640}" May 27 04:27:44.827442 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2edfdf3d3467864fbdf7d03369d59d312c0f0fe1796c4a9f2bd2cc37078d57c-rootfs.mount: Deactivated successfully. 
May 27 04:27:44.844900 containerd[1585]: time="2025-05-27T04:27:44.844759505Z" level=info msg="StopContainer for \"f2edfdf3d3467864fbdf7d03369d59d312c0f0fe1796c4a9f2bd2cc37078d57c\" returns successfully" May 27 04:27:44.846677 containerd[1585]: time="2025-05-27T04:27:44.846510215Z" level=info msg="StopPodSandbox for \"4e3fb1663af16a28d51e554ba0db4356d3d77d5691a48c284718e67a1c3dc686\"" May 27 04:27:44.847007 containerd[1585]: time="2025-05-27T04:27:44.846954310Z" level=info msg="Container to stop \"ed1c2d18cfcc5c178d069f019fdd97a5e1857d86221caeef0b1a59097cd4876e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 04:27:44.847224 containerd[1585]: time="2025-05-27T04:27:44.847199306Z" level=info msg="Container to stop \"f2edfdf3d3467864fbdf7d03369d59d312c0f0fe1796c4a9f2bd2cc37078d57c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 04:27:44.847425 containerd[1585]: time="2025-05-27T04:27:44.847357951Z" level=info msg="Container to stop \"6a2263ba4ef16f0c0df9b21fcd828df853eb26e2b78826178d4d50ead191e480\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 04:27:44.847425 containerd[1585]: time="2025-05-27T04:27:44.847379158Z" level=info msg="Container to stop \"632e6b8ef7eaa8d4d90f0f7950393a3eb8cef1534e06a03f955103e34238834f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 04:27:44.847723 containerd[1585]: time="2025-05-27T04:27:44.847698405Z" level=info msg="Container to stop \"95a4aafd40a5daefef8f3f7fd5559cd3a0854598dfd5a13b2acd112ad348ee1f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 04:27:44.861688 systemd[1]: cri-containerd-4e3fb1663af16a28d51e554ba0db4356d3d77d5691a48c284718e67a1c3dc686.scope: Deactivated successfully. May 27 04:27:44.876056 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c87ae85ccfa9b87bfacd830e4230dbd77d2319eff67819a99b82c1ad1231cc97-rootfs.mount: Deactivated successfully. May 27 04:27:44.880487 containerd[1585]: time="2025-05-27T04:27:44.879343155Z" level=info msg="shim disconnected" id=c87ae85ccfa9b87bfacd830e4230dbd77d2319eff67819a99b82c1ad1231cc97 namespace=k8s.io May 27 04:27:44.880615 containerd[1585]: time="2025-05-27T04:27:44.880494819Z" level=warning msg="cleaning up after shim disconnected" id=c87ae85ccfa9b87bfacd830e4230dbd77d2319eff67819a99b82c1ad1231cc97 namespace=k8s.io May 27 04:27:44.897672 containerd[1585]: time="2025-05-27T04:27:44.880516302Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 04:27:44.910881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e3fb1663af16a28d51e554ba0db4356d3d77d5691a48c284718e67a1c3dc686-rootfs.mount: Deactivated successfully. 
May 27 04:27:44.915358 containerd[1585]: time="2025-05-27T04:27:44.915271523Z" level=info msg="shim disconnected" id=4e3fb1663af16a28d51e554ba0db4356d3d77d5691a48c284718e67a1c3dc686 namespace=k8s.io May 27 04:27:44.915358 containerd[1585]: time="2025-05-27T04:27:44.915345269Z" level=warning msg="cleaning up after shim disconnected" id=4e3fb1663af16a28d51e554ba0db4356d3d77d5691a48c284718e67a1c3dc686 namespace=k8s.io May 27 04:27:44.915590 containerd[1585]: time="2025-05-27T04:27:44.915363004Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 04:27:44.935424 containerd[1585]: time="2025-05-27T04:27:44.934870501Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e3fb1663af16a28d51e554ba0db4356d3d77d5691a48c284718e67a1c3dc686\" id:\"4e3fb1663af16a28d51e554ba0db4356d3d77d5691a48c284718e67a1c3dc686\" pid:3032 exit_status:137 exited_at:{seconds:1748320064 nanos:864705784}" May 27 04:27:44.935424 containerd[1585]: time="2025-05-27T04:27:44.935117397Z" level=info msg="received exit event sandbox_id:\"4e3fb1663af16a28d51e554ba0db4356d3d77d5691a48c284718e67a1c3dc686\" exit_status:137 exited_at:{seconds:1748320064 nanos:864705784}" May 27 04:27:44.935853 containerd[1585]: time="2025-05-27T04:27:44.935822087Z" level=info msg="received exit event sandbox_id:\"c87ae85ccfa9b87bfacd830e4230dbd77d2319eff67819a99b82c1ad1231cc97\" exit_status:137 exited_at:{seconds:1748320064 nanos:803878640}" May 27 04:27:44.941738 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c87ae85ccfa9b87bfacd830e4230dbd77d2319eff67819a99b82c1ad1231cc97-shm.mount: Deactivated successfully. May 27 04:27:44.949894 containerd[1585]: time="2025-05-27T04:27:44.949833391Z" level=info msg="TearDown network for sandbox \"4e3fb1663af16a28d51e554ba0db4356d3d77d5691a48c284718e67a1c3dc686\" successfully" May 27 04:27:44.950119 containerd[1585]: time="2025-05-27T04:27:44.950091089Z" level=info msg="StopPodSandbox for \"4e3fb1663af16a28d51e554ba0db4356d3d77d5691a48c284718e67a1c3dc686\" returns successfully" May 27 04:27:44.954519 containerd[1585]: time="2025-05-27T04:27:44.954464880Z" level=info msg="TearDown network for sandbox \"c87ae85ccfa9b87bfacd830e4230dbd77d2319eff67819a99b82c1ad1231cc97\" successfully" May 27 04:27:44.954699 containerd[1585]: time="2025-05-27T04:27:44.954674678Z" level=info msg="StopPodSandbox for \"c87ae85ccfa9b87bfacd830e4230dbd77d2319eff67819a99b82c1ad1231cc97\" returns successfully" May 27 04:27:45.077693 kubelet[2884]: I0527 04:27:45.071271 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-clustermesh-secrets\") pod \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\" (UID: \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\") " May 27 04:27:45.077693 kubelet[2884]: I0527 04:27:45.071765 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-xtables-lock\") pod \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\" (UID: \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\") " May 27 04:27:45.077693 kubelet[2884]: I0527 04:27:45.071809 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-etc-cni-netd\") pod \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\" (UID: \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\") " May 27 04:27:45.077693 kubelet[2884]: I0527 04:27:45.071959 
2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-bpf-maps\") pod \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\" (UID: \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\") " May 27 04:27:45.077693 kubelet[2884]: I0527 04:27:45.071987 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-hostproc\") pod \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\" (UID: \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\") " May 27 04:27:45.077693 kubelet[2884]: I0527 04:27:45.072140 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-cni-path\") pod \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\" (UID: \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\") " May 27 04:27:45.078696 kubelet[2884]: I0527 04:27:45.072294 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-hubble-tls\") pod \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\" (UID: \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\") " May 27 04:27:45.078696 kubelet[2884]: I0527 04:27:45.072342 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-cilium-run\") pod \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\" (UID: \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\") " May 27 04:27:45.078696 kubelet[2884]: I0527 04:27:45.072452 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-lib-modules\") pod \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\" (UID: \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\") " May 27 04:27:45.078696 kubelet[2884]: I0527 04:27:45.072481 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-host-proc-sys-net\") pod \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\" (UID: \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\") " May 27 04:27:45.078696 kubelet[2884]: I0527 04:27:45.072630 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/277c45c8-5acc-4d73-a9b9-4e89f7655c63-cilium-config-path\") pod \"277c45c8-5acc-4d73-a9b9-4e89f7655c63\" (UID: \"277c45c8-5acc-4d73-a9b9-4e89f7655c63\") " May 27 04:27:45.078696 kubelet[2884]: I0527 04:27:45.072784 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-host-proc-sys-kernel\") pod \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\" (UID: \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\") " May 27 04:27:45.078976 kubelet[2884]: I0527 04:27:45.072819 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-cilium-cgroup\") pod \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\" (UID: \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\") " May 27 04:27:45.078976 kubelet[2884]: I0527 04:27:45.073168 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-5pqtd\" (UniqueName: \"kubernetes.io/projected/277c45c8-5acc-4d73-a9b9-4e89f7655c63-kube-api-access-5pqtd\") pod \"277c45c8-5acc-4d73-a9b9-4e89f7655c63\" (UID: \"277c45c8-5acc-4d73-a9b9-4e89f7655c63\") " May 27 04:27:45.078976 kubelet[2884]: I0527 04:27:45.073355 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v664j\" (UniqueName: \"kubernetes.io/projected/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-kube-api-access-v664j\") pod \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\" (UID: \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\") " May 27 04:27:45.078976 kubelet[2884]: I0527 04:27:45.073528 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-cilium-config-path\") pod \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\" (UID: \"4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42\") " May 27 04:27:45.084520 kubelet[2884]: I0527 04:27:45.082241 2884 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42" (UID: "4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 04:27:45.084520 kubelet[2884]: I0527 04:27:45.082600 2884 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42" (UID: "4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 04:27:45.084520 kubelet[2884]: I0527 04:27:45.082641 2884 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42" (UID: "4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 04:27:45.084520 kubelet[2884]: I0527 04:27:45.082674 2884 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42" (UID: "4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 04:27:45.084520 kubelet[2884]: I0527 04:27:45.082736 2884 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-hostproc" (OuterVolumeSpecName: "hostproc") pod "4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42" (UID: "4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 04:27:45.085431 kubelet[2884]: I0527 04:27:45.084948 2884 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-cni-path" (OuterVolumeSpecName: "cni-path") pod "4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42" (UID: "4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 04:27:45.095225 kubelet[2884]: I0527 04:27:45.095146 2884 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42" (UID: "4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 04:27:45.096679 kubelet[2884]: I0527 04:27:45.096640 2884 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42" (UID: "4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 04:27:45.097384 kubelet[2884]: I0527 04:27:45.097355 2884 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42" (UID: "4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 04:27:45.097558 kubelet[2884]: I0527 04:27:45.097533 2884 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42" (UID: "4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 04:27:45.097707 kubelet[2884]: I0527 04:27:45.097682 2884 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42" (UID: "4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 27 04:27:45.105725 kubelet[2884]: I0527 04:27:45.105655 2884 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42" (UID: "4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 04:27:45.106244 kubelet[2884]: I0527 04:27:45.106149 2884 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-kube-api-access-v664j" (OuterVolumeSpecName: "kube-api-access-v664j") pod "4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42" (UID: "4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42"). InnerVolumeSpecName "kube-api-access-v664j". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 04:27:45.107804 kubelet[2884]: I0527 04:27:45.107758 2884 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42" (UID: "4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 04:27:45.108686 kubelet[2884]: I0527 04:27:45.108629 2884 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/277c45c8-5acc-4d73-a9b9-4e89f7655c63-kube-api-access-5pqtd" (OuterVolumeSpecName: "kube-api-access-5pqtd") pod "277c45c8-5acc-4d73-a9b9-4e89f7655c63" (UID: "277c45c8-5acc-4d73-a9b9-4e89f7655c63"). InnerVolumeSpecName "kube-api-access-5pqtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 04:27:45.109147 kubelet[2884]: I0527 04:27:45.109071 2884 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/277c45c8-5acc-4d73-a9b9-4e89f7655c63-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "277c45c8-5acc-4d73-a9b9-4e89f7655c63" (UID: "277c45c8-5acc-4d73-a9b9-4e89f7655c63"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 04:27:45.167885 systemd[1]: Removed slice kubepods-besteffort-pod277c45c8_5acc_4d73_a9b9_4e89f7655c63.slice - libcontainer container kubepods-besteffort-pod277c45c8_5acc_4d73_a9b9_4e89f7655c63.slice. May 27 04:27:45.171055 systemd[1]: Removed slice kubepods-burstable-pod4c0f3c6f_4c94_4778_94de_7f5a7f1a3e42.slice - libcontainer container kubepods-burstable-pod4c0f3c6f_4c94_4778_94de_7f5a7f1a3e42.slice. May 27 04:27:45.171197 systemd[1]: kubepods-burstable-pod4c0f3c6f_4c94_4778_94de_7f5a7f1a3e42.slice: Consumed 10.447s CPU time, 199.1M memory peak, 77.2M read from disk, 13.3M written to disk. May 27 04:27:45.174086 kubelet[2884]: I0527 04:27:45.174048 2884 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-cni-path\") on node \"srv-g11ua.gb1.brightbox.com\" DevicePath \"\"" May 27 04:27:45.174209 kubelet[2884]: I0527 04:27:45.174092 2884 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-hubble-tls\") on node \"srv-g11ua.gb1.brightbox.com\" DevicePath \"\"" May 27 04:27:45.174209 kubelet[2884]: I0527 04:27:45.174109 2884 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-cilium-run\") on node \"srv-g11ua.gb1.brightbox.com\" DevicePath \"\"" May 27 04:27:45.174209 kubelet[2884]: I0527 04:27:45.174123 2884 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-lib-modules\") on node \"srv-g11ua.gb1.brightbox.com\" DevicePath \"\"" May 27 04:27:45.174209 kubelet[2884]: I0527 04:27:45.174137 2884 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-host-proc-sys-net\") on node \"srv-g11ua.gb1.brightbox.com\" DevicePath \"\"" May 27 04:27:45.174209 kubelet[2884]: I0527 04:27:45.174155 2884 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/277c45c8-5acc-4d73-a9b9-4e89f7655c63-cilium-config-path\") on node \"srv-g11ua.gb1.brightbox.com\" DevicePath \"\"" May 27 04:27:45.174209 kubelet[2884]: I0527 04:27:45.174171 2884 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-host-proc-sys-kernel\") on node 
\"srv-g11ua.gb1.brightbox.com\" DevicePath \"\"" May 27 04:27:45.174209 kubelet[2884]: I0527 04:27:45.174188 2884 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-cilium-cgroup\") on node \"srv-g11ua.gb1.brightbox.com\" DevicePath \"\"" May 27 04:27:45.174209 kubelet[2884]: I0527 04:27:45.174203 2884 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5pqtd\" (UniqueName: \"kubernetes.io/projected/277c45c8-5acc-4d73-a9b9-4e89f7655c63-kube-api-access-5pqtd\") on node \"srv-g11ua.gb1.brightbox.com\" DevicePath \"\"" May 27 04:27:45.174618 kubelet[2884]: I0527 04:27:45.174218 2884 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v664j\" (UniqueName: \"kubernetes.io/projected/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-kube-api-access-v664j\") on node \"srv-g11ua.gb1.brightbox.com\" DevicePath \"\"" May 27 04:27:45.174618 kubelet[2884]: I0527 04:27:45.174231 2884 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-cilium-config-path\") on node \"srv-g11ua.gb1.brightbox.com\" DevicePath \"\"" May 27 04:27:45.174618 kubelet[2884]: I0527 04:27:45.174246 2884 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-clustermesh-secrets\") on node \"srv-g11ua.gb1.brightbox.com\" DevicePath \"\"" May 27 04:27:45.174618 kubelet[2884]: I0527 04:27:45.174262 2884 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-xtables-lock\") on node \"srv-g11ua.gb1.brightbox.com\" DevicePath \"\"" May 27 04:27:45.174618 kubelet[2884]: I0527 04:27:45.174276 2884 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-etc-cni-netd\") on node \"srv-g11ua.gb1.brightbox.com\" DevicePath \"\"" May 27 04:27:45.174618 kubelet[2884]: I0527 04:27:45.174289 2884 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-bpf-maps\") on node \"srv-g11ua.gb1.brightbox.com\" DevicePath \"\"" May 27 04:27:45.174618 kubelet[2884]: I0527 04:27:45.174302 2884 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42-hostproc\") on node \"srv-g11ua.gb1.brightbox.com\" DevicePath \"\"" May 27 04:27:45.352062 kubelet[2884]: E0527 04:27:45.351993 2884 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 27 04:27:45.742746 kubelet[2884]: I0527 04:27:45.742609 2884 scope.go:117] "RemoveContainer" containerID="d04aeb6f3c92d8f7034988ab2a702ad734fdd5a3bef3f1b4b7ed236d12f143e9" May 27 04:27:45.747124 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4e3fb1663af16a28d51e554ba0db4356d3d77d5691a48c284718e67a1c3dc686-shm.mount: Deactivated successfully. May 27 04:27:45.747551 systemd[1]: var-lib-kubelet-pods-277c45c8\x2d5acc\x2d4d73\x2da9b9\x2d4e89f7655c63-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5pqtd.mount: Deactivated successfully. 
May 27 04:27:45.747772 systemd[1]: var-lib-kubelet-pods-4c0f3c6f\x2d4c94\x2d4778\x2d94de\x2d7f5a7f1a3e42-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv664j.mount: Deactivated successfully. May 27 04:27:45.747960 systemd[1]: var-lib-kubelet-pods-4c0f3c6f\x2d4c94\x2d4778\x2d94de\x2d7f5a7f1a3e42-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 27 04:27:45.748140 systemd[1]: var-lib-kubelet-pods-4c0f3c6f\x2d4c94\x2d4778\x2d94de\x2d7f5a7f1a3e42-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 27 04:27:45.753435 containerd[1585]: time="2025-05-27T04:27:45.752365491Z" level=info msg="RemoveContainer for \"d04aeb6f3c92d8f7034988ab2a702ad734fdd5a3bef3f1b4b7ed236d12f143e9\"" May 27 04:27:45.764379 containerd[1585]: time="2025-05-27T04:27:45.764159100Z" level=info msg="RemoveContainer for \"d04aeb6f3c92d8f7034988ab2a702ad734fdd5a3bef3f1b4b7ed236d12f143e9\" returns successfully" May 27 04:27:45.765424 kubelet[2884]: I0527 04:27:45.765372 2884 scope.go:117] "RemoveContainer" containerID="d04aeb6f3c92d8f7034988ab2a702ad734fdd5a3bef3f1b4b7ed236d12f143e9" May 27 04:27:45.770815 containerd[1585]: time="2025-05-27T04:27:45.766729897Z" level=error msg="ContainerStatus for \"d04aeb6f3c92d8f7034988ab2a702ad734fdd5a3bef3f1b4b7ed236d12f143e9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d04aeb6f3c92d8f7034988ab2a702ad734fdd5a3bef3f1b4b7ed236d12f143e9\": not found" May 27 04:27:45.771143 kubelet[2884]: E0527 04:27:45.771103 2884 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d04aeb6f3c92d8f7034988ab2a702ad734fdd5a3bef3f1b4b7ed236d12f143e9\": not found" containerID="d04aeb6f3c92d8f7034988ab2a702ad734fdd5a3bef3f1b4b7ed236d12f143e9" May 27 04:27:45.771434 kubelet[2884]: I0527 04:27:45.771317 2884 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d04aeb6f3c92d8f7034988ab2a702ad734fdd5a3bef3f1b4b7ed236d12f143e9"} err="failed to get container status \"d04aeb6f3c92d8f7034988ab2a702ad734fdd5a3bef3f1b4b7ed236d12f143e9\": rpc error: code = NotFound desc = an error occurred when try to find container \"d04aeb6f3c92d8f7034988ab2a702ad734fdd5a3bef3f1b4b7ed236d12f143e9\": not found" May 27 04:27:45.772135 kubelet[2884]: I0527 04:27:45.772011 2884 scope.go:117] "RemoveContainer" containerID="f2edfdf3d3467864fbdf7d03369d59d312c0f0fe1796c4a9f2bd2cc37078d57c" May 27 04:27:45.776444 containerd[1585]: time="2025-05-27T04:27:45.776192196Z" level=info msg="RemoveContainer for \"f2edfdf3d3467864fbdf7d03369d59d312c0f0fe1796c4a9f2bd2cc37078d57c\"" May 27 04:27:45.783460 containerd[1585]: time="2025-05-27T04:27:45.783341430Z" level=info msg="RemoveContainer for \"f2edfdf3d3467864fbdf7d03369d59d312c0f0fe1796c4a9f2bd2cc37078d57c\" returns successfully" May 27 04:27:45.783807 kubelet[2884]: I0527 04:27:45.783778 2884 scope.go:117] "RemoveContainer" containerID="95a4aafd40a5daefef8f3f7fd5559cd3a0854598dfd5a13b2acd112ad348ee1f" May 27 04:27:45.786344 containerd[1585]: time="2025-05-27T04:27:45.786299749Z" level=info msg="RemoveContainer for \"95a4aafd40a5daefef8f3f7fd5559cd3a0854598dfd5a13b2acd112ad348ee1f\"" May 27 04:27:45.791683 containerd[1585]: time="2025-05-27T04:27:45.791448884Z" level=info msg="RemoveContainer for \"95a4aafd40a5daefef8f3f7fd5559cd3a0854598dfd5a13b2acd112ad348ee1f\" returns successfully" May 27 04:27:45.792752 kubelet[2884]: I0527 
04:27:45.792691 2884 scope.go:117] "RemoveContainer" containerID="632e6b8ef7eaa8d4d90f0f7950393a3eb8cef1534e06a03f955103e34238834f" May 27 04:27:45.796443 containerd[1585]: time="2025-05-27T04:27:45.796374457Z" level=info msg="RemoveContainer for \"632e6b8ef7eaa8d4d90f0f7950393a3eb8cef1534e06a03f955103e34238834f\"" May 27 04:27:45.803586 containerd[1585]: time="2025-05-27T04:27:45.803533796Z" level=info msg="RemoveContainer for \"632e6b8ef7eaa8d4d90f0f7950393a3eb8cef1534e06a03f955103e34238834f\" returns successfully" May 27 04:27:45.807779 kubelet[2884]: I0527 04:27:45.807744 2884 scope.go:117] "RemoveContainer" containerID="ed1c2d18cfcc5c178d069f019fdd97a5e1857d86221caeef0b1a59097cd4876e" May 27 04:27:45.812501 containerd[1585]: time="2025-05-27T04:27:45.812460627Z" level=info msg="RemoveContainer for \"ed1c2d18cfcc5c178d069f019fdd97a5e1857d86221caeef0b1a59097cd4876e\"" May 27 04:27:45.817321 containerd[1585]: time="2025-05-27T04:27:45.817267304Z" level=info msg="RemoveContainer for \"ed1c2d18cfcc5c178d069f019fdd97a5e1857d86221caeef0b1a59097cd4876e\" returns successfully" May 27 04:27:45.817871 kubelet[2884]: I0527 04:27:45.817723 2884 scope.go:117] "RemoveContainer" containerID="6a2263ba4ef16f0c0df9b21fcd828df853eb26e2b78826178d4d50ead191e480" May 27 04:27:45.819714 containerd[1585]: time="2025-05-27T04:27:45.819686807Z" level=info msg="RemoveContainer for \"6a2263ba4ef16f0c0df9b21fcd828df853eb26e2b78826178d4d50ead191e480\"" May 27 04:27:45.823173 containerd[1585]: time="2025-05-27T04:27:45.823125125Z" level=info msg="RemoveContainer for \"6a2263ba4ef16f0c0df9b21fcd828df853eb26e2b78826178d4d50ead191e480\" returns successfully" May 27 04:27:45.823826 kubelet[2884]: I0527 04:27:45.823791 2884 scope.go:117] "RemoveContainer" containerID="f2edfdf3d3467864fbdf7d03369d59d312c0f0fe1796c4a9f2bd2cc37078d57c" May 27 04:27:45.824081 containerd[1585]: time="2025-05-27T04:27:45.824041895Z" level=error msg="ContainerStatus for \"f2edfdf3d3467864fbdf7d03369d59d312c0f0fe1796c4a9f2bd2cc37078d57c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f2edfdf3d3467864fbdf7d03369d59d312c0f0fe1796c4a9f2bd2cc37078d57c\": not found" May 27 04:27:45.824430 kubelet[2884]: E0527 04:27:45.824370 2884 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f2edfdf3d3467864fbdf7d03369d59d312c0f0fe1796c4a9f2bd2cc37078d57c\": not found" containerID="f2edfdf3d3467864fbdf7d03369d59d312c0f0fe1796c4a9f2bd2cc37078d57c" May 27 04:27:45.824505 kubelet[2884]: I0527 04:27:45.824442 2884 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f2edfdf3d3467864fbdf7d03369d59d312c0f0fe1796c4a9f2bd2cc37078d57c"} err="failed to get container status \"f2edfdf3d3467864fbdf7d03369d59d312c0f0fe1796c4a9f2bd2cc37078d57c\": rpc error: code = NotFound desc = an error occurred when try to find container \"f2edfdf3d3467864fbdf7d03369d59d312c0f0fe1796c4a9f2bd2cc37078d57c\": not found" May 27 04:27:45.824505 kubelet[2884]: I0527 04:27:45.824473 2884 scope.go:117] "RemoveContainer" containerID="95a4aafd40a5daefef8f3f7fd5559cd3a0854598dfd5a13b2acd112ad348ee1f" May 27 04:27:45.824850 containerd[1585]: time="2025-05-27T04:27:45.824811114Z" level=error msg="ContainerStatus for \"95a4aafd40a5daefef8f3f7fd5559cd3a0854598dfd5a13b2acd112ad348ee1f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"95a4aafd40a5daefef8f3f7fd5559cd3a0854598dfd5a13b2acd112ad348ee1f\": not found" May 27 04:27:45.825065 kubelet[2884]: E0527 04:27:45.824990 2884 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"95a4aafd40a5daefef8f3f7fd5559cd3a0854598dfd5a13b2acd112ad348ee1f\": not found" containerID="95a4aafd40a5daefef8f3f7fd5559cd3a0854598dfd5a13b2acd112ad348ee1f" May 27 04:27:45.825129 kubelet[2884]: I0527 04:27:45.825071 2884 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"95a4aafd40a5daefef8f3f7fd5559cd3a0854598dfd5a13b2acd112ad348ee1f"} err="failed to get container status \"95a4aafd40a5daefef8f3f7fd5559cd3a0854598dfd5a13b2acd112ad348ee1f\": rpc error: code = NotFound desc = an error occurred when try to find container \"95a4aafd40a5daefef8f3f7fd5559cd3a0854598dfd5a13b2acd112ad348ee1f\": not found" May 27 04:27:45.825129 kubelet[2884]: I0527 04:27:45.825125 2884 scope.go:117] "RemoveContainer" containerID="632e6b8ef7eaa8d4d90f0f7950393a3eb8cef1534e06a03f955103e34238834f" May 27 04:27:45.825417 containerd[1585]: time="2025-05-27T04:27:45.825359085Z" level=error msg="ContainerStatus for \"632e6b8ef7eaa8d4d90f0f7950393a3eb8cef1534e06a03f955103e34238834f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"632e6b8ef7eaa8d4d90f0f7950393a3eb8cef1534e06a03f955103e34238834f\": not found" May 27 04:27:45.825758 kubelet[2884]: E0527 04:27:45.825730 2884 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"632e6b8ef7eaa8d4d90f0f7950393a3eb8cef1534e06a03f955103e34238834f\": not found" containerID="632e6b8ef7eaa8d4d90f0f7950393a3eb8cef1534e06a03f955103e34238834f" May 27 04:27:45.825840 kubelet[2884]: I0527 04:27:45.825764 2884 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"632e6b8ef7eaa8d4d90f0f7950393a3eb8cef1534e06a03f955103e34238834f"} err="failed to get container status \"632e6b8ef7eaa8d4d90f0f7950393a3eb8cef1534e06a03f955103e34238834f\": rpc error: code = NotFound desc = an error occurred when try to find container \"632e6b8ef7eaa8d4d90f0f7950393a3eb8cef1534e06a03f955103e34238834f\": not found" May 27 04:27:45.825840 kubelet[2884]: I0527 04:27:45.825812 2884 scope.go:117] "RemoveContainer" containerID="ed1c2d18cfcc5c178d069f019fdd97a5e1857d86221caeef0b1a59097cd4876e" May 27 04:27:45.826085 containerd[1585]: time="2025-05-27T04:27:45.826048232Z" level=error msg="ContainerStatus for \"ed1c2d18cfcc5c178d069f019fdd97a5e1857d86221caeef0b1a59097cd4876e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ed1c2d18cfcc5c178d069f019fdd97a5e1857d86221caeef0b1a59097cd4876e\": not found" May 27 04:27:45.826388 kubelet[2884]: E0527 04:27:45.826354 2884 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ed1c2d18cfcc5c178d069f019fdd97a5e1857d86221caeef0b1a59097cd4876e\": not found" containerID="ed1c2d18cfcc5c178d069f019fdd97a5e1857d86221caeef0b1a59097cd4876e" May 27 04:27:45.826467 kubelet[2884]: I0527 04:27:45.826390 2884 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ed1c2d18cfcc5c178d069f019fdd97a5e1857d86221caeef0b1a59097cd4876e"} err="failed to get container status 
\"ed1c2d18cfcc5c178d069f019fdd97a5e1857d86221caeef0b1a59097cd4876e\": rpc error: code = NotFound desc = an error occurred when try to find container \"ed1c2d18cfcc5c178d069f019fdd97a5e1857d86221caeef0b1a59097cd4876e\": not found" May 27 04:27:45.826467 kubelet[2884]: I0527 04:27:45.826429 2884 scope.go:117] "RemoveContainer" containerID="6a2263ba4ef16f0c0df9b21fcd828df853eb26e2b78826178d4d50ead191e480" May 27 04:27:45.826636 containerd[1585]: time="2025-05-27T04:27:45.826599020Z" level=error msg="ContainerStatus for \"6a2263ba4ef16f0c0df9b21fcd828df853eb26e2b78826178d4d50ead191e480\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6a2263ba4ef16f0c0df9b21fcd828df853eb26e2b78826178d4d50ead191e480\": not found" May 27 04:27:45.826779 kubelet[2884]: E0527 04:27:45.826750 2884 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6a2263ba4ef16f0c0df9b21fcd828df853eb26e2b78826178d4d50ead191e480\": not found" containerID="6a2263ba4ef16f0c0df9b21fcd828df853eb26e2b78826178d4d50ead191e480" May 27 04:27:45.826830 kubelet[2884]: I0527 04:27:45.826786 2884 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6a2263ba4ef16f0c0df9b21fcd828df853eb26e2b78826178d4d50ead191e480"} err="failed to get container status \"6a2263ba4ef16f0c0df9b21fcd828df853eb26e2b78826178d4d50ead191e480\": rpc error: code = NotFound desc = an error occurred when try to find container \"6a2263ba4ef16f0c0df9b21fcd828df853eb26e2b78826178d4d50ead191e480\": not found" May 27 04:27:46.655428 sshd[4423]: Connection closed by 139.178.68.195 port 54322 May 27 04:27:46.656045 sshd-session[4421]: pam_unix(sshd:session): session closed for user core May 27 04:27:46.662350 systemd[1]: sshd@24-10.244.19.66:22-139.178.68.195:54322.service: Deactivated successfully. May 27 04:27:46.666657 systemd[1]: session-27.scope: Deactivated successfully. May 27 04:27:46.669206 systemd-logind[1573]: Session 27 logged out. Waiting for processes to exit. May 27 04:27:46.671586 systemd-logind[1573]: Removed session 27. May 27 04:27:46.817477 systemd[1]: Started sshd@25-10.244.19.66:22-139.178.68.195:54168.service - OpenSSH per-connection server daemon (139.178.68.195:54168). May 27 04:27:47.147270 kubelet[2884]: I0527 04:27:47.147151 2884 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="277c45c8-5acc-4d73-a9b9-4e89f7655c63" path="/var/lib/kubelet/pods/277c45c8-5acc-4d73-a9b9-4e89f7655c63/volumes" May 27 04:27:47.148045 kubelet[2884]: I0527 04:27:47.147997 2884 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42" path="/var/lib/kubelet/pods/4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42/volumes" May 27 04:27:47.746672 sshd[4577]: Accepted publickey for core from 139.178.68.195 port 54168 ssh2: RSA SHA256:eaUZQaqMkKPp5jWU0A069WbcP/hBT0dWaBlUqWT+u6Q May 27 04:27:47.748895 sshd-session[4577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:27:47.756889 systemd-logind[1573]: New session 28 of user core. May 27 04:27:47.762645 systemd[1]: Started session-28.scope - Session 28 of User core. 
May 27 04:27:48.229727 kubelet[2884]: I0527 04:27:48.229662 2884 setters.go:602] "Node became not ready" node="srv-g11ua.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-27T04:27:48Z","lastTransitionTime":"2025-05-27T04:27:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 27 04:27:48.925424 kubelet[2884]: I0527 04:27:48.923004 2884 memory_manager.go:355] "RemoveStaleState removing state" podUID="277c45c8-5acc-4d73-a9b9-4e89f7655c63" containerName="cilium-operator" May 27 04:27:48.925424 kubelet[2884]: I0527 04:27:48.923046 2884 memory_manager.go:355] "RemoveStaleState removing state" podUID="4c0f3c6f-4c94-4778-94de-7f5a7f1a3e42" containerName="cilium-agent" May 27 04:27:48.938492 systemd[1]: Created slice kubepods-burstable-podca860882_6e2e_47ee_bf6c_4a0b489d6bc3.slice - libcontainer container kubepods-burstable-podca860882_6e2e_47ee_bf6c_4a0b489d6bc3.slice. May 27 04:27:49.081122 sshd[4579]: Connection closed by 139.178.68.195 port 54168 May 27 04:27:49.082037 sshd-session[4577]: pam_unix(sshd:session): session closed for user core May 27 04:27:49.086579 systemd[1]: sshd@25-10.244.19.66:22-139.178.68.195:54168.service: Deactivated successfully. May 27 04:27:49.089887 systemd[1]: session-28.scope: Deactivated successfully. May 27 04:27:49.091936 systemd-logind[1573]: Session 28 logged out. Waiting for processes to exit. May 27 04:27:49.094899 systemd-logind[1573]: Removed session 28. May 27 04:27:49.098741 kubelet[2884]: I0527 04:27:49.098680 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ca860882-6e2e-47ee-bf6c-4a0b489d6bc3-cilium-run\") pod \"cilium-9n6qk\" (UID: \"ca860882-6e2e-47ee-bf6c-4a0b489d6bc3\") " pod="kube-system/cilium-9n6qk" May 27 04:27:49.098891 kubelet[2884]: I0527 04:27:49.098861 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ca860882-6e2e-47ee-bf6c-4a0b489d6bc3-clustermesh-secrets\") pod \"cilium-9n6qk\" (UID: \"ca860882-6e2e-47ee-bf6c-4a0b489d6bc3\") " pod="kube-system/cilium-9n6qk" May 27 04:27:49.099509 kubelet[2884]: I0527 04:27:49.098984 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ca860882-6e2e-47ee-bf6c-4a0b489d6bc3-host-proc-sys-kernel\") pod \"cilium-9n6qk\" (UID: \"ca860882-6e2e-47ee-bf6c-4a0b489d6bc3\") " pod="kube-system/cilium-9n6qk" May 27 04:27:49.099509 kubelet[2884]: I0527 04:27:49.099021 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5ccq\" (UniqueName: \"kubernetes.io/projected/ca860882-6e2e-47ee-bf6c-4a0b489d6bc3-kube-api-access-b5ccq\") pod \"cilium-9n6qk\" (UID: \"ca860882-6e2e-47ee-bf6c-4a0b489d6bc3\") " pod="kube-system/cilium-9n6qk" May 27 04:27:49.099509 kubelet[2884]: I0527 04:27:49.099050 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ca860882-6e2e-47ee-bf6c-4a0b489d6bc3-hostproc\") pod \"cilium-9n6qk\" (UID: \"ca860882-6e2e-47ee-bf6c-4a0b489d6bc3\") " pod="kube-system/cilium-9n6qk" May 27 04:27:49.099509 kubelet[2884]: I0527 04:27:49.099078 2884 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ca860882-6e2e-47ee-bf6c-4a0b489d6bc3-cilium-ipsec-secrets\") pod \"cilium-9n6qk\" (UID: \"ca860882-6e2e-47ee-bf6c-4a0b489d6bc3\") " pod="kube-system/cilium-9n6qk" May 27 04:27:49.099509 kubelet[2884]: I0527 04:27:49.099118 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ca860882-6e2e-47ee-bf6c-4a0b489d6bc3-cni-path\") pod \"cilium-9n6qk\" (UID: \"ca860882-6e2e-47ee-bf6c-4a0b489d6bc3\") " pod="kube-system/cilium-9n6qk" May 27 04:27:49.099509 kubelet[2884]: I0527 04:27:49.099144 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ca860882-6e2e-47ee-bf6c-4a0b489d6bc3-etc-cni-netd\") pod \"cilium-9n6qk\" (UID: \"ca860882-6e2e-47ee-bf6c-4a0b489d6bc3\") " pod="kube-system/cilium-9n6qk" May 27 04:27:49.099805 kubelet[2884]: I0527 04:27:49.099170 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca860882-6e2e-47ee-bf6c-4a0b489d6bc3-lib-modules\") pod \"cilium-9n6qk\" (UID: \"ca860882-6e2e-47ee-bf6c-4a0b489d6bc3\") " pod="kube-system/cilium-9n6qk" May 27 04:27:49.099805 kubelet[2884]: I0527 04:27:49.099194 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ca860882-6e2e-47ee-bf6c-4a0b489d6bc3-host-proc-sys-net\") pod \"cilium-9n6qk\" (UID: \"ca860882-6e2e-47ee-bf6c-4a0b489d6bc3\") " pod="kube-system/cilium-9n6qk" May 27 04:27:49.099805 kubelet[2884]: I0527 04:27:49.099224 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ca860882-6e2e-47ee-bf6c-4a0b489d6bc3-bpf-maps\") pod \"cilium-9n6qk\" (UID: \"ca860882-6e2e-47ee-bf6c-4a0b489d6bc3\") " pod="kube-system/cilium-9n6qk" May 27 04:27:49.099805 kubelet[2884]: I0527 04:27:49.099252 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca860882-6e2e-47ee-bf6c-4a0b489d6bc3-xtables-lock\") pod \"cilium-9n6qk\" (UID: \"ca860882-6e2e-47ee-bf6c-4a0b489d6bc3\") " pod="kube-system/cilium-9n6qk" May 27 04:27:49.099805 kubelet[2884]: I0527 04:27:49.099289 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ca860882-6e2e-47ee-bf6c-4a0b489d6bc3-cilium-cgroup\") pod \"cilium-9n6qk\" (UID: \"ca860882-6e2e-47ee-bf6c-4a0b489d6bc3\") " pod="kube-system/cilium-9n6qk" May 27 04:27:49.099805 kubelet[2884]: I0527 04:27:49.099323 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ca860882-6e2e-47ee-bf6c-4a0b489d6bc3-cilium-config-path\") pod \"cilium-9n6qk\" (UID: \"ca860882-6e2e-47ee-bf6c-4a0b489d6bc3\") " pod="kube-system/cilium-9n6qk" May 27 04:27:49.100113 kubelet[2884]: I0527 04:27:49.099354 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/ca860882-6e2e-47ee-bf6c-4a0b489d6bc3-hubble-tls\") pod \"cilium-9n6qk\" (UID: \"ca860882-6e2e-47ee-bf6c-4a0b489d6bc3\") " pod="kube-system/cilium-9n6qk" May 27 04:27:49.244500 containerd[1585]: time="2025-05-27T04:27:49.244268172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9n6qk,Uid:ca860882-6e2e-47ee-bf6c-4a0b489d6bc3,Namespace:kube-system,Attempt:0,}" May 27 04:27:49.249851 systemd[1]: Started sshd@26-10.244.19.66:22-139.178.68.195:54180.service - OpenSSH per-connection server daemon (139.178.68.195:54180). May 27 04:27:49.287440 containerd[1585]: time="2025-05-27T04:27:49.286746703Z" level=info msg="connecting to shim 4fd99156840be02830d7bc72c8e5d22dbb519a32952b6df7817e2425c8f4b3b5" address="unix:///run/containerd/s/7f0aa7f8b183f86ff09ac94384699c04b6055fcbc51eca62164bea56d5c2c26d" namespace=k8s.io protocol=ttrpc version=3 May 27 04:27:49.325652 systemd[1]: Started cri-containerd-4fd99156840be02830d7bc72c8e5d22dbb519a32952b6df7817e2425c8f4b3b5.scope - libcontainer container 4fd99156840be02830d7bc72c8e5d22dbb519a32952b6df7817e2425c8f4b3b5. May 27 04:27:49.361768 containerd[1585]: time="2025-05-27T04:27:49.361657817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9n6qk,Uid:ca860882-6e2e-47ee-bf6c-4a0b489d6bc3,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fd99156840be02830d7bc72c8e5d22dbb519a32952b6df7817e2425c8f4b3b5\"" May 27 04:27:49.366313 containerd[1585]: time="2025-05-27T04:27:49.365841030Z" level=info msg="CreateContainer within sandbox \"4fd99156840be02830d7bc72c8e5d22dbb519a32952b6df7817e2425c8f4b3b5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 27 04:27:49.375822 containerd[1585]: time="2025-05-27T04:27:49.375782452Z" level=info msg="Container 88ff529d295ed1864206daebba49c4a0576ede7ff83673d0d9783ca0a3f2e2b1: CDI devices from CRI Config.CDIDevices: []" May 27 04:27:49.382045 containerd[1585]: time="2025-05-27T04:27:49.381998760Z" level=info msg="CreateContainer within sandbox \"4fd99156840be02830d7bc72c8e5d22dbb519a32952b6df7817e2425c8f4b3b5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"88ff529d295ed1864206daebba49c4a0576ede7ff83673d0d9783ca0a3f2e2b1\"" May 27 04:27:49.384651 containerd[1585]: time="2025-05-27T04:27:49.384459087Z" level=info msg="StartContainer for \"88ff529d295ed1864206daebba49c4a0576ede7ff83673d0d9783ca0a3f2e2b1\"" May 27 04:27:49.387808 containerd[1585]: time="2025-05-27T04:27:49.387759799Z" level=info msg="connecting to shim 88ff529d295ed1864206daebba49c4a0576ede7ff83673d0d9783ca0a3f2e2b1" address="unix:///run/containerd/s/7f0aa7f8b183f86ff09ac94384699c04b6055fcbc51eca62164bea56d5c2c26d" protocol=ttrpc version=3 May 27 04:27:49.420662 systemd[1]: Started cri-containerd-88ff529d295ed1864206daebba49c4a0576ede7ff83673d0d9783ca0a3f2e2b1.scope - libcontainer container 88ff529d295ed1864206daebba49c4a0576ede7ff83673d0d9783ca0a3f2e2b1. May 27 04:27:49.464331 containerd[1585]: time="2025-05-27T04:27:49.464103205Z" level=info msg="StartContainer for \"88ff529d295ed1864206daebba49c4a0576ede7ff83673d0d9783ca0a3f2e2b1\" returns successfully" May 27 04:27:49.483019 systemd[1]: cri-containerd-88ff529d295ed1864206daebba49c4a0576ede7ff83673d0d9783ca0a3f2e2b1.scope: Deactivated successfully. May 27 04:27:49.484014 systemd[1]: cri-containerd-88ff529d295ed1864206daebba49c4a0576ede7ff83673d0d9783ca0a3f2e2b1.scope: Consumed 31ms CPU time, 9.1M memory peak, 2.8M read from disk. 
May 27 04:27:49.488295 containerd[1585]: time="2025-05-27T04:27:49.488052633Z" level=info msg="received exit event container_id:\"88ff529d295ed1864206daebba49c4a0576ede7ff83673d0d9783ca0a3f2e2b1\" id:\"88ff529d295ed1864206daebba49c4a0576ede7ff83673d0d9783ca0a3f2e2b1\" pid:4655 exited_at:{seconds:1748320069 nanos:487344027}" May 27 04:27:49.488817 containerd[1585]: time="2025-05-27T04:27:49.488737863Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88ff529d295ed1864206daebba49c4a0576ede7ff83673d0d9783ca0a3f2e2b1\" id:\"88ff529d295ed1864206daebba49c4a0576ede7ff83673d0d9783ca0a3f2e2b1\" pid:4655 exited_at:{seconds:1748320069 nanos:487344027}" May 27 04:27:49.776706 containerd[1585]: time="2025-05-27T04:27:49.776629692Z" level=info msg="CreateContainer within sandbox \"4fd99156840be02830d7bc72c8e5d22dbb519a32952b6df7817e2425c8f4b3b5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 27 04:27:49.788242 containerd[1585]: time="2025-05-27T04:27:49.788165387Z" level=info msg="Container 6b5e1a9b0932642fe34339b09cada860144b169acb569e7f45283d3ce5d05992: CDI devices from CRI Config.CDIDevices: []" May 27 04:27:49.796094 containerd[1585]: time="2025-05-27T04:27:49.796003500Z" level=info msg="CreateContainer within sandbox \"4fd99156840be02830d7bc72c8e5d22dbb519a32952b6df7817e2425c8f4b3b5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6b5e1a9b0932642fe34339b09cada860144b169acb569e7f45283d3ce5d05992\"" May 27 04:27:49.798869 containerd[1585]: time="2025-05-27T04:27:49.798059591Z" level=info msg="StartContainer for \"6b5e1a9b0932642fe34339b09cada860144b169acb569e7f45283d3ce5d05992\"" May 27 04:27:49.801130 containerd[1585]: time="2025-05-27T04:27:49.801001944Z" level=info msg="connecting to shim 6b5e1a9b0932642fe34339b09cada860144b169acb569e7f45283d3ce5d05992" address="unix:///run/containerd/s/7f0aa7f8b183f86ff09ac94384699c04b6055fcbc51eca62164bea56d5c2c26d" protocol=ttrpc version=3 May 27 04:27:49.828648 systemd[1]: Started cri-containerd-6b5e1a9b0932642fe34339b09cada860144b169acb569e7f45283d3ce5d05992.scope - libcontainer container 6b5e1a9b0932642fe34339b09cada860144b169acb569e7f45283d3ce5d05992. May 27 04:27:49.872674 containerd[1585]: time="2025-05-27T04:27:49.872627577Z" level=info msg="StartContainer for \"6b5e1a9b0932642fe34339b09cada860144b169acb569e7f45283d3ce5d05992\" returns successfully" May 27 04:27:49.886417 systemd[1]: cri-containerd-6b5e1a9b0932642fe34339b09cada860144b169acb569e7f45283d3ce5d05992.scope: Deactivated successfully. May 27 04:27:49.886824 systemd[1]: cri-containerd-6b5e1a9b0932642fe34339b09cada860144b169acb569e7f45283d3ce5d05992.scope: Consumed 28ms CPU time, 7.1M memory peak, 1.9M read from disk. 
May 27 04:27:49.888613 containerd[1585]: time="2025-05-27T04:27:49.888533656Z" level=info msg="received exit event container_id:\"6b5e1a9b0932642fe34339b09cada860144b169acb569e7f45283d3ce5d05992\" id:\"6b5e1a9b0932642fe34339b09cada860144b169acb569e7f45283d3ce5d05992\" pid:4699 exited_at:{seconds:1748320069 nanos:887803895}" May 27 04:27:49.888779 containerd[1585]: time="2025-05-27T04:27:49.888589631Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6b5e1a9b0932642fe34339b09cada860144b169acb569e7f45283d3ce5d05992\" id:\"6b5e1a9b0932642fe34339b09cada860144b169acb569e7f45283d3ce5d05992\" pid:4699 exited_at:{seconds:1748320069 nanos:887803895}" May 27 04:27:50.168693 sshd[4593]: Accepted publickey for core from 139.178.68.195 port 54180 ssh2: RSA SHA256:eaUZQaqMkKPp5jWU0A069WbcP/hBT0dWaBlUqWT+u6Q May 27 04:27:50.170971 sshd-session[4593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:27:50.180250 systemd-logind[1573]: New session 29 of user core. May 27 04:27:50.183655 systemd[1]: Started session-29.scope - Session 29 of User core. May 27 04:27:50.353945 kubelet[2884]: E0527 04:27:50.353863 2884 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 27 04:27:50.787058 containerd[1585]: time="2025-05-27T04:27:50.785498770Z" level=info msg="CreateContainer within sandbox \"4fd99156840be02830d7bc72c8e5d22dbb519a32952b6df7817e2425c8f4b3b5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 27 04:27:50.792570 sshd[4733]: Connection closed by 139.178.68.195 port 54180 May 27 04:27:50.796661 sshd-session[4593]: pam_unix(sshd:session): session closed for user core May 27 04:27:50.812772 containerd[1585]: time="2025-05-27T04:27:50.811363558Z" level=info msg="Container 58279509529f955300e94e96d737277062ef6ecb019ca9d1de70448019e4d81b: CDI devices from CRI Config.CDIDevices: []" May 27 04:27:50.816010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4255646406.mount: Deactivated successfully. May 27 04:27:50.822388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2256421630.mount: Deactivated successfully. May 27 04:27:50.823704 systemd[1]: sshd@26-10.244.19.66:22-139.178.68.195:54180.service: Deactivated successfully. May 27 04:27:50.827100 systemd[1]: session-29.scope: Deactivated successfully. May 27 04:27:50.832718 systemd-logind[1573]: Session 29 logged out. Waiting for processes to exit. May 27 04:27:50.837489 systemd-logind[1573]: Removed session 29. 
May 27 04:27:50.839138 containerd[1585]: time="2025-05-27T04:27:50.839055012Z" level=info msg="CreateContainer within sandbox \"4fd99156840be02830d7bc72c8e5d22dbb519a32952b6df7817e2425c8f4b3b5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"58279509529f955300e94e96d737277062ef6ecb019ca9d1de70448019e4d81b\"" May 27 04:27:50.840177 containerd[1585]: time="2025-05-27T04:27:50.840135960Z" level=info msg="StartContainer for \"58279509529f955300e94e96d737277062ef6ecb019ca9d1de70448019e4d81b\"" May 27 04:27:50.844556 containerd[1585]: time="2025-05-27T04:27:50.844491873Z" level=info msg="connecting to shim 58279509529f955300e94e96d737277062ef6ecb019ca9d1de70448019e4d81b" address="unix:///run/containerd/s/7f0aa7f8b183f86ff09ac94384699c04b6055fcbc51eca62164bea56d5c2c26d" protocol=ttrpc version=3 May 27 04:27:50.878113 systemd[1]: Started cri-containerd-58279509529f955300e94e96d737277062ef6ecb019ca9d1de70448019e4d81b.scope - libcontainer container 58279509529f955300e94e96d737277062ef6ecb019ca9d1de70448019e4d81b. May 27 04:27:50.948063 systemd[1]: Started sshd@27-10.244.19.66:22-139.178.68.195:54196.service - OpenSSH per-connection server daemon (139.178.68.195:54196). May 27 04:27:50.953370 containerd[1585]: time="2025-05-27T04:27:50.953297570Z" level=info msg="StartContainer for \"58279509529f955300e94e96d737277062ef6ecb019ca9d1de70448019e4d81b\" returns successfully" May 27 04:27:50.955957 systemd[1]: cri-containerd-58279509529f955300e94e96d737277062ef6ecb019ca9d1de70448019e4d81b.scope: Deactivated successfully. May 27 04:27:50.960962 containerd[1585]: time="2025-05-27T04:27:50.960600060Z" level=info msg="received exit event container_id:\"58279509529f955300e94e96d737277062ef6ecb019ca9d1de70448019e4d81b\" id:\"58279509529f955300e94e96d737277062ef6ecb019ca9d1de70448019e4d81b\" pid:4753 exited_at:{seconds:1748320070 nanos:960082917}" May 27 04:27:50.961660 containerd[1585]: time="2025-05-27T04:27:50.961243417Z" level=info msg="TaskExit event in podsandbox handler container_id:\"58279509529f955300e94e96d737277062ef6ecb019ca9d1de70448019e4d81b\" id:\"58279509529f955300e94e96d737277062ef6ecb019ca9d1de70448019e4d81b\" pid:4753 exited_at:{seconds:1748320070 nanos:960082917}" May 27 04:27:51.207695 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58279509529f955300e94e96d737277062ef6ecb019ca9d1de70448019e4d81b-rootfs.mount: Deactivated successfully. May 27 04:27:51.795477 containerd[1585]: time="2025-05-27T04:27:51.794519054Z" level=info msg="CreateContainer within sandbox \"4fd99156840be02830d7bc72c8e5d22dbb519a32952b6df7817e2425c8f4b3b5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 27 04:27:51.818429 containerd[1585]: time="2025-05-27T04:27:51.816083251Z" level=info msg="Container 192caf109a0889c12a7019a42e769df1c505564c189569c9f4ea0a62007fd9f1: CDI devices from CRI Config.CDIDevices: []" May 27 04:27:51.822574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3014796891.mount: Deactivated successfully. 
May 27 04:27:51.833335 containerd[1585]: time="2025-05-27T04:27:51.833282188Z" level=info msg="CreateContainer within sandbox \"4fd99156840be02830d7bc72c8e5d22dbb519a32952b6df7817e2425c8f4b3b5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"192caf109a0889c12a7019a42e769df1c505564c189569c9f4ea0a62007fd9f1\"" May 27 04:27:51.835418 containerd[1585]: time="2025-05-27T04:27:51.835344521Z" level=info msg="StartContainer for \"192caf109a0889c12a7019a42e769df1c505564c189569c9f4ea0a62007fd9f1\"" May 27 04:27:51.838321 containerd[1585]: time="2025-05-27T04:27:51.838152035Z" level=info msg="connecting to shim 192caf109a0889c12a7019a42e769df1c505564c189569c9f4ea0a62007fd9f1" address="unix:///run/containerd/s/7f0aa7f8b183f86ff09ac94384699c04b6055fcbc51eca62164bea56d5c2c26d" protocol=ttrpc version=3 May 27 04:27:51.854442 sshd[4768]: Accepted publickey for core from 139.178.68.195 port 54196 ssh2: RSA SHA256:eaUZQaqMkKPp5jWU0A069WbcP/hBT0dWaBlUqWT+u6Q May 27 04:27:51.858276 sshd-session[4768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 04:27:51.866354 systemd-logind[1573]: New session 30 of user core. May 27 04:27:51.871616 systemd[1]: Started session-30.scope - Session 30 of User core. May 27 04:27:51.884666 systemd[1]: Started cri-containerd-192caf109a0889c12a7019a42e769df1c505564c189569c9f4ea0a62007fd9f1.scope - libcontainer container 192caf109a0889c12a7019a42e769df1c505564c189569c9f4ea0a62007fd9f1. May 27 04:27:51.927171 systemd[1]: cri-containerd-192caf109a0889c12a7019a42e769df1c505564c189569c9f4ea0a62007fd9f1.scope: Deactivated successfully. May 27 04:27:51.930685 containerd[1585]: time="2025-05-27T04:27:51.930639664Z" level=info msg="TaskExit event in podsandbox handler container_id:\"192caf109a0889c12a7019a42e769df1c505564c189569c9f4ea0a62007fd9f1\" id:\"192caf109a0889c12a7019a42e769df1c505564c189569c9f4ea0a62007fd9f1\" pid:4796 exited_at:{seconds:1748320071 nanos:929923132}" May 27 04:27:51.931936 containerd[1585]: time="2025-05-27T04:27:51.931804317Z" level=info msg="received exit event container_id:\"192caf109a0889c12a7019a42e769df1c505564c189569c9f4ea0a62007fd9f1\" id:\"192caf109a0889c12a7019a42e769df1c505564c189569c9f4ea0a62007fd9f1\" pid:4796 exited_at:{seconds:1748320071 nanos:929923132}" May 27 04:27:51.942153 containerd[1585]: time="2025-05-27T04:27:51.942109145Z" level=info msg="StartContainer for \"192caf109a0889c12a7019a42e769df1c505564c189569c9f4ea0a62007fd9f1\" returns successfully" May 27 04:27:51.964909 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-192caf109a0889c12a7019a42e769df1c505564c189569c9f4ea0a62007fd9f1-rootfs.mount: Deactivated successfully. 
May 27 04:27:52.817314 containerd[1585]: time="2025-05-27T04:27:52.816854571Z" level=info msg="CreateContainer within sandbox \"4fd99156840be02830d7bc72c8e5d22dbb519a32952b6df7817e2425c8f4b3b5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 27 04:27:52.837534 containerd[1585]: time="2025-05-27T04:27:52.837458299Z" level=info msg="Container 6602c0ed68763f9e03e705949c6a292ed9930ada9b9b7db3f21e691bdaa2e3c6: CDI devices from CRI Config.CDIDevices: []" May 27 04:27:52.855096 containerd[1585]: time="2025-05-27T04:27:52.854980255Z" level=info msg="CreateContainer within sandbox \"4fd99156840be02830d7bc72c8e5d22dbb519a32952b6df7817e2425c8f4b3b5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6602c0ed68763f9e03e705949c6a292ed9930ada9b9b7db3f21e691bdaa2e3c6\"" May 27 04:27:52.857668 containerd[1585]: time="2025-05-27T04:27:52.857625976Z" level=info msg="StartContainer for \"6602c0ed68763f9e03e705949c6a292ed9930ada9b9b7db3f21e691bdaa2e3c6\"" May 27 04:27:52.859610 containerd[1585]: time="2025-05-27T04:27:52.859538449Z" level=info msg="connecting to shim 6602c0ed68763f9e03e705949c6a292ed9930ada9b9b7db3f21e691bdaa2e3c6" address="unix:///run/containerd/s/7f0aa7f8b183f86ff09ac94384699c04b6055fcbc51eca62164bea56d5c2c26d" protocol=ttrpc version=3 May 27 04:27:52.906990 systemd[1]: Started cri-containerd-6602c0ed68763f9e03e705949c6a292ed9930ada9b9b7db3f21e691bdaa2e3c6.scope - libcontainer container 6602c0ed68763f9e03e705949c6a292ed9930ada9b9b7db3f21e691bdaa2e3c6. May 27 04:27:52.973428 containerd[1585]: time="2025-05-27T04:27:52.973352218Z" level=info msg="StartContainer for \"6602c0ed68763f9e03e705949c6a292ed9930ada9b9b7db3f21e691bdaa2e3c6\" returns successfully" May 27 04:27:53.117537 containerd[1585]: time="2025-05-27T04:27:53.117314308Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6602c0ed68763f9e03e705949c6a292ed9930ada9b9b7db3f21e691bdaa2e3c6\" id:\"a9762d822d1860e9226d04a63fce8b45ab8eae59023121745e066a62c72b8752\" pid:4872 exited_at:{seconds:1748320073 nanos:116672753}" May 27 04:27:53.787162 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) May 27 04:27:53.866308 kubelet[2884]: I0527 04:27:53.866136 2884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9n6qk" podStartSLOduration=5.865996388 podStartE2EDuration="5.865996388s" podCreationTimestamp="2025-05-27 04:27:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 04:27:53.864748166 +0000 UTC m=+158.972529755" watchObservedRunningTime="2025-05-27 04:27:53.865996388 +0000 UTC m=+158.973777969" May 27 04:27:54.749019 containerd[1585]: time="2025-05-27T04:27:54.748954715Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6602c0ed68763f9e03e705949c6a292ed9930ada9b9b7db3f21e691bdaa2e3c6\" id:\"0b3195e09b7616218926cb42ec1c3a8a84c8b745c3256e3bd8804c5d28a6326d\" pid:4953 exit_status:1 exited_at:{seconds:1748320074 nanos:748146492}" May 27 04:27:56.921896 containerd[1585]: time="2025-05-27T04:27:56.921829892Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6602c0ed68763f9e03e705949c6a292ed9930ada9b9b7db3f21e691bdaa2e3c6\" id:\"e84d4cd26cd2748d218a2bb280184217ec657a16c5fd44761290ae023d1b5f7f\" pid:5238 exit_status:1 exited_at:{seconds:1748320076 nanos:921465849}" May 27 04:27:57.546202 systemd-networkd[1530]: lxc_health: Link UP May 27 04:27:57.547851 systemd-networkd[1530]: 
lxc_health: Gained carrier May 27 04:27:59.309610 systemd-networkd[1530]: lxc_health: Gained IPv6LL May 27 04:27:59.410811 containerd[1585]: time="2025-05-27T04:27:59.410740216Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6602c0ed68763f9e03e705949c6a292ed9930ada9b9b7db3f21e691bdaa2e3c6\" id:\"67dc79df795155f1f2bcb2ae43304a081e926f098ae699e37715ff3a707389c5\" pid:5435 exited_at:{seconds:1748320079 nanos:407213349}" May 27 04:28:01.591950 containerd[1585]: time="2025-05-27T04:28:01.591755762Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6602c0ed68763f9e03e705949c6a292ed9930ada9b9b7db3f21e691bdaa2e3c6\" id:\"1cd417c086622f6d811ce660e0858850ab87e9432c39b39fdb9a770eeb8f01eb\" pid:5462 exited_at:{seconds:1748320081 nanos:589369120}" May 27 04:28:03.858072 containerd[1585]: time="2025-05-27T04:28:03.858019686Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6602c0ed68763f9e03e705949c6a292ed9930ada9b9b7db3f21e691bdaa2e3c6\" id:\"98fc93d2fd18105fe286f0c7da41386aa1969f79745b022ab284ce37ae85e442\" pid:5495 exited_at:{seconds:1748320083 nanos:856868382}" May 27 04:28:04.033458 sshd[4794]: Connection closed by 139.178.68.195 port 54196 May 27 04:28:04.035085 sshd-session[4768]: pam_unix(sshd:session): session closed for user core May 27 04:28:04.049520 systemd[1]: sshd@27-10.244.19.66:22-139.178.68.195:54196.service: Deactivated successfully. May 27 04:28:04.052587 systemd[1]: session-30.scope: Deactivated successfully. May 27 04:28:04.054091 systemd-logind[1573]: Session 30 logged out. Waiting for processes to exit. May 27 04:28:04.057325 systemd-logind[1573]: Removed session 30.