Mar 14 01:20:59.052658 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 13 22:25:24 -00 2026
Mar 14 01:20:59.052694 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 01:20:59.052708 kernel: BIOS-provided physical RAM map:
Mar 14 01:20:59.052724 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 14 01:20:59.052735 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 14 01:20:59.052745 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 14 01:20:59.052757 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Mar 14 01:20:59.052767 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Mar 14 01:20:59.052778 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 14 01:20:59.052788 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 14 01:20:59.052799 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 14 01:20:59.052809 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 14 01:20:59.052824 kernel: NX (Execute Disable) protection: active
Mar 14 01:20:59.052835 kernel: APIC: Static calls initialized
Mar 14 01:20:59.052861 kernel: SMBIOS 2.8 present.
Mar 14 01:20:59.052874 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Mar 14 01:20:59.052885 kernel: Hypervisor detected: KVM
Mar 14 01:20:59.052902 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 14 01:20:59.052914 kernel: kvm-clock: using sched offset of 4711534690 cycles
Mar 14 01:20:59.052926 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 14 01:20:59.052938 kernel: tsc: Detected 2499.998 MHz processor
Mar 14 01:20:59.052950 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 14 01:20:59.052962 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 14 01:20:59.052974 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Mar 14 01:20:59.052985 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 14 01:20:59.052997 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 14 01:20:59.053013 kernel: Using GB pages for direct mapping
Mar 14 01:20:59.053025 kernel: ACPI: Early table checksum verification disabled
Mar 14 01:20:59.053037 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Mar 14 01:20:59.053048 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 01:20:59.053060 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 01:20:59.053072 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 01:20:59.053083 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Mar 14 01:20:59.053095 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 01:20:59.053106 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 01:20:59.053122 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 01:20:59.053134 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 01:20:59.053157 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Mar 14 01:20:59.053169 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Mar 14 01:20:59.053181 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Mar 14 01:20:59.053199 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Mar 14 01:20:59.053212 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Mar 14 01:20:59.053228 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Mar 14 01:20:59.053241 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Mar 14 01:20:59.053253 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 14 01:20:59.053265 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 14 01:20:59.053277 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Mar 14 01:20:59.053289 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Mar 14 01:20:59.053301 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Mar 14 01:20:59.053314 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Mar 14 01:20:59.053330 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Mar 14 01:20:59.053342 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Mar 14 01:20:59.053354 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Mar 14 01:20:59.053366 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Mar 14 01:20:59.053378 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Mar 14 01:20:59.053390 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Mar 14 01:20:59.053402 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Mar 14 01:20:59.055700 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Mar 14 01:20:59.055716 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Mar 14 01:20:59.055736 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Mar 14 01:20:59.055749 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 14 01:20:59.055761 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Mar 14 01:20:59.055785 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Mar 14 01:20:59.055797 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Mar 14 01:20:59.055809 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Mar 14 01:20:59.055822 kernel: Zone ranges:
Mar 14 01:20:59.055857 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 14 01:20:59.055870 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Mar 14 01:20:59.055888 kernel: Normal empty
Mar 14 01:20:59.055900 kernel: Movable zone start for each node
Mar 14 01:20:59.055913 kernel: Early memory node ranges
Mar 14 01:20:59.055925 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 14 01:20:59.055937 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Mar 14 01:20:59.055949 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Mar 14 01:20:59.055962 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 14 01:20:59.055974 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 14 01:20:59.055986 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Mar 14 01:20:59.055998 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 14 01:20:59.056015 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 14 01:20:59.056027 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 14 01:20:59.056040 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 14 01:20:59.056052 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 14 01:20:59.056064 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 14 01:20:59.056076 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 14 01:20:59.056088 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 14 01:20:59.056100 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 14 01:20:59.056112 kernel: TSC deadline timer available
Mar 14 01:20:59.056129 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Mar 14 01:20:59.056141 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 14 01:20:59.056153 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 14 01:20:59.056166 kernel: Booting paravirtualized kernel on KVM
Mar 14 01:20:59.056178 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 14 01:20:59.056191 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Mar 14 01:20:59.056203 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u262144
Mar 14 01:20:59.056216 kernel: pcpu-alloc: s196328 r8192 d28952 u262144 alloc=1*2097152
Mar 14 01:20:59.056228 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Mar 14 01:20:59.056244 kernel: kvm-guest: PV spinlocks enabled
Mar 14 01:20:59.056257 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 14 01:20:59.056270 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 01:20:59.056283 kernel: random: crng init done
Mar 14 01:20:59.056296 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 14 01:20:59.056308 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 14 01:20:59.056320 kernel: Fallback order for Node 0: 0
Mar 14 01:20:59.056332 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Mar 14 01:20:59.056349 kernel: Policy zone: DMA32
Mar 14 01:20:59.056362 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 14 01:20:59.056374 kernel: software IO TLB: area num 16.
Mar 14 01:20:59.056386 kernel: Memory: 1901592K/2096616K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 194764K reserved, 0K cma-reserved)
Mar 14 01:20:59.056399 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Mar 14 01:20:59.056429 kernel: Kernel/User page tables isolation: enabled
Mar 14 01:20:59.056445 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 14 01:20:59.056458 kernel: ftrace: allocated 149 pages with 4 groups
Mar 14 01:20:59.056470 kernel: Dynamic Preempt: voluntary
Mar 14 01:20:59.056488 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 14 01:20:59.056502 kernel: rcu: RCU event tracing is enabled.
Mar 14 01:20:59.056514 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Mar 14 01:20:59.056527 kernel: Trampoline variant of Tasks RCU enabled.
Mar 14 01:20:59.056540 kernel: Rude variant of Tasks RCU enabled.
Mar 14 01:20:59.056566 kernel: Tracing variant of Tasks RCU enabled.
Mar 14 01:20:59.056579 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 14 01:20:59.056592 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Mar 14 01:20:59.056605 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Mar 14 01:20:59.056618 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 14 01:20:59.056630 kernel: Console: colour VGA+ 80x25
Mar 14 01:20:59.056643 kernel: printk: console [tty0] enabled
Mar 14 01:20:59.056660 kernel: printk: console [ttyS0] enabled
Mar 14 01:20:59.056673 kernel: ACPI: Core revision 20230628
Mar 14 01:20:59.056686 kernel: APIC: Switch to symmetric I/O mode setup
Mar 14 01:20:59.056699 kernel: x2apic enabled
Mar 14 01:20:59.056711 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 14 01:20:59.056729 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Mar 14 01:20:59.056742 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Mar 14 01:20:59.056755 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 14 01:20:59.056768 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 14 01:20:59.056781 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 14 01:20:59.056793 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 14 01:20:59.056806 kernel: Spectre V2 : Mitigation: Retpolines
Mar 14 01:20:59.056818 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 14 01:20:59.056831 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Mar 14 01:20:59.056854 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 14 01:20:59.056874 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 14 01:20:59.056886 kernel: MDS: Mitigation: Clear CPU buffers
Mar 14 01:20:59.056899 kernel: MMIO Stale Data: Unknown: No mitigations
Mar 14 01:20:59.056912 kernel: SRBDS: Unknown: Dependent on hypervisor status
Mar 14 01:20:59.056924 kernel: active return thunk: its_return_thunk
Mar 14 01:20:59.056937 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 14 01:20:59.056950 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 14 01:20:59.056963 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 14 01:20:59.056975 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 14 01:20:59.056988 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 14 01:20:59.057000 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 14 01:20:59.057018 kernel: Freeing SMP alternatives memory: 32K
Mar 14 01:20:59.057031 kernel: pid_max: default: 32768 minimum: 301
Mar 14 01:20:59.057043 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 14 01:20:59.057056 kernel: landlock: Up and running.
Mar 14 01:20:59.057068 kernel: SELinux: Initializing.
Mar 14 01:20:59.057081 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 14 01:20:59.057094 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 14 01:20:59.057107 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Mar 14 01:20:59.057120 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 14 01:20:59.057133 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 14 01:20:59.057150 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 14 01:20:59.057164 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Mar 14 01:20:59.057177 kernel: signal: max sigframe size: 1776
Mar 14 01:20:59.057190 kernel: rcu: Hierarchical SRCU implementation.
Mar 14 01:20:59.057203 kernel: rcu: Max phase no-delay instances is 400.
Mar 14 01:20:59.057216 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 14 01:20:59.057229 kernel: smp: Bringing up secondary CPUs ...
Mar 14 01:20:59.057241 kernel: smpboot: x86: Booting SMP configuration:
Mar 14 01:20:59.057254 kernel: .... node #0, CPUs: #1
Mar 14 01:20:59.057271 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Mar 14 01:20:59.057284 kernel: smp: Brought up 1 node, 2 CPUs
Mar 14 01:20:59.057297 kernel: smpboot: Max logical packages: 16
Mar 14 01:20:59.057310 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Mar 14 01:20:59.057323 kernel: devtmpfs: initialized
Mar 14 01:20:59.057336 kernel: x86/mm: Memory block size: 128MB
Mar 14 01:20:59.057349 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 14 01:20:59.057362 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Mar 14 01:20:59.057375 kernel: pinctrl core: initialized pinctrl subsystem
Mar 14 01:20:59.057392 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 14 01:20:59.057405 kernel: audit: initializing netlink subsys (disabled)
Mar 14 01:20:59.058512 kernel: audit: type=2000 audit(1773451257.443:1): state=initialized audit_enabled=0 res=1
Mar 14 01:20:59.058529 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 14 01:20:59.058542 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 14 01:20:59.058555 kernel: cpuidle: using governor menu
Mar 14 01:20:59.058568 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 14 01:20:59.058581 kernel: dca service started, version 1.12.1
Mar 14 01:20:59.058594 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 14 01:20:59.058615 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 14 01:20:59.058628 kernel: PCI: Using configuration type 1 for base access
Mar 14 01:20:59.058641 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 14 01:20:59.058654 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 14 01:20:59.058667 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 14 01:20:59.058680 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 14 01:20:59.058693 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 14 01:20:59.058706 kernel: ACPI: Added _OSI(Module Device)
Mar 14 01:20:59.058719 kernel: ACPI: Added _OSI(Processor Device)
Mar 14 01:20:59.058737 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 14 01:20:59.058750 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 14 01:20:59.058763 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 14 01:20:59.058776 kernel: ACPI: Interpreter enabled
Mar 14 01:20:59.058789 kernel: ACPI: PM: (supports S0 S5)
Mar 14 01:20:59.058802 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 14 01:20:59.058814 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 14 01:20:59.058828 kernel: PCI: Using E820 reservations for host bridge windows
Mar 14 01:20:59.058840 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 14 01:20:59.058871 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 14 01:20:59.059146 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 14 01:20:59.059340 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 14 01:20:59.061022 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 14 01:20:59.061045 kernel: PCI host bridge to bus 0000:00
Mar 14 01:20:59.061245 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 14 01:20:59.061514 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 14 01:20:59.061700 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 14 01:20:59.061875 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Mar 14 01:20:59.062035 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 14 01:20:59.062198 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Mar 14 01:20:59.062360 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 14 01:20:59.062588 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 14 01:20:59.062804 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Mar 14 01:20:59.063001 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Mar 14 01:20:59.063199 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Mar 14 01:20:59.063378 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Mar 14 01:20:59.065094 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 14 01:20:59.065312 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 14 01:20:59.065538 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Mar 14 01:20:59.065756 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 14 01:20:59.065954 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Mar 14 01:20:59.066164 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 14 01:20:59.066347 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Mar 14 01:20:59.066575 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 14 01:20:59.066759 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Mar 14 01:20:59.066974 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 14 01:20:59.067156 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Mar 14 01:20:59.067355 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 14 01:20:59.067557 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Mar 14 01:20:59.067770 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 14 01:20:59.067965 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Mar 14 01:20:59.068165 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 14 01:20:59.068346 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Mar 14 01:20:59.069623 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 14 01:20:59.069823 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 14 01:20:59.070026 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Mar 14 01:20:59.070207 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Mar 14 01:20:59.070385 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Mar 14 01:20:59.070677 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Mar 14 01:20:59.070876 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Mar 14 01:20:59.071056 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Mar 14 01:20:59.071232 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Mar 14 01:20:59.076547 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 14 01:20:59.076761 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 14 01:20:59.076977 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 14 01:20:59.077167 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Mar 14 01:20:59.077345 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Mar 14 01:20:59.079589 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 14 01:20:59.079783 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 14 01:20:59.080002 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Mar 14 01:20:59.080200 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Mar 14 01:20:59.080407 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 14 01:20:59.081657 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 14 01:20:59.081868 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 14 01:20:59.082079 kernel: pci_bus 0000:02: extended config space not accessible
Mar 14 01:20:59.082300 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Mar 14 01:20:59.083544 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Mar 14 01:20:59.083744 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 14 01:20:59.083948 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 14 01:20:59.084155 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 14 01:20:59.084343 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Mar 14 01:20:59.085340 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 14 01:20:59.085583 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 14 01:20:59.085764 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 14 01:20:59.085975 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 14 01:20:59.086170 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Mar 14 01:20:59.086362 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 14 01:20:59.086564 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 14 01:20:59.086740 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 14 01:20:59.086934 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 14 01:20:59.087110 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 14 01:20:59.087287 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 14 01:20:59.090531 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 14 01:20:59.090713 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 14 01:20:59.090904 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 14 01:20:59.091088 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 14 01:20:59.091269 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 14 01:20:59.091508 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 14 01:20:59.091694 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 14 01:20:59.091881 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 14 01:20:59.092068 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 14 01:20:59.092249 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 14 01:20:59.094462 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 14 01:20:59.094663 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 14 01:20:59.094685 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 14 01:20:59.094698 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 14 01:20:59.094712 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 14 01:20:59.094725 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 14 01:20:59.094738 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 14 01:20:59.094760 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 14 01:20:59.094773 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 14 01:20:59.094786 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 14 01:20:59.094799 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 14 01:20:59.094812 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 14 01:20:59.094825 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 14 01:20:59.094838 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 14 01:20:59.094862 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 14 01:20:59.094875 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 14 01:20:59.094894 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 14 01:20:59.094907 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 14 01:20:59.094920 kernel: iommu: Default domain type: Translated
Mar 14 01:20:59.094933 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 14 01:20:59.094946 kernel: PCI: Using ACPI for IRQ routing
Mar 14 01:20:59.094959 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 14 01:20:59.094972 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 14 01:20:59.094985 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Mar 14 01:20:59.095176 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 14 01:20:59.095377 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 14 01:20:59.095615 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 14 01:20:59.095637 kernel: vgaarb: loaded
Mar 14 01:20:59.095650 kernel: clocksource: Switched to clocksource kvm-clock
Mar 14 01:20:59.095664 kernel: VFS: Disk quotas dquot_6.6.0
Mar 14 01:20:59.095677 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 14 01:20:59.095690 kernel: pnp: PnP ACPI init
Mar 14 01:20:59.095891 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 14 01:20:59.095921 kernel: pnp: PnP ACPI: found 5 devices
Mar 14 01:20:59.095934 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 14 01:20:59.095948 kernel: NET: Registered PF_INET protocol family
Mar 14 01:20:59.095961 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 14 01:20:59.095974 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 14 01:20:59.095988 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 14 01:20:59.096001 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 14 01:20:59.096014 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 14 01:20:59.096032 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 14 01:20:59.096046 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 14 01:20:59.096059 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 14 01:20:59.096072 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 14 01:20:59.096085 kernel: NET: Registered PF_XDP protocol family
Mar 14 01:20:59.096260 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Mar 14 01:20:59.098474 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Mar 14 01:20:59.098662 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Mar 14 01:20:59.098865 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Mar 14 01:20:59.099045 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Mar 14 01:20:59.099221 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 14 01:20:59.099397 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 14 01:20:59.099600 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 14 01:20:59.099778 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Mar 14 01:20:59.099976 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Mar 14 01:20:59.100152 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Mar 14 01:20:59.100329 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Mar 14 01:20:59.102548 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Mar 14 01:20:59.102733 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Mar 14 01:20:59.102925 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Mar 14 01:20:59.103102 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Mar 14 01:20:59.103308 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 14 01:20:59.103576 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 14 01:20:59.103753 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 14 01:20:59.103944 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Mar 14 01:20:59.104120 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 14 01:20:59.104294 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 14 01:20:59.105517 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 14 01:20:59.105698 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Mar 14 01:20:59.105886 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 14 01:20:59.106062 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 14 01:20:59.106249 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 14 01:20:59.106455 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Mar 14 01:20:59.106634 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 14 01:20:59.106820 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 14 01:20:59.107007 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 14 01:20:59.107190 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Mar 14 01:20:59.107366 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 14 01:20:59.110582 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 14 01:20:59.110765 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 14 01:20:59.110955 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Mar 14 01:20:59.111132 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 14 01:20:59.111309 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 14 01:20:59.111520 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 14 01:20:59.111703 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Mar 14 01:20:59.111916 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 14 01:20:59.112109 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 14 01:20:59.112302 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 14 01:20:59.114534 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Mar 14 01:20:59.114731 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 14 01:20:59.114947 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 14 01:20:59.115141 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 14 01:20:59.115334 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Mar 14 01:20:59.115543 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 14 01:20:59.115737 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 14 01:20:59.115934 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 14 01:20:59.116219 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 14 01:20:59.117540 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 14 01:20:59.117735 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Mar 14 01:20:59.117943 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 14 01:20:59.118122 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Mar 14 01:20:59.118322 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Mar 14 01:20:59.119556 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Mar 14 01:20:59.119731 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 14 01:20:59.119931 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Mar 14 01:20:59.120113 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Mar 14 01:20:59.120291 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Mar 14 01:20:59.121538 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 14 01:20:59.121723 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Mar 14 01:20:59.121904 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Mar 14 01:20:59.122073 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 14 01:20:59.122252 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Mar 14 01:20:59.122458 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Mar 14 01:20:59.122630 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 14 01:20:59.122826 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Mar 14 01:20:59.123010 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Mar 14 01:20:59.123179 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 14 01:20:59.123357 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Mar 14 01:20:59.123570 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Mar 14 01:20:59.123747 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 14 01:20:59.123938 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Mar 14 01:20:59.124107 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Mar 14 01:20:59.124272 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 14 01:20:59.124475 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Mar 14 01:20:59.124645 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Mar 14 01:20:59.124809 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 14 01:20:59.124838 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 14 01:20:59.124865 kernel: PCI: CLS 0 bytes, default 64
Mar 14 01:20:59.124879 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 14 01:20:59.124893 kernel: software IO TLB: mapped [mem 
0x0000000071000000-0x0000000075000000] (64MB) Mar 14 01:20:59.124907 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 14 01:20:59.124921 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Mar 14 01:20:59.124935 kernel: Initialise system trusted keyrings Mar 14 01:20:59.124949 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Mar 14 01:20:59.124968 kernel: Key type asymmetric registered Mar 14 01:20:59.124982 kernel: Asymmetric key parser 'x509' registered Mar 14 01:20:59.124995 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 14 01:20:59.125009 kernel: io scheduler mq-deadline registered Mar 14 01:20:59.125023 kernel: io scheduler kyber registered Mar 14 01:20:59.125036 kernel: io scheduler bfq registered Mar 14 01:20:59.125217 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Mar 14 01:20:59.125398 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Mar 14 01:20:59.125603 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 01:20:59.125791 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Mar 14 01:20:59.125981 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Mar 14 01:20:59.126157 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 01:20:59.126335 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Mar 14 01:20:59.126541 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Mar 14 01:20:59.126719 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 01:20:59.126921 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Mar 14 01:20:59.127099 kernel: pcieport 0000:00:02.3: AER: enabled 
with IRQ 27 Mar 14 01:20:59.127277 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 01:20:59.127510 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Mar 14 01:20:59.127689 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Mar 14 01:20:59.127877 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 01:20:59.128065 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Mar 14 01:20:59.128241 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Mar 14 01:20:59.128438 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 01:20:59.128623 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Mar 14 01:20:59.128798 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Mar 14 01:20:59.128993 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 01:20:59.129182 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Mar 14 01:20:59.129360 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Mar 14 01:20:59.129588 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 01:20:59.129611 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 14 01:20:59.129626 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 14 01:20:59.129640 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 14 01:20:59.129654 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 14 01:20:59.129675 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 14 01:20:59.129689 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 
0x60,0x64 irq 1,12 Mar 14 01:20:59.129703 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 14 01:20:59.129717 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 14 01:20:59.129912 kernel: rtc_cmos 00:03: RTC can wake from S4 Mar 14 01:20:59.129935 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 14 01:20:59.130096 kernel: rtc_cmos 00:03: registered as rtc0 Mar 14 01:20:59.130262 kernel: rtc_cmos 00:03: setting system clock to 2026-03-14T01:20:58 UTC (1773451258) Mar 14 01:20:59.130463 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Mar 14 01:20:59.130485 kernel: intel_pstate: CPU model not supported Mar 14 01:20:59.130499 kernel: NET: Registered PF_INET6 protocol family Mar 14 01:20:59.130513 kernel: Segment Routing with IPv6 Mar 14 01:20:59.130526 kernel: In-situ OAM (IOAM) with IPv6 Mar 14 01:20:59.130541 kernel: NET: Registered PF_PACKET protocol family Mar 14 01:20:59.130554 kernel: Key type dns_resolver registered Mar 14 01:20:59.130568 kernel: IPI shorthand broadcast: enabled Mar 14 01:20:59.130581 kernel: sched_clock: Marking stable (1302004232, 232389241)->(1666046033, -131652560) Mar 14 01:20:59.130603 kernel: registered taskstats version 1 Mar 14 01:20:59.130617 kernel: Loading compiled-in X.509 certificates Mar 14 01:20:59.130631 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: a10808ddb7a43f470807cfbbb5be2c08229c2dec' Mar 14 01:20:59.130645 kernel: Key type .fscrypt registered Mar 14 01:20:59.130658 kernel: Key type fscrypt-provisioning registered Mar 14 01:20:59.130671 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 14 01:20:59.130685 kernel: ima: Allocated hash algorithm: sha1 Mar 14 01:20:59.130698 kernel: ima: No architecture policies found Mar 14 01:20:59.130712 kernel: clk: Disabling unused clocks Mar 14 01:20:59.130731 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 14 01:20:59.130745 kernel: Write protecting the kernel read-only data: 36864k Mar 14 01:20:59.130758 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 14 01:20:59.130772 kernel: Run /init as init process Mar 14 01:20:59.130785 kernel: with arguments: Mar 14 01:20:59.130799 kernel: /init Mar 14 01:20:59.130812 kernel: with environment: Mar 14 01:20:59.130825 kernel: HOME=/ Mar 14 01:20:59.130839 kernel: TERM=linux Mar 14 01:20:59.130874 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 14 01:20:59.130892 systemd[1]: Detected virtualization kvm. Mar 14 01:20:59.130907 systemd[1]: Detected architecture x86-64. Mar 14 01:20:59.130921 systemd[1]: Running in initrd. Mar 14 01:20:59.130935 systemd[1]: No hostname configured, using default hostname. Mar 14 01:20:59.130949 systemd[1]: Hostname set to . Mar 14 01:20:59.130964 systemd[1]: Initializing machine ID from VM UUID. Mar 14 01:20:59.130983 systemd[1]: Queued start job for default target initrd.target. Mar 14 01:20:59.130998 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 14 01:20:59.131013 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 14 01:20:59.131028 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Mar 14 01:20:59.131043 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 14 01:20:59.131058 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 14 01:20:59.131073 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 14 01:20:59.131094 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 14 01:20:59.131110 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 14 01:20:59.131124 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 14 01:20:59.131139 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 14 01:20:59.131154 systemd[1]: Reached target paths.target - Path Units. Mar 14 01:20:59.131174 systemd[1]: Reached target slices.target - Slice Units. Mar 14 01:20:59.131189 systemd[1]: Reached target swap.target - Swaps. Mar 14 01:20:59.131203 systemd[1]: Reached target timers.target - Timer Units. Mar 14 01:20:59.131223 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 14 01:20:59.131238 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 14 01:20:59.131252 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 14 01:20:59.131267 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 14 01:20:59.131282 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 14 01:20:59.131297 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 14 01:20:59.131311 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 14 01:20:59.131326 systemd[1]: Reached target sockets.target - Socket Units. 
Mar 14 01:20:59.131340 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 14 01:20:59.131361 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 14 01:20:59.131375 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 14 01:20:59.131390 systemd[1]: Starting systemd-fsck-usr.service... Mar 14 01:20:59.131404 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 14 01:20:59.131460 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 14 01:20:59.131477 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 01:20:59.131540 systemd-journald[203]: Collecting audit messages is disabled. Mar 14 01:20:59.131581 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 14 01:20:59.131596 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 14 01:20:59.131611 systemd[1]: Finished systemd-fsck-usr.service. Mar 14 01:20:59.131632 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 14 01:20:59.131647 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 14 01:20:59.131661 kernel: Bridge firewalling registered Mar 14 01:20:59.131676 systemd-journald[203]: Journal started Mar 14 01:20:59.131709 systemd-journald[203]: Runtime Journal (/run/log/journal/29987c58ad844c5b940e5899a24877d2) is 4.7M, max 38.0M, 33.2M free. Mar 14 01:20:59.081563 systemd-modules-load[204]: Inserted module 'overlay' Mar 14 01:20:59.119490 systemd-modules-load[204]: Inserted module 'br_netfilter' Mar 14 01:20:59.172733 systemd[1]: Started systemd-journald.service - Journal Service. Mar 14 01:20:59.174170 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Mar 14 01:20:59.175218 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 01:20:59.176862 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 14 01:20:59.185659 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 14 01:20:59.197719 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 14 01:20:59.204592 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 14 01:20:59.220690 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 14 01:20:59.224658 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 14 01:20:59.229799 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 01:20:59.239587 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 14 01:20:59.241290 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 14 01:20:59.243068 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 01:20:59.249670 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 14 01:20:59.261953 dracut-cmdline[233]: dracut-dracut-053 Mar 14 01:20:59.270131 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7 Mar 14 01:20:59.300997 systemd-resolved[237]: Positive Trust Anchors: Mar 14 01:20:59.301013 systemd-resolved[237]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 14 01:20:59.301064 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 14 01:20:59.310509 systemd-resolved[237]: Defaulting to hostname 'linux'. Mar 14 01:20:59.313310 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 14 01:20:59.314500 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 14 01:20:59.375467 kernel: SCSI subsystem initialized Mar 14 01:20:59.387481 kernel: Loading iSCSI transport class v2.0-870. Mar 14 01:20:59.400435 kernel: iscsi: registered transport (tcp) Mar 14 01:20:59.427489 kernel: iscsi: registered transport (qla4xxx) Mar 14 01:20:59.427583 kernel: QLogic iSCSI HBA Driver Mar 14 01:20:59.482591 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 14 01:20:59.497644 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 14 01:20:59.543515 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 14 01:20:59.543613 kernel: device-mapper: uevent: version 1.0.3 Mar 14 01:20:59.547464 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 14 01:20:59.596481 kernel: raid6: sse2x4 gen() 7832 MB/s Mar 14 01:20:59.614480 kernel: raid6: sse2x2 gen() 5497 MB/s Mar 14 01:20:59.633082 kernel: raid6: sse2x1 gen() 5424 MB/s Mar 14 01:20:59.633153 kernel: raid6: using algorithm sse2x4 gen() 7832 MB/s Mar 14 01:20:59.652075 kernel: raid6: .... xor() 5052 MB/s, rmw enabled Mar 14 01:20:59.652148 kernel: raid6: using ssse3x2 recovery algorithm Mar 14 01:20:59.678482 kernel: xor: automatically using best checksumming function avx Mar 14 01:20:59.877464 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 14 01:20:59.893096 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 14 01:20:59.901707 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 14 01:20:59.921898 systemd-udevd[420]: Using default interface naming scheme 'v255'. Mar 14 01:20:59.929307 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 01:20:59.936646 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 14 01:20:59.961826 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation Mar 14 01:21:00.003119 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 14 01:21:00.014710 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 14 01:21:00.138189 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 14 01:21:00.145605 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 14 01:21:00.176287 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 14 01:21:00.178337 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Mar 14 01:21:00.180830 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 14 01:21:00.183063 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 14 01:21:00.192646 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 14 01:21:00.214142 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 14 01:21:00.268461 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Mar 14 01:21:00.280596 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Mar 14 01:21:00.294440 kernel: cryptd: max_cpu_qlen set to 1000 Mar 14 01:21:00.311728 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 14 01:21:00.311790 kernel: GPT:17805311 != 125829119 Mar 14 01:21:00.311822 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 14 01:21:00.311842 kernel: GPT:17805311 != 125829119 Mar 14 01:21:00.311872 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 14 01:21:00.311891 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 14 01:21:00.336443 kernel: ACPI: bus type USB registered Mar 14 01:21:00.337918 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 14 01:21:00.339739 kernel: usbcore: registered new interface driver usbfs Mar 14 01:21:00.338114 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 01:21:00.347867 kernel: usbcore: registered new interface driver hub Mar 14 01:21:00.347894 kernel: usbcore: registered new device driver usb Mar 14 01:21:00.347640 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 14 01:21:00.348789 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 14 01:21:00.349575 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 01:21:00.351560 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 14 01:21:00.359797 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 01:21:00.384440 kernel: BTRFS: device fsid cd4a88d6-c21b-44c8-aac6-68c13cee1def devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (482) Mar 14 01:21:00.410449 kernel: libata version 3.00 loaded. Mar 14 01:21:00.418510 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 14 01:21:00.418831 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Mar 14 01:21:00.419495 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 14 01:21:00.423543 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 14 01:21:00.423791 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Mar 14 01:21:00.424037 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Mar 14 01:21:00.425453 kernel: hub 1-0:1.0: USB hub found Mar 14 01:21:00.425717 kernel: hub 1-0:1.0: 4 ports detected Mar 14 01:21:00.425949 kernel: AVX version of gcm_enc/dec engaged. Mar 14 01:21:00.425970 kernel: AES CTR mode by8 optimization enabled Mar 14 01:21:00.426439 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Mar 14 01:21:00.426695 kernel: hub 2-0:1.0: USB hub found Mar 14 01:21:00.426942 kernel: hub 2-0:1.0: 4 ports detected Mar 14 01:21:00.428432 kernel: ahci 0000:00:1f.2: version 3.0 Mar 14 01:21:00.429725 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Mar 14 01:21:00.552951 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 14 01:21:00.552996 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (469) Mar 14 01:21:00.553016 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 14 01:21:00.553356 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 14 01:21:00.553647 kernel: scsi host0: ahci Mar 14 01:21:00.553910 kernel: scsi host1: ahci Mar 14 01:21:00.554120 kernel: scsi host2: ahci Mar 14 01:21:00.554334 kernel: scsi host3: ahci Mar 14 01:21:00.554575 kernel: scsi host4: ahci Mar 14 01:21:00.554798 kernel: scsi host5: ahci Mar 14 01:21:00.555015 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Mar 14 01:21:00.555044 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Mar 14 01:21:00.555063 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Mar 14 01:21:00.555081 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Mar 14 01:21:00.555099 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Mar 14 01:21:00.555117 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Mar 14 01:21:00.554307 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 14 01:21:00.555613 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 01:21:00.565169 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 14 01:21:00.578056 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 14 01:21:00.585309 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 14 01:21:00.593718 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Mar 14 01:21:00.598636 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 14 01:21:00.606618 disk-uuid[556]: Primary Header is updated. Mar 14 01:21:00.606618 disk-uuid[556]: Secondary Entries is updated. Mar 14 01:21:00.606618 disk-uuid[556]: Secondary Header is updated. Mar 14 01:21:00.612453 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 14 01:21:00.622467 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 14 01:21:00.629897 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 01:21:00.633382 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 14 01:21:00.664604 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 14 01:21:00.766475 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 14 01:21:00.774244 kernel: ata3: SATA link down (SStatus 0 SControl 300) Mar 14 01:21:00.774293 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 14 01:21:00.774432 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 14 01:21:00.778460 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 14 01:21:00.780515 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 14 01:21:00.838449 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 14 01:21:00.849679 kernel: usbcore: registered new interface driver usbhid Mar 14 01:21:00.849779 kernel: usbhid: USB HID core driver Mar 14 01:21:00.859481 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Mar 14 01:21:00.864439 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Mar 14 01:21:01.627454 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 14 01:21:01.629488 disk-uuid[557]: The operation has completed successfully. Mar 14 01:21:01.679131 systemd[1]: disk-uuid.service: Deactivated successfully. 
Mar 14 01:21:01.679408 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 14 01:21:01.710645 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 14 01:21:01.725840 sh[586]: Success Mar 14 01:21:01.744445 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Mar 14 01:21:01.820642 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 14 01:21:01.823594 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 14 01:21:01.827362 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 14 01:21:01.854637 kernel: BTRFS info (device dm-0): first mount of filesystem cd4a88d6-c21b-44c8-aac6-68c13cee1def Mar 14 01:21:01.854713 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 14 01:21:01.856698 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 14 01:21:01.858870 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 14 01:21:01.860525 kernel: BTRFS info (device dm-0): using free space tree Mar 14 01:21:01.878606 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 14 01:21:01.880266 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 14 01:21:01.886644 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 14 01:21:01.889603 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Mar 14 01:21:01.906904 kernel: BTRFS info (device vda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 01:21:01.906977 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 14 01:21:01.908686 kernel: BTRFS info (device vda6): using free space tree Mar 14 01:21:01.915682 kernel: BTRFS info (device vda6): auto enabling async discard Mar 14 01:21:01.929972 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 14 01:21:01.933341 kernel: BTRFS info (device vda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a Mar 14 01:21:01.939548 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 14 01:21:01.947672 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 14 01:21:02.089409 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 14 01:21:02.097402 ignition[678]: Ignition 2.19.0 Mar 14 01:21:02.097454 ignition[678]: Stage: fetch-offline Mar 14 01:21:02.100695 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 14 01:21:02.097540 ignition[678]: no configs at "/usr/lib/ignition/base.d" Mar 14 01:21:02.102528 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Mar 14 01:21:02.097566 ignition[678]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 14 01:21:02.097777 ignition[678]: parsed url from cmdline: "" Mar 14 01:21:02.097784 ignition[678]: no config URL provided Mar 14 01:21:02.097795 ignition[678]: reading system config file "/usr/lib/ignition/user.ign" Mar 14 01:21:02.097812 ignition[678]: no config at "/usr/lib/ignition/user.ign" Mar 14 01:21:02.097821 ignition[678]: failed to fetch config: resource requires networking Mar 14 01:21:02.098389 ignition[678]: Ignition finished successfully Mar 14 01:21:02.141777 systemd-networkd[773]: lo: Link UP Mar 14 01:21:02.141795 systemd-networkd[773]: lo: Gained carrier Mar 14 01:21:02.144462 systemd-networkd[773]: Enumeration completed Mar 14 01:21:02.145110 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 01:21:02.145116 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 14 01:21:02.146857 systemd-networkd[773]: eth0: Link UP Mar 14 01:21:02.146863 systemd-networkd[773]: eth0: Gained carrier Mar 14 01:21:02.146875 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 01:21:02.147639 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 14 01:21:02.151802 systemd[1]: Reached target network.target - Network. Mar 14 01:21:02.159663 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 14 01:21:02.175794 systemd-networkd[773]: eth0: DHCPv4 address 10.230.8.14/30, gateway 10.230.8.13 acquired from 10.230.8.13
Mar 14 01:21:02.184095 ignition[776]: Ignition 2.19.0
Mar 14 01:21:02.184121 ignition[776]: Stage: fetch
Mar 14 01:21:02.184483 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Mar 14 01:21:02.184519 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 14 01:21:02.184665 ignition[776]: parsed url from cmdline: ""
Mar 14 01:21:02.184672 ignition[776]: no config URL provided
Mar 14 01:21:02.184682 ignition[776]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 01:21:02.184705 ignition[776]: no config at "/usr/lib/ignition/user.ign"
Mar 14 01:21:02.184947 ignition[776]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Mar 14 01:21:02.185899 ignition[776]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Mar 14 01:21:02.185926 ignition[776]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Mar 14 01:21:02.200662 ignition[776]: GET result: OK
Mar 14 01:21:02.200807 ignition[776]: parsing config with SHA512: 9d5d6477b300eba2c6865f5bd77e26bfad9b170fdd45e1cd9136b013c3c7e90332f6b8f5629e92666855b652decd98d948e65c86c7844d668220bb3428c64420
Mar 14 01:21:02.205791 unknown[776]: fetched base config from "system"
Mar 14 01:21:02.205808 unknown[776]: fetched base config from "system"
Mar 14 01:21:02.206523 ignition[776]: fetch: fetch complete
Mar 14 01:21:02.205818 unknown[776]: fetched user config from "openstack"
Mar 14 01:21:02.206532 ignition[776]: fetch: fetch passed
Mar 14 01:21:02.206598 ignition[776]: Ignition finished successfully
Mar 14 01:21:02.209512 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 14 01:21:02.218697 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 14 01:21:02.237665 ignition[783]: Ignition 2.19.0
Mar 14 01:21:02.237687 ignition[783]: Stage: kargs
Mar 14 01:21:02.237949 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Mar 14 01:21:02.237971 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 14 01:21:02.240765 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 14 01:21:02.239233 ignition[783]: kargs: kargs passed
Mar 14 01:21:02.239305 ignition[783]: Ignition finished successfully
Mar 14 01:21:02.248640 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 14 01:21:02.271011 ignition[789]: Ignition 2.19.0
Mar 14 01:21:02.271033 ignition[789]: Stage: disks
Mar 14 01:21:02.271281 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Mar 14 01:21:02.274835 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 14 01:21:02.271302 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 14 01:21:02.277103 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 14 01:21:02.272511 ignition[789]: disks: disks passed
Mar 14 01:21:02.278536 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 14 01:21:02.272582 ignition[789]: Ignition finished successfully
Mar 14 01:21:02.279662 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 01:21:02.281192 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 01:21:02.282468 systemd[1]: Reached target basic.target - Basic System.
Mar 14 01:21:02.290656 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 14 01:21:02.317580 systemd-fsck[797]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 14 01:21:02.322124 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 14 01:21:02.328562 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 14 01:21:02.450498 kernel: EXT4-fs (vda9): mounted filesystem 08e1a4ba-bbe3-4d29-aaf8-5eb22e9a9bf3 r/w with ordered data mode. Quota mode: none.
Mar 14 01:21:02.451338 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 14 01:21:02.452798 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 14 01:21:02.466583 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 01:21:02.470505 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 14 01:21:02.472724 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 14 01:21:02.480660 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Mar 14 01:21:02.489442 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (805)
Mar 14 01:21:02.489479 kernel: BTRFS info (device vda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 01:21:02.489500 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 01:21:02.489518 kernel: BTRFS info (device vda6): using free space tree
Mar 14 01:21:02.483522 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 14 01:21:02.483571 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 01:21:02.495667 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 14 01:21:02.500197 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 01:21:02.501136 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 14 01:21:02.516905 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 14 01:21:02.584824 initrd-setup-root[832]: cut: /sysroot/etc/passwd: No such file or directory
Mar 14 01:21:02.594432 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory
Mar 14 01:21:02.602882 initrd-setup-root[847]: cut: /sysroot/etc/shadow: No such file or directory
Mar 14 01:21:02.613434 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 14 01:21:02.715573 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 14 01:21:02.721534 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 14 01:21:02.724037 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 14 01:21:02.739449 kernel: BTRFS info (device vda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 01:21:02.764510 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 14 01:21:02.774811 ignition[922]: INFO : Ignition 2.19.0
Mar 14 01:21:02.777487 ignition[922]: INFO : Stage: mount
Mar 14 01:21:02.777487 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 01:21:02.777487 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 14 01:21:02.777487 ignition[922]: INFO : mount: mount passed
Mar 14 01:21:02.777487 ignition[922]: INFO : Ignition finished successfully
Mar 14 01:21:02.780736 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 14 01:21:02.852897 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 14 01:21:04.043745 systemd-networkd[773]: eth0: Gained IPv6LL
Mar 14 01:21:05.550406 systemd-networkd[773]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8203:24:19ff:fee6:80e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8203:24:19ff:fee6:80e/64 assigned by NDisc.
Mar 14 01:21:05.550445 systemd-networkd[773]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Mar 14 01:21:09.661309 coreos-metadata[807]: Mar 14 01:21:09.661 WARN failed to locate config-drive, using the metadata service API instead
Mar 14 01:21:09.685663 coreos-metadata[807]: Mar 14 01:21:09.685 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Mar 14 01:21:09.702587 coreos-metadata[807]: Mar 14 01:21:09.701 INFO Fetch successful
Mar 14 01:21:09.703640 coreos-metadata[807]: Mar 14 01:21:09.703 INFO wrote hostname srv-ouubu.gb1.brightbox.com to /sysroot/etc/hostname
Mar 14 01:21:09.707360 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Mar 14 01:21:09.707582 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Mar 14 01:21:09.714531 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 14 01:21:09.742680 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 01:21:09.758439 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (939)
Mar 14 01:21:09.758500 kernel: BTRFS info (device vda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 01:21:09.760450 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 01:21:09.763315 kernel: BTRFS info (device vda6): using free space tree
Mar 14 01:21:09.767443 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 14 01:21:09.771039 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 01:21:09.801204 ignition[956]: INFO : Ignition 2.19.0
Mar 14 01:21:09.803432 ignition[956]: INFO : Stage: files
Mar 14 01:21:09.803432 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 01:21:09.803432 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 14 01:21:09.806279 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Mar 14 01:21:09.807797 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 14 01:21:09.807797 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 14 01:21:09.812366 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 14 01:21:09.813426 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 14 01:21:09.814398 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 14 01:21:09.814349 unknown[956]: wrote ssh authorized keys file for user: core
Mar 14 01:21:09.816753 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 01:21:09.818010 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 14 01:21:09.983776 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 14 01:21:10.385718 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 01:21:10.385718 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 14 01:21:10.388670 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 14 01:21:10.687686 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 14 01:21:11.105615 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 14 01:21:11.105615 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 14 01:21:11.108979 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 14 01:21:11.108979 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 01:21:11.108979 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 01:21:11.108979 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 01:21:11.108979 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 01:21:11.108979 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 01:21:11.108979 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 01:21:11.108979 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 01:21:11.108979 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 01:21:11.108979 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 14 01:21:11.108979 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 14 01:21:11.108979 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 14 01:21:11.108979 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 14 01:21:11.515199 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 14 01:21:15.274380 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 14 01:21:15.274380 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 14 01:21:15.280074 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 01:21:15.280074 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 01:21:15.280074 ignition[956]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 14 01:21:15.280074 ignition[956]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 14 01:21:15.280074 ignition[956]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 14 01:21:15.280074 ignition[956]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 01:21:15.280074 ignition[956]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 01:21:15.280074 ignition[956]: INFO : files: files passed
Mar 14 01:21:15.280074 ignition[956]: INFO : Ignition finished successfully
Mar 14 01:21:15.282552 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 14 01:21:15.294738 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 14 01:21:15.302664 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 14 01:21:15.310042 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 14 01:21:15.310372 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 14 01:21:15.320524 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 01:21:15.323293 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 01:21:15.324371 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 01:21:15.327018 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 01:21:15.328685 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 14 01:21:15.345701 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 14 01:21:15.391241 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 14 01:21:15.392473 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 14 01:21:15.393850 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 14 01:21:15.395234 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 14 01:21:15.396905 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 14 01:21:15.410070 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 14 01:21:15.428067 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 01:21:15.436674 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 14 01:21:15.450677 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 14 01:21:15.451603 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 01:21:15.453280 systemd[1]: Stopped target timers.target - Timer Units.
Mar 14 01:21:15.454841 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 14 01:21:15.455007 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 01:21:15.456952 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 14 01:21:15.458008 systemd[1]: Stopped target basic.target - Basic System.
Mar 14 01:21:15.459582 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 14 01:21:15.460949 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 01:21:15.462486 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 14 01:21:15.464004 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 14 01:21:15.465658 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 01:21:15.467226 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 14 01:21:15.468758 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 14 01:21:15.470305 systemd[1]: Stopped target swap.target - Swaps.
Mar 14 01:21:15.471722 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 14 01:21:15.471904 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 01:21:15.473593 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 14 01:21:15.474553 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 01:21:15.475985 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 14 01:21:15.476163 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 01:21:15.477717 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 14 01:21:15.477892 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 14 01:21:15.480084 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 14 01:21:15.480299 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 01:21:15.481494 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 14 01:21:15.481724 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 14 01:21:15.488682 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 14 01:21:15.491532 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 14 01:21:15.494113 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 14 01:21:15.494298 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 01:21:15.498110 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 14 01:21:15.499248 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 01:21:15.513852 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 14 01:21:15.514017 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 14 01:21:15.521911 ignition[1009]: INFO : Ignition 2.19.0
Mar 14 01:21:15.521911 ignition[1009]: INFO : Stage: umount
Mar 14 01:21:15.521911 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 01:21:15.521911 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 14 01:21:15.526766 ignition[1009]: INFO : umount: umount passed
Mar 14 01:21:15.526766 ignition[1009]: INFO : Ignition finished successfully
Mar 14 01:21:15.525485 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 14 01:21:15.527481 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 14 01:21:15.528823 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 14 01:21:15.528979 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 14 01:21:15.531901 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 14 01:21:15.532007 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 14 01:21:15.533986 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 14 01:21:15.534064 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 14 01:21:15.535554 systemd[1]: Stopped target network.target - Network.
Mar 14 01:21:15.536229 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 14 01:21:15.536335 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 01:21:15.538255 systemd[1]: Stopped target paths.target - Path Units.
Mar 14 01:21:15.540502 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 14 01:21:15.544610 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 01:21:15.545717 systemd[1]: Stopped target slices.target - Slice Units.
Mar 14 01:21:15.547085 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 14 01:21:15.548763 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 14 01:21:15.548829 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 01:21:15.550437 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 14 01:21:15.550505 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 01:21:15.551830 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 14 01:21:15.551913 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 14 01:21:15.553305 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 14 01:21:15.553386 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 14 01:21:15.554940 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 14 01:21:15.557971 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 14 01:21:15.558654 systemd-networkd[773]: eth0: DHCPv6 lease lost
Mar 14 01:21:15.565679 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 14 01:21:15.566713 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 14 01:21:15.568342 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 14 01:21:15.569721 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 14 01:21:15.569906 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 14 01:21:15.573080 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 14 01:21:15.573285 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 14 01:21:15.575944 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 14 01:21:15.576094 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 01:21:15.577290 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 14 01:21:15.577398 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 14 01:21:15.590141 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 14 01:21:15.590895 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 14 01:21:15.590975 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 01:21:15.592142 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 14 01:21:15.592211 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 14 01:21:15.593928 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 14 01:21:15.594010 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 14 01:21:15.595571 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 14 01:21:15.595640 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 01:21:15.597612 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 01:21:15.609815 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 14 01:21:15.610579 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 01:21:15.612381 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 14 01:21:15.612704 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 14 01:21:15.615324 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 14 01:21:15.615857 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 14 01:21:15.616664 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 14 01:21:15.616719 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 01:21:15.618138 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 14 01:21:15.618207 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 01:21:15.620458 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 14 01:21:15.620592 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 14 01:21:15.622684 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 01:21:15.622786 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 01:21:15.635322 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 14 01:21:15.637673 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 14 01:21:15.637792 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 01:21:15.641132 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 14 01:21:15.641229 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 01:21:15.642104 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 14 01:21:15.642180 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 01:21:15.643901 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 01:21:15.643992 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 01:21:15.648911 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 14 01:21:15.649880 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 14 01:21:15.652017 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 14 01:21:15.659632 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 14 01:21:15.671012 systemd[1]: Switching root.
Mar 14 01:21:15.707113 systemd-journald[203]: Journal stopped
Mar 14 01:21:17.300305 systemd-journald[203]: Received SIGTERM from PID 1 (systemd).
Mar 14 01:21:17.300551 kernel: SELinux: policy capability network_peer_controls=1
Mar 14 01:21:17.300588 kernel: SELinux: policy capability open_perms=1
Mar 14 01:21:17.300615 kernel: SELinux: policy capability extended_socket_class=1
Mar 14 01:21:17.300648 kernel: SELinux: policy capability always_check_network=0
Mar 14 01:21:17.300682 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 14 01:21:17.300710 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 14 01:21:17.300730 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 14 01:21:17.300754 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 14 01:21:17.300773 kernel: audit: type=1403 audit(1773451276.032:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 14 01:21:17.300814 systemd[1]: Successfully loaded SELinux policy in 58.843ms.
Mar 14 01:21:17.300859 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.265ms.
Mar 14 01:21:17.300889 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 01:21:17.300927 systemd[1]: Detected virtualization kvm.
Mar 14 01:21:17.300950 systemd[1]: Detected architecture x86-64.
Mar 14 01:21:17.300971 systemd[1]: Detected first boot.
Mar 14 01:21:17.300998 systemd[1]: Hostname set to .
Mar 14 01:21:17.301025 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 01:21:17.301052 zram_generator::config[1052]: No configuration found.
Mar 14 01:21:17.301075 systemd[1]: Populated /etc with preset unit settings.
Mar 14 01:21:17.301095 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 14 01:21:17.301131 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 14 01:21:17.301154 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 14 01:21:17.301184 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 14 01:21:17.301212 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 14 01:21:17.301245 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 14 01:21:17.301271 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 14 01:21:17.301301 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 14 01:21:17.301323 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 14 01:21:17.301366 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 14 01:21:17.301390 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 14 01:21:17.301424 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 01:21:17.301448 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 01:21:17.301477 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 14 01:21:17.301518 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 14 01:21:17.301541 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 14 01:21:17.301563 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 01:21:17.301584 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 14 01:21:17.301625 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 01:21:17.301655 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 14 01:21:17.301683 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 14 01:21:17.301710 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 14 01:21:17.301738 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 14 01:21:17.301760 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 01:21:17.301795 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 01:21:17.301818 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 01:21:17.301846 systemd[1]: Reached target swap.target - Swaps.
Mar 14 01:21:17.301881 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 14 01:21:17.301917 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 14 01:21:17.301952 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 01:21:17.301990 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 01:21:17.302013 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 01:21:17.302035 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 14 01:21:17.302063 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 14 01:21:17.302085 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 14 01:21:17.302112 systemd[1]: Mounting media.mount - External Media Directory...
Mar 14 01:21:17.302140 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 01:21:17.302168 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 14 01:21:17.302190 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 14 01:21:17.302222 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 14 01:21:17.302246 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 14 01:21:17.302267 systemd[1]: Reached target machines.target - Containers.
Mar 14 01:21:17.302295 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 14 01:21:17.302317 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 01:21:17.302338 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 01:21:17.302371 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 14 01:21:17.302393 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 01:21:17.302440 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 01:21:17.302464 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 01:21:17.302486 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 14 01:21:17.302506 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 01:21:17.302539 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 14 01:21:17.302563 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 14 01:21:17.302584 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 14 01:21:17.302606 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 14 01:21:17.302636 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 14 01:21:17.302674 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 01:21:17.302697 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 01:21:17.302724 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 14 01:21:17.302747 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 14 01:21:17.302768 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 01:21:17.302794 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 14 01:21:17.302816 systemd[1]: Stopped verity-setup.service.
Mar 14 01:21:17.302838 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 01:21:17.302865 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 14 01:21:17.302924 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 14 01:21:17.302956 systemd[1]: Mounted media.mount - External Media Directory.
Mar 14 01:21:17.302985 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 14 01:21:17.303006 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 14 01:21:17.303040 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 14 01:21:17.303070 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 14 01:21:17.303098 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 01:21:17.303121 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 14 01:21:17.303142 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 14 01:21:17.303164 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 01:21:17.303219 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 01:21:17.303248 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 01:21:17.303271 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 01:21:17.303310 kernel: fuse: init (API version 7.39)
Mar 14 01:21:17.303341 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 01:21:17.303399 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 14 01:21:17.303435 kernel: loop: module loaded
Mar 14 01:21:17.303457 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 14 01:21:17.303492 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 14 01:21:17.303544 systemd-journald[1148]: Collecting audit messages is disabled.
Mar 14 01:21:17.303609 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 14 01:21:17.303633 systemd-journald[1148]: Journal started
Mar 14 01:21:17.303671 systemd-journald[1148]: Runtime Journal (/run/log/journal/29987c58ad844c5b940e5899a24877d2) is 4.7M, max 38.0M, 33.2M free.
Mar 14 01:21:16.867645 systemd[1]: Queued start job for default target multi-user.target.
Mar 14 01:21:16.887672 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 14 01:21:16.888391 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 14 01:21:17.310704 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 01:21:17.310754 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 01:21:17.312510 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 01:21:17.338579 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 14 01:21:17.351726 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 14 01:21:17.360562 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 14 01:21:17.361408 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 14 01:21:17.361491 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 01:21:17.365884 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 14 01:21:17.375639 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 14 01:21:17.381589 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 14 01:21:17.382568 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 01:21:17.389629 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 14 01:21:17.394060 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 14 01:21:17.395561 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 01:21:17.407627 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 14 01:21:17.408601 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 01:21:17.415437 kernel: ACPI: bus type drm_connector registered
Mar 14 01:21:17.417579 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 01:21:17.421936 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 14 01:21:17.431718 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 14 01:21:17.437019 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 01:21:17.438576 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 01:21:17.441728 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 14 01:21:17.442702 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 14 01:21:17.444204 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 14 01:21:17.494764 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 14 01:21:17.496818 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 14 01:21:17.504688 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 14 01:21:17.507566 systemd-journald[1148]: Time spent on flushing to /var/log/journal/29987c58ad844c5b940e5899a24877d2 is 34.795ms for 1146 entries.
Mar 14 01:21:17.507566 systemd-journald[1148]: System Journal (/var/log/journal/29987c58ad844c5b940e5899a24877d2) is 8.0M, max 584.8M, 576.8M free.
Mar 14 01:21:17.572992 systemd-journald[1148]: Received client request to flush runtime journal.
Mar 14 01:21:17.573065 kernel: loop0: detected capacity change from 0 to 228704
Mar 14 01:21:17.579926 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 14 01:21:17.588984 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 14 01:21:17.592531 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 14 01:21:17.604704 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Mar 14 01:21:17.604732 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Mar 14 01:21:17.610513 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 01:21:17.623606 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 14 01:21:17.649850 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 01:21:17.659700 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 14 01:21:17.667442 kernel: loop1: detected capacity change from 0 to 142488
Mar 14 01:21:17.698882 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 01:21:17.708666 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 14 01:21:17.750778 udevadm[1204]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 14 01:21:17.772593 kernel: loop2: detected capacity change from 0 to 140768
Mar 14 01:21:17.824178 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 14 01:21:17.830442 kernel: loop3: detected capacity change from 0 to 8
Mar 14 01:21:17.834658 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 01:21:17.856449 kernel: loop4: detected capacity change from 0 to 228704
Mar 14 01:21:17.879448 kernel: loop5: detected capacity change from 0 to 142488
Mar 14 01:21:17.879715 systemd-tmpfiles[1210]: ACLs are not supported, ignoring.
Mar 14 01:21:17.880132 systemd-tmpfiles[1210]: ACLs are not supported, ignoring.
Mar 14 01:21:17.894663 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 01:21:17.901487 kernel: loop6: detected capacity change from 0 to 140768
Mar 14 01:21:17.927460 kernel: loop7: detected capacity change from 0 to 8
Mar 14 01:21:17.931577 (sd-merge)[1213]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Mar 14 01:21:17.932789 (sd-merge)[1213]: Merged extensions into '/usr'.
Mar 14 01:21:17.945669 systemd[1]: Reloading requested from client PID 1184 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 14 01:21:17.945863 systemd[1]: Reloading...
Mar 14 01:21:18.109460 zram_generator::config[1240]: No configuration found.
Mar 14 01:21:18.214048 ldconfig[1179]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 14 01:21:18.373191 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 01:21:18.443785 systemd[1]: Reloading finished in 495 ms.
Mar 14 01:21:18.499624 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 14 01:21:18.503786 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 14 01:21:18.505185 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 14 01:21:18.518696 systemd[1]: Starting ensure-sysext.service...
Mar 14 01:21:18.523634 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 01:21:18.534702 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 01:21:18.541634 systemd[1]: Reloading requested from client PID 1297 ('systemctl') (unit ensure-sysext.service)...
Mar 14 01:21:18.541818 systemd[1]: Reloading...
Mar 14 01:21:18.568594 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 14 01:21:18.569218 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 14 01:21:18.570770 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 14 01:21:18.571180 systemd-tmpfiles[1298]: ACLs are not supported, ignoring.
Mar 14 01:21:18.571301 systemd-tmpfiles[1298]: ACLs are not supported, ignoring.
Mar 14 01:21:18.576538 systemd-tmpfiles[1298]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 01:21:18.576556 systemd-tmpfiles[1298]: Skipping /boot
Mar 14 01:21:18.599580 systemd-tmpfiles[1298]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 01:21:18.599600 systemd-tmpfiles[1298]: Skipping /boot
Mar 14 01:21:18.617082 systemd-udevd[1299]: Using default interface naming scheme 'v255'.
Mar 14 01:21:18.696743 zram_generator::config[1325]: No configuration found.
Mar 14 01:21:18.872462 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1337)
Mar 14 01:21:18.964462 kernel: mousedev: PS/2 mouse device common for all mice
Mar 14 01:21:18.972434 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 14 01:21:18.981439 kernel: ACPI: button: Power Button [PWRF]
Mar 14 01:21:19.000111 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 01:21:19.069512 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 14 01:21:19.079532 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Mar 14 01:21:19.079612 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 14 01:21:19.080644 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 14 01:21:19.135080 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 14 01:21:19.138190 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 14 01:21:19.138819 systemd[1]: Reloading finished in 596 ms.
Mar 14 01:21:19.167629 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 01:21:19.171134 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 01:21:19.254105 systemd[1]: Finished ensure-sysext.service.
Mar 14 01:21:19.260159 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 01:21:19.277675 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 14 01:21:19.282678 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 14 01:21:19.283673 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 01:21:19.289653 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 01:21:19.299613 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 01:21:19.304632 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 01:21:19.308389 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 01:21:19.310328 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 01:21:19.312638 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 14 01:21:19.316631 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 14 01:21:19.323563 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 01:21:19.338638 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 01:21:19.355653 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 14 01:21:19.365670 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 14 01:21:19.371661 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 01:21:19.373514 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 01:21:19.375682 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 01:21:19.375954 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 01:21:19.378377 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 01:21:19.379706 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 01:21:19.403649 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 14 01:21:19.406088 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 01:21:19.407500 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 01:21:19.409470 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 14 01:21:19.412996 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 01:21:19.413080 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 14 01:21:19.422735 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 01:21:19.423326 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 01:21:19.428194 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 01:21:19.442616 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 14 01:21:19.485508 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 14 01:21:19.487748 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 14 01:21:19.524754 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 14 01:21:19.538699 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 14 01:21:19.581905 augenrules[1455]: No rules
Mar 14 01:21:19.586617 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 14 01:21:19.596370 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 14 01:21:19.667984 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 14 01:21:19.678681 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 14 01:21:19.701476 systemd-networkd[1419]: lo: Link UP
Mar 14 01:21:19.701490 systemd-networkd[1419]: lo: Gained carrier
Mar 14 01:21:19.704950 systemd-networkd[1419]: Enumeration completed
Mar 14 01:21:19.705176 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 01:21:19.705875 systemd-networkd[1419]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 01:21:19.705888 systemd-networkd[1419]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 01:21:19.707774 systemd-networkd[1419]: eth0: Link UP
Mar 14 01:21:19.707788 systemd-networkd[1419]: eth0: Gained carrier
Mar 14 01:21:19.707814 systemd-networkd[1419]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 01:21:19.728212 systemd-resolved[1421]: Positive Trust Anchors:
Mar 14 01:21:19.728244 systemd-resolved[1421]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 01:21:19.728317 systemd-resolved[1421]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 01:21:19.729526 systemd-networkd[1419]: eth0: DHCPv4 address 10.230.8.14/30, gateway 10.230.8.13 acquired from 10.230.8.13
Mar 14 01:21:19.738352 systemd-resolved[1421]: Using system hostname 'srv-ouubu.gb1.brightbox.com'.
Mar 14 01:21:19.793595 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 14 01:21:19.798837 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 01:21:19.801399 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 01:21:19.803339 systemd[1]: Reached target network.target - Network.
Mar 14 01:21:19.804619 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 01:21:19.805550 systemd[1]: Reached target time-set.target - System Time Set.
Mar 14 01:21:19.813692 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 14 01:21:19.816655 lvm[1466]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 14 01:21:19.848785 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 14 01:21:19.850760 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 01:21:19.851754 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 01:21:19.852906 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 14 01:21:19.854071 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 14 01:21:19.855527 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 14 01:21:19.856607 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 14 01:21:19.857569 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 14 01:21:19.858556 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 14 01:21:19.858767 systemd[1]: Reached target paths.target - Path Units.
Mar 14 01:21:19.859675 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 01:21:19.863574 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 14 01:21:19.873083 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 14 01:21:19.880295 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 14 01:21:19.883076 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 14 01:21:19.884732 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 14 01:21:19.885629 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 01:21:19.886325 systemd[1]: Reached target basic.target - Basic System.
Mar 14 01:21:19.887044 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 14 01:21:19.887098 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 14 01:21:19.895629 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 14 01:21:19.903680 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 14 01:21:19.905441 lvm[1473]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 14 01:21:19.908112 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 14 01:21:19.914397 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 14 01:21:19.925671 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 14 01:21:19.927063 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 14 01:21:19.931382 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 14 01:21:19.940970 jq[1477]: false
Mar 14 01:21:19.941963 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 14 01:21:19.949745 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 14 01:21:19.955243 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 14 01:21:19.964654 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 14 01:21:19.968559 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 14 01:21:19.969224 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 14 01:21:19.978657 systemd[1]: Starting update-engine.service - Update Engine...
Mar 14 01:21:19.985610 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 14 01:21:19.989493 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 14 01:21:19.999251 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 14 01:21:20.000830 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 14 01:21:20.492229 systemd-timesyncd[1424]: Contacted time server 212.71.233.40:123 (0.flatcar.pool.ntp.org).
Mar 14 01:21:20.492354 systemd-timesyncd[1424]: Initial clock synchronization to Sat 2026-03-14 01:21:20.491593 UTC.
Mar 14 01:21:20.492776 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 14 01:21:20.493635 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 14 01:21:20.498365 systemd-resolved[1421]: Clock change detected. Flushing caches.
Mar 14 01:21:20.511611 update_engine[1485]: I20260314 01:21:20.510425 1485 main.cc:92] Flatcar Update Engine starting
Mar 14 01:21:20.554591 systemd[1]: motdgen.service: Deactivated successfully.
Mar 14 01:21:20.555509 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 14 01:21:20.558628 jq[1488]: true
Mar 14 01:21:20.569317 (ntainerd)[1502]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 14 01:21:20.575384 jq[1508]: true
Mar 14 01:21:20.588643 tar[1491]: linux-amd64/LICENSE
Mar 14 01:21:20.592931 tar[1491]: linux-amd64/helm
Mar 14 01:21:20.597115 dbus-daemon[1476]: [system] SELinux support is enabled
Mar 14 01:21:20.599122 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 14 01:21:20.605241 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 14 01:21:20.605280 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 14 01:21:20.607136 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 14 01:21:20.607163 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 14 01:21:20.618472 extend-filesystems[1478]: Found loop4
Mar 14 01:21:20.618472 extend-filesystems[1478]: Found loop5
Mar 14 01:21:20.618472 extend-filesystems[1478]: Found loop6
Mar 14 01:21:20.618472 extend-filesystems[1478]: Found loop7
Mar 14 01:21:20.618472 extend-filesystems[1478]: Found vda
Mar 14 01:21:20.618472 extend-filesystems[1478]: Found vda1
Mar 14 01:21:20.618472 extend-filesystems[1478]: Found vda2
Mar 14 01:21:20.618472 extend-filesystems[1478]: Found vda3
Mar 14 01:21:20.618472 extend-filesystems[1478]: Found usr
Mar 14 01:21:20.618472 extend-filesystems[1478]: Found vda4
Mar 14 01:21:20.618472 extend-filesystems[1478]: Found vda6
Mar 14 01:21:20.618472 extend-filesystems[1478]: Found vda7
Mar 14 01:21:20.618472 extend-filesystems[1478]: Found vda9
Mar 14 01:21:20.618472 extend-filesystems[1478]: Checking size of /dev/vda9
Mar 14 01:21:20.727670 extend-filesystems[1478]: Resized partition /dev/vda9
Mar 14 01:21:20.735120 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Mar 14 01:21:20.634634 dbus-daemon[1476]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1419 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 14 01:21:20.654263 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Mar 14 01:21:20.735602 update_engine[1485]: I20260314 01:21:20.644495 1485 update_check_scheduler.cc:74] Next update check in 9m39s
Mar 14 01:21:20.735742 extend-filesystems[1529]: resize2fs 1.47.1 (20-May-2024)
Mar 14 01:21:20.674886 systemd[1]: Started update-engine.service - Update Engine.
Mar 14 01:21:20.689774 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 14 01:21:20.697294 systemd-logind[1484]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 14 01:21:20.697331 systemd-logind[1484]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 14 01:21:20.701893 systemd-logind[1484]: New seat seat0.
Mar 14 01:21:20.713091 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 14 01:21:20.802268 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1343)
Mar 14 01:21:20.915412 bash[1533]: Updated "/home/core/.ssh/authorized_keys"
Mar 14 01:21:20.922078 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 14 01:21:20.939927 systemd[1]: Starting sshkeys.service...
Mar 14 01:21:20.967384 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 14 01:21:20.975982 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 14 01:21:21.098685 locksmithd[1523]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 14 01:21:21.116579 containerd[1502]: time="2026-03-14T01:21:21.115563243Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 14 01:21:21.138188 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Mar 14 01:21:21.165194 extend-filesystems[1529]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 14 01:21:21.165194 extend-filesystems[1529]: old_desc_blocks = 1, new_desc_blocks = 8
Mar 14 01:21:21.165194 extend-filesystems[1529]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Mar 14 01:21:21.172889 extend-filesystems[1478]: Resized filesystem in /dev/vda9
Mar 14 01:21:21.174129 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 14 01:21:21.174486 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 14 01:21:21.188887 containerd[1502]: time="2026-03-14T01:21:21.188769912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 14 01:21:21.196851 containerd[1502]: time="2026-03-14T01:21:21.196788090Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 14 01:21:21.196851 containerd[1502]: time="2026-03-14T01:21:21.196846426Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 14 01:21:21.196998 containerd[1502]: time="2026-03-14T01:21:21.196886476Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 14 01:21:21.198569 containerd[1502]: time="2026-03-14T01:21:21.197214025Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 14 01:21:21.198569 containerd[1502]: time="2026-03-14T01:21:21.197262457Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 14 01:21:21.198569 containerd[1502]: time="2026-03-14T01:21:21.197428420Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 01:21:21.198569 containerd[1502]: time="2026-03-14T01:21:21.197452554Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 14 01:21:21.199784 containerd[1502]: time="2026-03-14T01:21:21.199744410Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 01:21:21.199784 containerd[1502]: time="2026-03-14T01:21:21.199781847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 14 01:21:21.199898 containerd[1502]: time="2026-03-14T01:21:21.199813185Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 01:21:21.199898 containerd[1502]: time="2026-03-14T01:21:21.199832134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 14 01:21:21.200002 containerd[1502]: time="2026-03-14T01:21:21.199975358Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 14 01:21:21.203202 containerd[1502]: time="2026-03-14T01:21:21.203166167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 14 01:21:21.203689 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 14 01:21:21.203925 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Mar 14 01:21:21.205089 containerd[1502]: time="2026-03-14T01:21:21.205053476Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 01:21:21.205150 containerd[1502]: time="2026-03-14T01:21:21.205089031Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 14 01:21:21.205315 containerd[1502]: time="2026-03-14T01:21:21.205272120Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 14 01:21:21.205429 containerd[1502]: time="2026-03-14T01:21:21.205396934Z" level=info msg="metadata content store policy set" policy=shared
Mar 14 01:21:21.206214 dbus-daemon[1476]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1519 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 14 01:21:21.221426 systemd[1]: Starting polkit.service - Authorization Manager...
Mar 14 01:21:21.222465 containerd[1502]: time="2026-03-14T01:21:21.222401010Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 14 01:21:21.222917 containerd[1502]: time="2026-03-14T01:21:21.222883569Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 14 01:21:21.226686 containerd[1502]: time="2026-03-14T01:21:21.226592162Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 14 01:21:21.226686 containerd[1502]: time="2026-03-14T01:21:21.226657248Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 14 01:21:21.226795 containerd[1502]: time="2026-03-14T01:21:21.226695038Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 14 01:21:21.227433 containerd[1502]: time="2026-03-14T01:21:21.227049961Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 14 01:21:21.229739 containerd[1502]: time="2026-03-14T01:21:21.229704727Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 14 01:21:21.229969 containerd[1502]: time="2026-03-14T01:21:21.229937909Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 14 01:21:21.230036 containerd[1502]: time="2026-03-14T01:21:21.229972103Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 14 01:21:21.230036 containerd[1502]: time="2026-03-14T01:21:21.230013158Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 14 01:21:21.230151 containerd[1502]: time="2026-03-14T01:21:21.230037723Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 14 01:21:21.230151 containerd[1502]: time="2026-03-14T01:21:21.230071949Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 14 01:21:21.230151 containerd[1502]: time="2026-03-14T01:21:21.230123890Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 14 01:21:21.230265 containerd[1502]: time="2026-03-14T01:21:21.230154988Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 14 01:21:21.230265 containerd[1502]: time="2026-03-14T01:21:21.230184805Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 14 01:21:21.230265 containerd[1502]: time="2026-03-14T01:21:21.230211863Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 14 01:21:21.230265 containerd[1502]: time="2026-03-14T01:21:21.230238073Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 14 01:21:21.230472 containerd[1502]: time="2026-03-14T01:21:21.230262547Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 14 01:21:21.230472 containerd[1502]: time="2026-03-14T01:21:21.230337153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 14 01:21:21.230472 containerd[1502]: time="2026-03-14T01:21:21.230363531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 14 01:21:21.230472 containerd[1502]: time="2026-03-14T01:21:21.230405322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 14 01:21:21.230472 containerd[1502]: time="2026-03-14T01:21:21.230430158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 14 01:21:21.230472 containerd[1502]: time="2026-03-14T01:21:21.230457821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 14 01:21:21.230711 containerd[1502]: time="2026-03-14T01:21:21.230488752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 14 01:21:21.230711 containerd[1502]: time="2026-03-14T01:21:21.230517324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 14 01:21:21.230711 containerd[1502]: time="2026-03-14T01:21:21.230544005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 14 01:21:21.230711 containerd[1502]: time="2026-03-14T01:21:21.230604647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 14 01:21:21.230711 containerd[1502]: time="2026-03-14T01:21:21.230637138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 14 01:21:21.230711 containerd[1502]: time="2026-03-14T01:21:21.230681372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 14 01:21:21.230711 containerd[1502]: time="2026-03-14T01:21:21.230709207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 14 01:21:21.231022 containerd[1502]: time="2026-03-14T01:21:21.230759447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 14 01:21:21.232974 containerd[1502]: time="2026-03-14T01:21:21.231592895Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 14 01:21:21.232974 containerd[1502]: time="2026-03-14T01:21:21.231702601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 14 01:21:21.232974 containerd[1502]: time="2026-03-14T01:21:21.231730700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 14 01:21:21.232974 containerd[1502]: time="2026-03-14T01:21:21.231780719Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 14 01:21:21.232974 containerd[1502]: time="2026-03-14T01:21:21.232605815Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 14 01:21:21.232974 containerd[1502]: time="2026-03-14T01:21:21.232743568Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 14 01:21:21.232974 containerd[1502]: time="2026-03-14T01:21:21.232767345Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 14 01:21:21.232974 containerd[1502]: time="2026-03-14T01:21:21.232788533Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 14 01:21:21.232974 containerd[1502]: time="2026-03-14T01:21:21.232806351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 14 01:21:21.232974 containerd[1502]: time="2026-03-14T01:21:21.232848542Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 14 01:21:21.232974 containerd[1502]: time="2026-03-14T01:21:21.232883355Z" level=info msg="NRI interface is disabled by configuration."
Mar 14 01:21:21.232974 containerd[1502]: time="2026-03-14T01:21:21.232906938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 14 01:21:21.236297 containerd[1502]: time="2026-03-14T01:21:21.235112206Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 14 01:21:21.236297 containerd[1502]: time="2026-03-14T01:21:21.235239856Z" level=info msg="Connect containerd service"
Mar 14 01:21:21.236297 containerd[1502]: time="2026-03-14T01:21:21.235301604Z" level=info msg="using legacy CRI server"
Mar 14 01:21:21.236297 containerd[1502]: time="2026-03-14T01:21:21.235319296Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 14 01:21:21.236297 containerd[1502]: time="2026-03-14T01:21:21.235677393Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 14 01:21:21.237770 containerd[1502]: time="2026-03-14T01:21:21.237713914Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 14 01:21:21.238783 containerd[1502]: time="2026-03-14T01:21:21.238699950Z" level=info msg="Start subscribing containerd event"
Mar 14 01:21:21.238841 containerd[1502]: time="2026-03-14T01:21:21.238805965Z" level=info msg="Start recovering state"
Mar 14 01:21:21.239373 containerd[1502]: time="2026-03-14T01:21:21.238935990Z" level=info msg="Start event monitor"
Mar 14 01:21:21.239373 containerd[1502]: time="2026-03-14T01:21:21.238977238Z" level=info msg="Start snapshots syncer"
Mar 14 01:21:21.239373 containerd[1502]: time="2026-03-14T01:21:21.239000323Z" level=info msg="Start cni network conf syncer for default"
Mar 14 01:21:21.239373 containerd[1502]: time="2026-03-14T01:21:21.239015640Z" level=info msg="Start streaming server"
Mar 14 01:21:21.245350 containerd[1502]: time="2026-03-14T01:21:21.243265072Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 14 01:21:21.245350 containerd[1502]: time="2026-03-14T01:21:21.243405016Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 14 01:21:21.249660 containerd[1502]: time="2026-03-14T01:21:21.249628823Z" level=info msg="containerd successfully booted in 0.136131s"
Mar 14 01:21:21.249751 systemd[1]: Started containerd.service - containerd container runtime.
Mar 14 01:21:21.278988 polkitd[1553]: Started polkitd version 121
Mar 14 01:21:21.294189 polkitd[1553]: Loading rules from directory /etc/polkit-1/rules.d
Mar 14 01:21:21.294284 polkitd[1553]: Loading rules from directory /usr/share/polkit-1/rules.d
Mar 14 01:21:21.298316 polkitd[1553]: Finished loading, compiling and executing 2 rules
Mar 14 01:21:21.299198 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Mar 14 01:21:21.299449 systemd[1]: Started polkit.service - Authorization Manager.
Mar 14 01:21:21.302974 polkitd[1553]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Mar 14 01:21:21.327211 systemd-hostnamed[1519]: Hostname set to (static)
Mar 14 01:21:21.475079 sshd_keygen[1512]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 14 01:21:21.520767 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 14 01:21:21.533705 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 14 01:21:21.543488 systemd[1]: issuegen.service: Deactivated successfully.
Mar 14 01:21:21.543791 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 14 01:21:21.555558 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 14 01:21:21.571039 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 14 01:21:21.584190 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 14 01:21:21.594456 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 14 01:21:21.595790 systemd[1]: Reached target getty.target - Login Prompts.
Mar 14 01:21:21.715305 tar[1491]: linux-amd64/README.md
Mar 14 01:21:21.732017 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 14 01:21:22.128197 systemd-networkd[1419]: eth0: Gained IPv6LL
Mar 14 01:21:22.133378 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 14 01:21:22.137353 systemd[1]: Reached target network-online.target - Network is Online.
Mar 14 01:21:22.144915 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 01:21:22.150835 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 14 01:21:22.199705 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 14 01:21:22.216946 systemd-networkd[1419]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8203:24:19ff:fee6:80e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8203:24:19ff:fee6:80e/64 assigned by NDisc.
Mar 14 01:21:22.216958 systemd-networkd[1419]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Mar 14 01:21:23.236982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 01:21:23.253394 (kubelet)[1599]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 01:21:23.929447 kubelet[1599]: E0314 01:21:23.929259 1599 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 01:21:23.932852 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 01:21:23.933140 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 01:21:23.934002 systemd[1]: kubelet.service: Consumed 1.153s CPU time.
Mar 14 01:21:24.576896 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 14 01:21:24.584119 systemd[1]: Started sshd@0-10.230.8.14:22-20.161.92.111:57428.service - OpenSSH per-connection server daemon (20.161.92.111:57428).
Mar 14 01:21:25.161040 sshd[1610]: Accepted publickey for core from 20.161.92.111 port 57428 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 01:21:25.164458 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 01:21:25.182638 systemd-logind[1484]: New session 1 of user core.
Mar 14 01:21:25.185481 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 14 01:21:25.198127 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 14 01:21:25.223656 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 14 01:21:25.233003 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 14 01:21:25.250511 (systemd)[1614]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 14 01:21:25.399697 systemd[1614]: Queued start job for default target default.target.
Mar 14 01:21:25.410839 systemd[1614]: Created slice app.slice - User Application Slice.
Mar 14 01:21:25.410996 systemd[1614]: Reached target paths.target - Paths.
Mar 14 01:21:25.411026 systemd[1614]: Reached target timers.target - Timers.
Mar 14 01:21:25.413383 systemd[1614]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 14 01:21:25.430606 systemd[1614]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 14 01:21:25.431471 systemd[1614]: Reached target sockets.target - Sockets.
Mar 14 01:21:25.431498 systemd[1614]: Reached target basic.target - Basic System.
Mar 14 01:21:25.431843 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 14 01:21:25.431994 systemd[1614]: Reached target default.target - Main User Target.
Mar 14 01:21:25.432176 systemd[1614]: Startup finished in 171ms.
Mar 14 01:21:25.452355 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 14 01:21:25.863974 systemd[1]: Started sshd@1-10.230.8.14:22-20.161.92.111:57440.service - OpenSSH per-connection server daemon (20.161.92.111:57440).
Mar 14 01:21:26.422544 sshd[1625]: Accepted publickey for core from 20.161.92.111 port 57440 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 01:21:26.425212 sshd[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 01:21:26.432458 systemd-logind[1484]: New session 2 of user core.
Mar 14 01:21:26.444897 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 14 01:21:26.653408 login[1576]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Mar 14 01:21:26.656795 login[1577]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Mar 14 01:21:26.664786 systemd-logind[1484]: New session 4 of user core.
Mar 14 01:21:26.675855 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 14 01:21:26.682281 systemd-logind[1484]: New session 3 of user core.
Mar 14 01:21:26.686061 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 14 01:21:26.827685 sshd[1625]: pam_unix(sshd:session): session closed for user core
Mar 14 01:21:26.834243 systemd[1]: sshd@1-10.230.8.14:22-20.161.92.111:57440.service: Deactivated successfully.
Mar 14 01:21:26.837747 systemd[1]: session-2.scope: Deactivated successfully.
Mar 14 01:21:26.839004 systemd-logind[1484]: Session 2 logged out. Waiting for processes to exit.
Mar 14 01:21:26.840736 systemd-logind[1484]: Removed session 2.
Mar 14 01:21:26.947056 systemd[1]: Started sshd@2-10.230.8.14:22-20.161.92.111:57450.service - OpenSSH per-connection server daemon (20.161.92.111:57450).
Mar 14 01:21:27.523403 sshd[1658]: Accepted publickey for core from 20.161.92.111 port 57450 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 01:21:27.524511 sshd[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 01:21:27.532368 systemd-logind[1484]: New session 5 of user core.
Mar 14 01:21:27.539815 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 14 01:21:27.672573 coreos-metadata[1475]: Mar 14 01:21:27.670 WARN failed to locate config-drive, using the metadata service API instead
Mar 14 01:21:27.697645 coreos-metadata[1475]: Mar 14 01:21:27.697 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Mar 14 01:21:27.704579 coreos-metadata[1475]: Mar 14 01:21:27.704 INFO Fetch failed with 404: resource not found
Mar 14 01:21:27.704579 coreos-metadata[1475]: Mar 14 01:21:27.704 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Mar 14 01:21:27.705514 coreos-metadata[1475]: Mar 14 01:21:27.705 INFO Fetch successful
Mar 14 01:21:27.705774 coreos-metadata[1475]: Mar 14 01:21:27.705 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Mar 14 01:21:27.715954 coreos-metadata[1475]: Mar 14 01:21:27.715 INFO Fetch successful
Mar 14 01:21:27.716108 coreos-metadata[1475]: Mar 14 01:21:27.716 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Mar 14 01:21:27.729890 coreos-metadata[1475]: Mar 14 01:21:27.729 INFO Fetch successful
Mar 14 01:21:27.730065 coreos-metadata[1475]: Mar 14 01:21:27.730 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Mar 14 01:21:27.745661 coreos-metadata[1475]: Mar 14 01:21:27.745 INFO Fetch successful
Mar 14 01:21:27.745868 coreos-metadata[1475]: Mar 14 01:21:27.745 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Mar 14 01:21:27.761286 coreos-metadata[1475]: Mar 14 01:21:27.761 INFO Fetch successful
Mar 14 01:21:27.803780 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 14 01:21:27.805535 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 14 01:21:27.932943 sshd[1658]: pam_unix(sshd:session): session closed for user core
Mar 14 01:21:27.938374 systemd[1]: sshd@2-10.230.8.14:22-20.161.92.111:57450.service: Deactivated successfully.
Mar 14 01:21:27.941212 systemd[1]: session-5.scope: Deactivated successfully.
Mar 14 01:21:27.942387 systemd-logind[1484]: Session 5 logged out. Waiting for processes to exit.
Mar 14 01:21:27.943845 systemd-logind[1484]: Removed session 5.
Mar 14 01:21:28.134149 coreos-metadata[1545]: Mar 14 01:21:28.133 WARN failed to locate config-drive, using the metadata service API instead
Mar 14 01:21:28.157243 coreos-metadata[1545]: Mar 14 01:21:28.157 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Mar 14 01:21:28.184087 coreos-metadata[1545]: Mar 14 01:21:28.184 INFO Fetch successful
Mar 14 01:21:28.184087 coreos-metadata[1545]: Mar 14 01:21:28.184 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 14 01:21:28.214410 coreos-metadata[1545]: Mar 14 01:21:28.214 INFO Fetch successful
Mar 14 01:21:28.216638 unknown[1545]: wrote ssh authorized keys file for user: core
Mar 14 01:21:28.255340 update-ssh-keys[1674]: Updated "/home/core/.ssh/authorized_keys"
Mar 14 01:21:28.256822 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 14 01:21:28.258914 systemd[1]: Finished sshkeys.service.
Mar 14 01:21:28.263107 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 14 01:21:28.263448 systemd[1]: Startup finished in 1.483s (kernel) + 17.273s (initrd) + 11.798s (userspace) = 30.555s.
Mar 14 01:21:34.172905 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 14 01:21:34.180905 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 01:21:34.353661 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 01:21:34.366104 (kubelet)[1685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 01:21:34.471095 kubelet[1685]: E0314 01:21:34.470883 1685 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 01:21:34.475425 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 01:21:34.475723 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 01:21:38.033967 systemd[1]: Started sshd@3-10.230.8.14:22-20.161.92.111:58542.service - OpenSSH per-connection server daemon (20.161.92.111:58542).
Mar 14 01:21:38.599591 sshd[1693]: Accepted publickey for core from 20.161.92.111 port 58542 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 01:21:38.601158 sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 01:21:38.607438 systemd-logind[1484]: New session 6 of user core.
Mar 14 01:21:38.617894 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 14 01:21:38.995705 sshd[1693]: pam_unix(sshd:session): session closed for user core
Mar 14 01:21:38.999723 systemd-logind[1484]: Session 6 logged out. Waiting for processes to exit.
Mar 14 01:21:39.000207 systemd[1]: sshd@3-10.230.8.14:22-20.161.92.111:58542.service: Deactivated successfully.
Mar 14 01:21:39.002465 systemd[1]: session-6.scope: Deactivated successfully.
Mar 14 01:21:39.004633 systemd-logind[1484]: Removed session 6.
Mar 14 01:21:39.110212 systemd[1]: Started sshd@4-10.230.8.14:22-20.161.92.111:58558.service - OpenSSH per-connection server daemon (20.161.92.111:58558).
Mar 14 01:21:39.679616 sshd[1700]: Accepted publickey for core from 20.161.92.111 port 58558 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 01:21:39.681420 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 01:21:39.689234 systemd-logind[1484]: New session 7 of user core.
Mar 14 01:21:39.701788 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 14 01:21:40.080816 sshd[1700]: pam_unix(sshd:session): session closed for user core
Mar 14 01:21:40.086656 systemd[1]: sshd@4-10.230.8.14:22-20.161.92.111:58558.service: Deactivated successfully.
Mar 14 01:21:40.088924 systemd[1]: session-7.scope: Deactivated successfully.
Mar 14 01:21:40.090020 systemd-logind[1484]: Session 7 logged out. Waiting for processes to exit.
Mar 14 01:21:40.091469 systemd-logind[1484]: Removed session 7.
Mar 14 01:21:40.178471 systemd[1]: Started sshd@5-10.230.8.14:22-20.161.92.111:51498.service - OpenSSH per-connection server daemon (20.161.92.111:51498).
Mar 14 01:21:40.747065 sshd[1707]: Accepted publickey for core from 20.161.92.111 port 51498 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 01:21:40.749210 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 01:21:40.760022 systemd-logind[1484]: New session 8 of user core.
Mar 14 01:21:40.771994 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 14 01:21:41.149483 sshd[1707]: pam_unix(sshd:session): session closed for user core
Mar 14 01:21:41.155493 systemd[1]: sshd@5-10.230.8.14:22-20.161.92.111:51498.service: Deactivated successfully.
Mar 14 01:21:41.157805 systemd[1]: session-8.scope: Deactivated successfully.
Mar 14 01:21:41.158719 systemd-logind[1484]: Session 8 logged out. Waiting for processes to exit.
Mar 14 01:21:41.160349 systemd-logind[1484]: Removed session 8.
Mar 14 01:21:41.245458 systemd[1]: Started sshd@6-10.230.8.14:22-20.161.92.111:51506.service - OpenSSH per-connection server daemon (20.161.92.111:51506).
Mar 14 01:21:41.806600 sshd[1714]: Accepted publickey for core from 20.161.92.111 port 51506 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 01:21:41.808069 sshd[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 01:21:41.815783 systemd-logind[1484]: New session 9 of user core.
Mar 14 01:21:41.826055 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 14 01:21:42.127994 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 14 01:21:42.128472 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 01:21:42.151537 sudo[1717]: pam_unix(sudo:session): session closed for user root
Mar 14 01:21:42.239859 sshd[1714]: pam_unix(sshd:session): session closed for user core
Mar 14 01:21:42.244716 systemd-logind[1484]: Session 9 logged out. Waiting for processes to exit.
Mar 14 01:21:42.245433 systemd[1]: sshd@6-10.230.8.14:22-20.161.92.111:51506.service: Deactivated successfully.
Mar 14 01:21:42.247877 systemd[1]: session-9.scope: Deactivated successfully.
Mar 14 01:21:42.250281 systemd-logind[1484]: Removed session 9.
Mar 14 01:21:42.337782 systemd[1]: Started sshd@7-10.230.8.14:22-20.161.92.111:51510.service - OpenSSH per-connection server daemon (20.161.92.111:51510).
Mar 14 01:21:42.905003 sshd[1722]: Accepted publickey for core from 20.161.92.111 port 51510 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 01:21:42.907184 sshd[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 01:21:42.913734 systemd-logind[1484]: New session 10 of user core.
Mar 14 01:21:42.930159 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 14 01:21:43.215516 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 14 01:21:43.216680 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 01:21:43.222458 sudo[1726]: pam_unix(sudo:session): session closed for user root
Mar 14 01:21:43.230800 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 14 01:21:43.231251 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 01:21:43.249947 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 14 01:21:43.254584 auditctl[1729]: No rules
Mar 14 01:21:43.255375 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 14 01:21:43.255749 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 14 01:21:43.265027 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 14 01:21:43.300031 augenrules[1747]: No rules
Mar 14 01:21:43.302208 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 14 01:21:43.304364 sudo[1725]: pam_unix(sudo:session): session closed for user root
Mar 14 01:21:43.392863 sshd[1722]: pam_unix(sshd:session): session closed for user core
Mar 14 01:21:43.397123 systemd-logind[1484]: Session 10 logged out. Waiting for processes to exit.
Mar 14 01:21:43.397798 systemd[1]: sshd@7-10.230.8.14:22-20.161.92.111:51510.service: Deactivated successfully.
Mar 14 01:21:43.400055 systemd[1]: session-10.scope: Deactivated successfully.
Mar 14 01:21:43.402259 systemd-logind[1484]: Removed session 10.
Mar 14 01:21:43.502999 systemd[1]: Started sshd@8-10.230.8.14:22-20.161.92.111:51520.service - OpenSSH per-connection server daemon (20.161.92.111:51520).
Mar 14 01:21:44.050102 sshd[1755]: Accepted publickey for core from 20.161.92.111 port 51520 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 01:21:44.052934 sshd[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 01:21:44.059650 systemd-logind[1484]: New session 11 of user core.
Mar 14 01:21:44.066826 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 14 01:21:44.360258 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 14 01:21:44.360803 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 01:21:44.673898 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 14 01:21:44.693915 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 01:21:44.938891 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 01:21:44.948349 (kubelet)[1782]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 01:21:44.963004 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 14 01:21:44.966917 (dockerd)[1787]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 14 01:21:45.017610 kubelet[1782]: E0314 01:21:45.016969 1782 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 01:21:45.022314 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 01:21:45.022608 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 01:21:45.427098 dockerd[1787]: time="2026-03-14T01:21:45.426931232Z" level=info msg="Starting up"
Mar 14 01:21:45.580242 dockerd[1787]: time="2026-03-14T01:21:45.579956282Z" level=info msg="Loading containers: start."
Mar 14 01:21:45.740711 kernel: Initializing XFRM netlink socket
Mar 14 01:21:45.844285 systemd-networkd[1419]: docker0: Link UP
Mar 14 01:21:45.863575 dockerd[1787]: time="2026-03-14T01:21:45.863458876Z" level=info msg="Loading containers: done."
Mar 14 01:21:45.884007 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2331156776-merged.mount: Deactivated successfully.
Mar 14 01:21:45.885242 dockerd[1787]: time="2026-03-14T01:21:45.885189775Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 14 01:21:45.885382 dockerd[1787]: time="2026-03-14T01:21:45.885352365Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 14 01:21:45.886356 dockerd[1787]: time="2026-03-14T01:21:45.886266524Z" level=info msg="Daemon has completed initialization"
Mar 14 01:21:45.927194 dockerd[1787]: time="2026-03-14T01:21:45.926328206Z" level=info msg="API listen on /run/docker.sock"
Mar 14 01:21:45.926695 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 14 01:21:46.631684 containerd[1502]: time="2026-03-14T01:21:46.631431157Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 14 01:21:47.440014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount728375407.mount: Deactivated successfully.
Mar 14 01:21:50.735371 containerd[1502]: time="2026-03-14T01:21:50.735269903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 01:21:50.736973 containerd[1502]: time="2026-03-14T01:21:50.736929872Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116194"
Mar 14 01:21:50.737704 containerd[1502]: time="2026-03-14T01:21:50.737542823Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 01:21:50.744771 containerd[1502]: time="2026-03-14T01:21:50.744704689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 01:21:50.746879 containerd[1502]: time="2026-03-14T01:21:50.746331638Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 4.114794941s"
Mar 14 01:21:50.746879 containerd[1502]: time="2026-03-14T01:21:50.746406023Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\""
Mar 14 01:21:50.747957 containerd[1502]: time="2026-03-14T01:21:50.747914826Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 14 01:21:52.230710 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 14 01:21:53.268057 containerd[1502]: time="2026-03-14T01:21:53.267947823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 01:21:53.276402 containerd[1502]: time="2026-03-14T01:21:53.275780838Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021818"
Mar 14 01:21:53.279508 containerd[1502]: time="2026-03-14T01:21:53.279445070Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 01:21:53.283587 containerd[1502]: time="2026-03-14T01:21:53.283352633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 01:21:53.286026 containerd[1502]: time="2026-03-14T01:21:53.285122198Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 2.537157411s"
Mar 14 01:21:53.286026 containerd[1502]: time="2026-03-14T01:21:53.285171508Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\""
Mar 14 01:21:53.286026 containerd[1502]: time="2026-03-14T01:21:53.285796944Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 14 01:21:55.129434 containerd[1502]: time="2026-03-14T01:21:55.129140631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 01:21:55.130776 containerd[1502]: time="2026-03-14T01:21:55.130703506Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162754"
Mar 14 01:21:55.131579 containerd[1502]: time="2026-03-14T01:21:55.131427908Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 01:21:55.135883 containerd[1502]: time="2026-03-14T01:21:55.135803954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 01:21:55.137813 containerd[1502]: time="2026-03-14T01:21:55.137500873Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 1.851658556s"
Mar 14 01:21:55.137813 containerd[1502]: time="2026-03-14T01:21:55.137581833Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\""
Mar 14 01:21:55.138783 containerd[1502]: time="2026-03-14T01:21:55.138719196Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 14 01:21:55.172898 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 14 01:21:55.181833 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 01:21:55.356343 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 01:21:55.368093 (kubelet)[2009]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 01:21:55.432110 kubelet[2009]: E0314 01:21:55.427604 2009 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 01:21:55.429359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 01:21:55.429630 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 01:21:56.818387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1916165733.mount: Deactivated successfully.
Mar 14 01:21:57.575199 containerd[1502]: time="2026-03-14T01:21:57.573932115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 01:21:57.575199 containerd[1502]: time="2026-03-14T01:21:57.575126345Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828655"
Mar 14 01:21:57.576202 containerd[1502]: time="2026-03-14T01:21:57.576120837Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 01:21:57.578778 containerd[1502]: time="2026-03-14T01:21:57.578718682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 01:21:57.580198 containerd[1502]: time="2026-03-14T01:21:57.580155425Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 2.441380298s"
Mar 14 01:21:57.580333 containerd[1502]: time="2026-03-14T01:21:57.580307558Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\""
Mar 14 01:21:57.581509 containerd[1502]: time="2026-03-14T01:21:57.581468643Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 14 01:21:58.197542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount534272840.mount: Deactivated successfully.
Mar 14 01:21:59.930504 containerd[1502]: time="2026-03-14T01:21:59.930168455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 01:21:59.931837 containerd[1502]: time="2026-03-14T01:21:59.931782957Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246"
Mar 14 01:21:59.933406 containerd[1502]: time="2026-03-14T01:21:59.932621269Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 01:21:59.937590 containerd[1502]: time="2026-03-14T01:21:59.936938281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 01:21:59.938946 containerd[1502]: time="2026-03-14T01:21:59.938733362Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.357218292s"
Mar 14 01:21:59.938946 containerd[1502]: time="2026-03-14T01:21:59.938799260Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Mar 14 01:21:59.939854 containerd[1502]: time="2026-03-14T01:21:59.939803204Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 14 01:22:00.838553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3515937825.mount: Deactivated successfully.
Mar 14 01:22:00.844788 containerd[1502]: time="2026-03-14T01:22:00.844444419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 01:22:00.845710 containerd[1502]: time="2026-03-14T01:22:00.845655777Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Mar 14 01:22:00.846585 containerd[1502]: time="2026-03-14T01:22:00.846193167Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 01:22:00.849328 containerd[1502]: time="2026-03-14T01:22:00.849259256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 01:22:00.850716 containerd[1502]: time="2026-03-14T01:22:00.850516412Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 910.659181ms"
Mar 14 01:22:00.850716 containerd[1502]: time="2026-03-14T01:22:00.850578532Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Mar 14 01:22:00.851886 containerd[1502]: time="2026-03-14T01:22:00.851835671Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 14 01:22:01.456030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4153034415.mount: Deactivated successfully.
Mar 14 01:22:04.267476 containerd[1502]: time="2026-03-14T01:22:04.267365504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 01:22:04.269435 containerd[1502]: time="2026-03-14T01:22:04.269124840Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718848"
Mar 14 01:22:04.270346 containerd[1502]: time="2026-03-14T01:22:04.270272202Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 01:22:04.278597 containerd[1502]: time="2026-03-14T01:22:04.276695147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 01:22:04.278755 containerd[1502]: time="2026-03-14T01:22:04.278568590Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 3.426581385s"
Mar 14 01:22:04.278910 containerd[1502]: time="2026-03-14T01:22:04.278864566Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Mar 14 01:22:05.672896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 14 01:22:05.682645 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 01:22:05.721570 update_engine[1485]: I20260314 01:22:05.720733 1485 update_attempter.cc:509] Updating boot flags...
Mar 14 01:22:05.999794 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 01:22:06.032992 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2177)
Mar 14 01:22:06.036055 (kubelet)[2181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 01:22:06.162579 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2180)
Mar 14 01:22:06.294635 kubelet[2181]: E0314 01:22:06.291210 2181 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 01:22:06.300505 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 01:22:06.301086 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 01:22:09.265136 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 01:22:09.277038 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 01:22:09.319403 systemd[1]: Reloading requested from client PID 2199 ('systemctl') (unit session-11.scope)...
Mar 14 01:22:09.319448 systemd[1]: Reloading...
Mar 14 01:22:09.526955 zram_generator::config[2238]: No configuration found.
Mar 14 01:22:09.682225 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 01:22:09.799263 systemd[1]: Reloading finished in 479 ms.
Mar 14 01:22:09.870381 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 01:22:09.881367 (kubelet)[2295]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 14 01:22:09.882066 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 01:22:09.882738 systemd[1]: kubelet.service: Deactivated successfully.
Mar 14 01:22:09.883076 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 01:22:09.895042 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 01:22:10.052382 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 01:22:10.071240 (kubelet)[2308]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 14 01:22:10.174326 kubelet[2308]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 01:22:10.175577 kubelet[2308]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 14 01:22:10.175577 kubelet[2308]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 01:22:10.175577 kubelet[2308]: I0314 01:22:10.175012 2308 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 14 01:22:10.573478 kubelet[2308]: I0314 01:22:10.573429 2308 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 14 01:22:10.574598 kubelet[2308]: I0314 01:22:10.573698 2308 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 14 01:22:10.574598 kubelet[2308]: I0314 01:22:10.574017 2308 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 14 01:22:10.610581 kubelet[2308]: I0314 01:22:10.609446 2308 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 14 01:22:10.611746 kubelet[2308]: E0314 01:22:10.611707 2308 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.8.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.8.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 14 01:22:10.628772 kubelet[2308]: E0314 01:22:10.628720 2308 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 14 01:22:10.628772 kubelet[2308]: I0314 01:22:10.628771 2308 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 14 01:22:10.636590 kubelet[2308]: I0314 01:22:10.636396 2308 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 14 01:22:10.642021 kubelet[2308]: I0314 01:22:10.641939 2308 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 01:22:10.645126 kubelet[2308]: I0314 01:22:10.642015 2308 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-ouubu.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 14 01:22:10.645464 kubelet[2308]: I0314 01:22:10.645137 2308 topology_manager.go:138] "Creating topology manager with none policy"
Mar 14 01:22:10.645464 kubelet[2308]: I0314 01:22:10.645156 2308 container_manager_linux.go:303] "Creating device plugin manager"
Mar 14 01:22:10.645464 kubelet[2308]: I0314 01:22:10.645412 2308 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 01:22:10.651130 kubelet[2308]: I0314 01:22:10.651096 2308 kubelet.go:480] "Attempting to sync node with API server"
Mar 14 01:22:10.651130 kubelet[2308]: I0314 01:22:10.651126 2308 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 01:22:10.651339 kubelet[2308]: I0314 01:22:10.651186 2308 kubelet.go:386] "Adding apiserver pod source"
Mar 14 01:22:10.653536 kubelet[2308]: I0314 01:22:10.653387 2308 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 14 01:22:10.660575 kubelet[2308]: E0314 01:22:10.659917 2308 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.8.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-ouubu.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.8.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 14 01:22:10.660575 kubelet[2308]: E0314 01:22:10.660443 2308 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.8.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.8.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 14 01:22:10.660754 kubelet[2308]: I0314 01:22:10.660727 2308 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 14 01:22:10.661610 kubelet[2308]: I0314 01:22:10.661587 2308 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 14 01:22:10.662608 kubelet[2308]: W0314 01:22:10.662584 2308 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 14 01:22:10.671289 kubelet[2308]: I0314 01:22:10.671259 2308 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 14 01:22:10.671474 kubelet[2308]: I0314 01:22:10.671454 2308 server.go:1289] "Started kubelet"
Mar 14 01:22:10.672852 kubelet[2308]: I0314 01:22:10.672369 2308 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 14 01:22:10.674370 kubelet[2308]: I0314 01:22:10.674295 2308 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 14 01:22:10.675144 kubelet[2308]: I0314 01:22:10.675115 2308 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 14 01:22:10.675769 kubelet[2308]: I0314 01:22:10.675738 2308 server.go:317] "Adding debug handlers to kubelet server"
Mar 14 01:22:10.678755 kubelet[2308]: I0314 01:22:10.678722 2308 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 14 01:22:10.681599 kubelet[2308]: E0314 01:22:10.679637 2308 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.8.14:6443/api/v1/namespaces/default/events\": dial tcp 10.230.8.14:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-ouubu.gb1.brightbox.com.189c9096cf1d09d7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-ouubu.gb1.brightbox.com,UID:srv-ouubu.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-ouubu.gb1.brightbox.com,},FirstTimestamp:2026-03-14 01:22:10.671413719 +0000 UTC m=+0.593557025,LastTimestamp:2026-03-14 01:22:10.671413719 +0000 UTC m=+0.593557025,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-ouubu.gb1.brightbox.com,}"
Mar 14 01:22:10.681599 kubelet[2308]: I0314 01:22:10.681450 2308 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 14 01:22:10.690224 kubelet[2308]: I0314 01:22:10.689726 2308 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 14 01:22:10.690224 kubelet[2308]: E0314 01:22:10.690126 2308 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-ouubu.gb1.brightbox.com\" not found"
Mar 14 01:22:10.692070 kubelet[2308]: I0314 01:22:10.692041 2308 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 14 01:22:10.693880 kubelet[2308]: I0314 01:22:10.693855 2308 reconciler.go:26] "Reconciler: start to sync state"
Mar 14 01:22:10.694528 kubelet[2308]: E0314 01:22:10.694488 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.8.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-ouubu.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.8.14:6443: connect: connection refused" interval="200ms"
Mar 14 01:22:10.694754 kubelet[2308]: E0314 01:22:10.694700 2308 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.8.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.8.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 14 01:22:10.697309 kubelet[2308]: I0314 01:22:10.696217 2308 factory.go:223] Registration of the systemd container factory successfully
Mar 14 01:22:10.697309 kubelet[2308]: I0314 01:22:10.696382 2308 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock:
connect: no such file or directory Mar 14 01:22:10.700253 kubelet[2308]: E0314 01:22:10.700226 2308 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 14 01:22:10.700896 kubelet[2308]: I0314 01:22:10.700310 2308 factory.go:223] Registration of the containerd container factory successfully Mar 14 01:22:10.729207 kubelet[2308]: I0314 01:22:10.729159 2308 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 14 01:22:10.731147 kubelet[2308]: I0314 01:22:10.731122 2308 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 14 01:22:10.731310 kubelet[2308]: I0314 01:22:10.731289 2308 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 14 01:22:10.731452 kubelet[2308]: I0314 01:22:10.731431 2308 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 14 01:22:10.731592 kubelet[2308]: I0314 01:22:10.731545 2308 kubelet.go:2436] "Starting kubelet main sync loop" Mar 14 01:22:10.731789 kubelet[2308]: E0314 01:22:10.731740 2308 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 01:22:10.739387 kubelet[2308]: E0314 01:22:10.739338 2308 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.8.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.8.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 14 01:22:10.740818 kubelet[2308]: I0314 01:22:10.740791 2308 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 14 01:22:10.740818 kubelet[2308]: I0314 01:22:10.740815 2308 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 14 
01:22:10.740957 kubelet[2308]: I0314 01:22:10.740846 2308 state_mem.go:36] "Initialized new in-memory state store" Mar 14 01:22:10.755843 kubelet[2308]: I0314 01:22:10.755799 2308 policy_none.go:49] "None policy: Start" Mar 14 01:22:10.755843 kubelet[2308]: I0314 01:22:10.755841 2308 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 14 01:22:10.756007 kubelet[2308]: I0314 01:22:10.755871 2308 state_mem.go:35] "Initializing new in-memory state store" Mar 14 01:22:10.764326 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 14 01:22:10.784529 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 14 01:22:10.790624 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 14 01:22:10.791785 kubelet[2308]: E0314 01:22:10.791342 2308 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-ouubu.gb1.brightbox.com\" not found" Mar 14 01:22:10.800191 kubelet[2308]: E0314 01:22:10.800152 2308 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 01:22:10.800523 kubelet[2308]: I0314 01:22:10.800489 2308 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 14 01:22:10.800642 kubelet[2308]: I0314 01:22:10.800528 2308 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 01:22:10.801397 kubelet[2308]: I0314 01:22:10.801328 2308 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 14 01:22:10.804319 kubelet[2308]: E0314 01:22:10.803459 2308 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 14 01:22:10.804319 kubelet[2308]: E0314 01:22:10.803535 2308 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-ouubu.gb1.brightbox.com\" not found" Mar 14 01:22:10.849430 systemd[1]: Created slice kubepods-burstable-pod1774c7dc2fd210d1532fc1b910310a53.slice - libcontainer container kubepods-burstable-pod1774c7dc2fd210d1532fc1b910310a53.slice. Mar 14 01:22:10.866249 kubelet[2308]: E0314 01:22:10.866161 2308 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-ouubu.gb1.brightbox.com\" not found" node="srv-ouubu.gb1.brightbox.com" Mar 14 01:22:10.871999 systemd[1]: Created slice kubepods-burstable-pod0b0afe2e00100adc6981843f6b2e8edd.slice - libcontainer container kubepods-burstable-pod0b0afe2e00100adc6981843f6b2e8edd.slice. Mar 14 01:22:10.887079 kubelet[2308]: E0314 01:22:10.886718 2308 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-ouubu.gb1.brightbox.com\" not found" node="srv-ouubu.gb1.brightbox.com" Mar 14 01:22:10.891720 systemd[1]: Created slice kubepods-burstable-pod5ce0c6d9d63ff4e59f5b7b9658d1e664.slice - libcontainer container kubepods-burstable-pod5ce0c6d9d63ff4e59f5b7b9658d1e664.slice. 
Mar 14 01:22:10.895607 kubelet[2308]: I0314 01:22:10.895498 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1774c7dc2fd210d1532fc1b910310a53-k8s-certs\") pod \"kube-apiserver-srv-ouubu.gb1.brightbox.com\" (UID: \"1774c7dc2fd210d1532fc1b910310a53\") " pod="kube-system/kube-apiserver-srv-ouubu.gb1.brightbox.com" Mar 14 01:22:10.895607 kubelet[2308]: I0314 01:22:10.895571 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1774c7dc2fd210d1532fc1b910310a53-usr-share-ca-certificates\") pod \"kube-apiserver-srv-ouubu.gb1.brightbox.com\" (UID: \"1774c7dc2fd210d1532fc1b910310a53\") " pod="kube-system/kube-apiserver-srv-ouubu.gb1.brightbox.com" Mar 14 01:22:10.895755 kubelet[2308]: I0314 01:22:10.895613 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0b0afe2e00100adc6981843f6b2e8edd-flexvolume-dir\") pod \"kube-controller-manager-srv-ouubu.gb1.brightbox.com\" (UID: \"0b0afe2e00100adc6981843f6b2e8edd\") " pod="kube-system/kube-controller-manager-srv-ouubu.gb1.brightbox.com" Mar 14 01:22:10.895755 kubelet[2308]: I0314 01:22:10.895641 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b0afe2e00100adc6981843f6b2e8edd-kubeconfig\") pod \"kube-controller-manager-srv-ouubu.gb1.brightbox.com\" (UID: \"0b0afe2e00100adc6981843f6b2e8edd\") " pod="kube-system/kube-controller-manager-srv-ouubu.gb1.brightbox.com" Mar 14 01:22:10.895755 kubelet[2308]: I0314 01:22:10.895668 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/0b0afe2e00100adc6981843f6b2e8edd-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-ouubu.gb1.brightbox.com\" (UID: \"0b0afe2e00100adc6981843f6b2e8edd\") " pod="kube-system/kube-controller-manager-srv-ouubu.gb1.brightbox.com" Mar 14 01:22:10.895755 kubelet[2308]: I0314 01:22:10.895691 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1774c7dc2fd210d1532fc1b910310a53-ca-certs\") pod \"kube-apiserver-srv-ouubu.gb1.brightbox.com\" (UID: \"1774c7dc2fd210d1532fc1b910310a53\") " pod="kube-system/kube-apiserver-srv-ouubu.gb1.brightbox.com" Mar 14 01:22:10.895755 kubelet[2308]: I0314 01:22:10.895719 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b0afe2e00100adc6981843f6b2e8edd-ca-certs\") pod \"kube-controller-manager-srv-ouubu.gb1.brightbox.com\" (UID: \"0b0afe2e00100adc6981843f6b2e8edd\") " pod="kube-system/kube-controller-manager-srv-ouubu.gb1.brightbox.com" Mar 14 01:22:10.896084 kubelet[2308]: I0314 01:22:10.895746 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b0afe2e00100adc6981843f6b2e8edd-k8s-certs\") pod \"kube-controller-manager-srv-ouubu.gb1.brightbox.com\" (UID: \"0b0afe2e00100adc6981843f6b2e8edd\") " pod="kube-system/kube-controller-manager-srv-ouubu.gb1.brightbox.com" Mar 14 01:22:10.896084 kubelet[2308]: I0314 01:22:10.895786 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ce0c6d9d63ff4e59f5b7b9658d1e664-kubeconfig\") pod \"kube-scheduler-srv-ouubu.gb1.brightbox.com\" (UID: \"5ce0c6d9d63ff4e59f5b7b9658d1e664\") " pod="kube-system/kube-scheduler-srv-ouubu.gb1.brightbox.com" Mar 14 01:22:10.896189 
kubelet[2308]: E0314 01:22:10.896104 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.8.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-ouubu.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.8.14:6443: connect: connection refused" interval="400ms" Mar 14 01:22:10.896432 kubelet[2308]: E0314 01:22:10.896391 2308 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-ouubu.gb1.brightbox.com\" not found" node="srv-ouubu.gb1.brightbox.com" Mar 14 01:22:10.904339 kubelet[2308]: I0314 01:22:10.904287 2308 kubelet_node_status.go:75] "Attempting to register node" node="srv-ouubu.gb1.brightbox.com" Mar 14 01:22:10.904800 kubelet[2308]: E0314 01:22:10.904768 2308 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.8.14:6443/api/v1/nodes\": dial tcp 10.230.8.14:6443: connect: connection refused" node="srv-ouubu.gb1.brightbox.com" Mar 14 01:22:11.108597 kubelet[2308]: I0314 01:22:11.108432 2308 kubelet_node_status.go:75] "Attempting to register node" node="srv-ouubu.gb1.brightbox.com" Mar 14 01:22:11.109517 kubelet[2308]: E0314 01:22:11.108924 2308 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.8.14:6443/api/v1/nodes\": dial tcp 10.230.8.14:6443: connect: connection refused" node="srv-ouubu.gb1.brightbox.com" Mar 14 01:22:11.168533 containerd[1502]: time="2026-03-14T01:22:11.168481754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-ouubu.gb1.brightbox.com,Uid:1774c7dc2fd210d1532fc1b910310a53,Namespace:kube-system,Attempt:0,}" Mar 14 01:22:11.195292 containerd[1502]: time="2026-03-14T01:22:11.195250184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-ouubu.gb1.brightbox.com,Uid:0b0afe2e00100adc6981843f6b2e8edd,Namespace:kube-system,Attempt:0,}" Mar 14 01:22:11.198072 containerd[1502]: 
time="2026-03-14T01:22:11.198037188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-ouubu.gb1.brightbox.com,Uid:5ce0c6d9d63ff4e59f5b7b9658d1e664,Namespace:kube-system,Attempt:0,}" Mar 14 01:22:11.297641 kubelet[2308]: E0314 01:22:11.297559 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.8.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-ouubu.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.8.14:6443: connect: connection refused" interval="800ms" Mar 14 01:22:11.512163 kubelet[2308]: I0314 01:22:11.512069 2308 kubelet_node_status.go:75] "Attempting to register node" node="srv-ouubu.gb1.brightbox.com" Mar 14 01:22:11.512524 kubelet[2308]: E0314 01:22:11.512469 2308 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.8.14:6443/api/v1/nodes\": dial tcp 10.230.8.14:6443: connect: connection refused" node="srv-ouubu.gb1.brightbox.com" Mar 14 01:22:11.538437 kubelet[2308]: E0314 01:22:11.538347 2308 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.8.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.8.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 14 01:22:11.839079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1690637755.mount: Deactivated successfully. 
Mar 14 01:22:11.844893 containerd[1502]: time="2026-03-14T01:22:11.844823879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 01:22:11.847050 containerd[1502]: time="2026-03-14T01:22:11.846979276Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Mar 14 01:22:11.849620 containerd[1502]: time="2026-03-14T01:22:11.849572415Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 01:22:11.851592 containerd[1502]: time="2026-03-14T01:22:11.851350290Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 01:22:11.851592 containerd[1502]: time="2026-03-14T01:22:11.851452887Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 01:22:11.852612 containerd[1502]: time="2026-03-14T01:22:11.852578238Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 01:22:11.853092 containerd[1502]: time="2026-03-14T01:22:11.853018478Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 01:22:11.857858 containerd[1502]: time="2026-03-14T01:22:11.857788115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 01:22:11.861091 
containerd[1502]: time="2026-03-14T01:22:11.860827775Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 692.210274ms" Mar 14 01:22:11.863621 containerd[1502]: time="2026-03-14T01:22:11.863006589Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 665.621717ms" Mar 14 01:22:11.865734 containerd[1502]: time="2026-03-14T01:22:11.865699036Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 667.595923ms" Mar 14 01:22:11.932829 kubelet[2308]: E0314 01:22:11.930439 2308 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.8.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.8.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 14 01:22:11.996596 kubelet[2308]: E0314 01:22:11.996483 2308 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.8.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-ouubu.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.8.14:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 14 01:22:12.031666 containerd[1502]: time="2026-03-14T01:22:12.031284652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 01:22:12.031666 containerd[1502]: time="2026-03-14T01:22:12.031358094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 01:22:12.031666 containerd[1502]: time="2026-03-14T01:22:12.031388572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 01:22:12.031666 containerd[1502]: time="2026-03-14T01:22:12.031491892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 01:22:12.032071 containerd[1502]: time="2026-03-14T01:22:12.031192301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 01:22:12.032071 containerd[1502]: time="2026-03-14T01:22:12.031314326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 01:22:12.032071 containerd[1502]: time="2026-03-14T01:22:12.031343891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 01:22:12.032071 containerd[1502]: time="2026-03-14T01:22:12.031481096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 01:22:12.036750 containerd[1502]: time="2026-03-14T01:22:12.035736971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 01:22:12.037055 containerd[1502]: time="2026-03-14T01:22:12.036989175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 01:22:12.037245 containerd[1502]: time="2026-03-14T01:22:12.037194414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 01:22:12.039128 containerd[1502]: time="2026-03-14T01:22:12.039067229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 01:22:12.073328 systemd[1]: Started cri-containerd-83d1f685ecbb6be9dafff722cdeaebc6d85fb64a13b8fa05c807dfa74ab54f98.scope - libcontainer container 83d1f685ecbb6be9dafff722cdeaebc6d85fb64a13b8fa05c807dfa74ab54f98. Mar 14 01:22:12.085809 systemd[1]: Started cri-containerd-1bf2f6d5c9744394f817ee6b8d4c15351fe23bd7578b63bec85f748ce08ecf03.scope - libcontainer container 1bf2f6d5c9744394f817ee6b8d4c15351fe23bd7578b63bec85f748ce08ecf03. Mar 14 01:22:12.092036 systemd[1]: Started cri-containerd-47fe2f2f6186d94be1e2fcc4794e6c56ad4a8be7d6c1a4c3da1c8a543c78a5be.scope - libcontainer container 47fe2f2f6186d94be1e2fcc4794e6c56ad4a8be7d6c1a4c3da1c8a543c78a5be. 
Mar 14 01:22:12.099584 kubelet[2308]: E0314 01:22:12.099369 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.8.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-ouubu.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.8.14:6443: connect: connection refused" interval="1.6s" Mar 14 01:22:12.139958 kubelet[2308]: E0314 01:22:12.139338 2308 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.8.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.8.14:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 14 01:22:12.198861 containerd[1502]: time="2026-03-14T01:22:12.198614943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-ouubu.gb1.brightbox.com,Uid:0b0afe2e00100adc6981843f6b2e8edd,Namespace:kube-system,Attempt:0,} returns sandbox id \"47fe2f2f6186d94be1e2fcc4794e6c56ad4a8be7d6c1a4c3da1c8a543c78a5be\"" Mar 14 01:22:12.215264 containerd[1502]: time="2026-03-14T01:22:12.214911592Z" level=info msg="CreateContainer within sandbox \"47fe2f2f6186d94be1e2fcc4794e6c56ad4a8be7d6c1a4c3da1c8a543c78a5be\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 14 01:22:12.216463 containerd[1502]: time="2026-03-14T01:22:12.216057155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-ouubu.gb1.brightbox.com,Uid:1774c7dc2fd210d1532fc1b910310a53,Namespace:kube-system,Attempt:0,} returns sandbox id \"1bf2f6d5c9744394f817ee6b8d4c15351fe23bd7578b63bec85f748ce08ecf03\"" Mar 14 01:22:12.217273 containerd[1502]: time="2026-03-14T01:22:12.217207703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-ouubu.gb1.brightbox.com,Uid:5ce0c6d9d63ff4e59f5b7b9658d1e664,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"83d1f685ecbb6be9dafff722cdeaebc6d85fb64a13b8fa05c807dfa74ab54f98\"" Mar 14 01:22:12.232734 containerd[1502]: time="2026-03-14T01:22:12.232496714Z" level=info msg="CreateContainer within sandbox \"1bf2f6d5c9744394f817ee6b8d4c15351fe23bd7578b63bec85f748ce08ecf03\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 14 01:22:12.233861 containerd[1502]: time="2026-03-14T01:22:12.233821564Z" level=info msg="CreateContainer within sandbox \"83d1f685ecbb6be9dafff722cdeaebc6d85fb64a13b8fa05c807dfa74ab54f98\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 14 01:22:12.252067 containerd[1502]: time="2026-03-14T01:22:12.251996412Z" level=info msg="CreateContainer within sandbox \"1bf2f6d5c9744394f817ee6b8d4c15351fe23bd7578b63bec85f748ce08ecf03\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ea50871d267ea85d4a8a84debfb6d66f4cd8b3e945e2df58535922c11c1d8f4f\"" Mar 14 01:22:12.253347 containerd[1502]: time="2026-03-14T01:22:12.253273950Z" level=info msg="StartContainer for \"ea50871d267ea85d4a8a84debfb6d66f4cd8b3e945e2df58535922c11c1d8f4f\"" Mar 14 01:22:12.255046 containerd[1502]: time="2026-03-14T01:22:12.254862909Z" level=info msg="CreateContainer within sandbox \"83d1f685ecbb6be9dafff722cdeaebc6d85fb64a13b8fa05c807dfa74ab54f98\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7fc1c349fc502c641d754ca698d76aba31bbce73c50c0a1cd83c825649353634\"" Mar 14 01:22:12.255415 containerd[1502]: time="2026-03-14T01:22:12.255373468Z" level=info msg="CreateContainer within sandbox \"47fe2f2f6186d94be1e2fcc4794e6c56ad4a8be7d6c1a4c3da1c8a543c78a5be\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"42f838b381d894dcb3de24adcd1fc3d171040acdab7588950e4d850c72ffac7c\"" Mar 14 01:22:12.255826 containerd[1502]: time="2026-03-14T01:22:12.255711720Z" level=info msg="StartContainer for \"7fc1c349fc502c641d754ca698d76aba31bbce73c50c0a1cd83c825649353634\"" Mar 14 
01:22:12.257078 containerd[1502]: time="2026-03-14T01:22:12.257047485Z" level=info msg="StartContainer for \"42f838b381d894dcb3de24adcd1fc3d171040acdab7588950e4d850c72ffac7c\"" Mar 14 01:22:12.307120 systemd[1]: Started cri-containerd-ea50871d267ea85d4a8a84debfb6d66f4cd8b3e945e2df58535922c11c1d8f4f.scope - libcontainer container ea50871d267ea85d4a8a84debfb6d66f4cd8b3e945e2df58535922c11c1d8f4f. Mar 14 01:22:12.318226 systemd[1]: Started cri-containerd-42f838b381d894dcb3de24adcd1fc3d171040acdab7588950e4d850c72ffac7c.scope - libcontainer container 42f838b381d894dcb3de24adcd1fc3d171040acdab7588950e4d850c72ffac7c. Mar 14 01:22:12.321217 kubelet[2308]: I0314 01:22:12.320584 2308 kubelet_node_status.go:75] "Attempting to register node" node="srv-ouubu.gb1.brightbox.com" Mar 14 01:22:12.322248 kubelet[2308]: E0314 01:22:12.321685 2308 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.8.14:6443/api/v1/nodes\": dial tcp 10.230.8.14:6443: connect: connection refused" node="srv-ouubu.gb1.brightbox.com" Mar 14 01:22:12.343822 systemd[1]: Started cri-containerd-7fc1c349fc502c641d754ca698d76aba31bbce73c50c0a1cd83c825649353634.scope - libcontainer container 7fc1c349fc502c641d754ca698d76aba31bbce73c50c0a1cd83c825649353634. 
Mar 14 01:22:12.434589 containerd[1502]: time="2026-03-14T01:22:12.434070213Z" level=info msg="StartContainer for \"42f838b381d894dcb3de24adcd1fc3d171040acdab7588950e4d850c72ffac7c\" returns successfully"
Mar 14 01:22:12.435198 containerd[1502]: time="2026-03-14T01:22:12.435001104Z" level=info msg="StartContainer for \"ea50871d267ea85d4a8a84debfb6d66f4cd8b3e945e2df58535922c11c1d8f4f\" returns successfully"
Mar 14 01:22:12.470624 containerd[1502]: time="2026-03-14T01:22:12.470302113Z" level=info msg="StartContainer for \"7fc1c349fc502c641d754ca698d76aba31bbce73c50c0a1cd83c825649353634\" returns successfully"
Mar 14 01:22:12.754952 kubelet[2308]: E0314 01:22:12.754885 2308 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-ouubu.gb1.brightbox.com\" not found" node="srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:12.757885 kubelet[2308]: E0314 01:22:12.757848 2308 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-ouubu.gb1.brightbox.com\" not found" node="srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:12.762276 kubelet[2308]: E0314 01:22:12.762251 2308 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-ouubu.gb1.brightbox.com\" not found" node="srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:12.772473 kubelet[2308]: E0314 01:22:12.772432 2308 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.8.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.8.14:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 14 01:22:13.769999 kubelet[2308]: E0314 01:22:13.769946 2308 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-ouubu.gb1.brightbox.com\" not found" node="srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:13.770542 kubelet[2308]: E0314 01:22:13.770521 2308 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-ouubu.gb1.brightbox.com\" not found" node="srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:13.772915 kubelet[2308]: E0314 01:22:13.772884 2308 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-ouubu.gb1.brightbox.com\" not found" node="srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:13.924441 kubelet[2308]: I0314 01:22:13.924402 2308 kubelet_node_status.go:75] "Attempting to register node" node="srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:15.992755 kubelet[2308]: E0314 01:22:15.992716 2308 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-ouubu.gb1.brightbox.com\" not found" node="srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:17.108844 kubelet[2308]: E0314 01:22:17.108789 2308 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-ouubu.gb1.brightbox.com\" not found" node="srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:17.159185 kubelet[2308]: E0314 01:22:17.159045 2308 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-ouubu.gb1.brightbox.com.189c9096cf1d09d7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-ouubu.gb1.brightbox.com,UID:srv-ouubu.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-ouubu.gb1.brightbox.com,},FirstTimestamp:2026-03-14 01:22:10.671413719 +0000 UTC m=+0.593557025,LastTimestamp:2026-03-14 01:22:10.671413719 +0000 UTC m=+0.593557025,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-ouubu.gb1.brightbox.com,}"
Mar 14 01:22:17.191136 kubelet[2308]: I0314 01:22:17.191077 2308 kubelet_node_status.go:78] "Successfully registered node" node="srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:17.191136 kubelet[2308]: E0314 01:22:17.191136 2308 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-ouubu.gb1.brightbox.com\": node \"srv-ouubu.gb1.brightbox.com\" not found"
Mar 14 01:22:17.291126 kubelet[2308]: I0314 01:22:17.291057 2308 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:17.302773 kubelet[2308]: E0314 01:22:17.302705 2308 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-ouubu.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:17.302773 kubelet[2308]: I0314 01:22:17.302774 2308 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:17.306793 kubelet[2308]: E0314 01:22:17.305655 2308 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-ouubu.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:17.306793 kubelet[2308]: I0314 01:22:17.305695 2308 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:17.309527 kubelet[2308]: E0314 01:22:17.309486 2308 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-ouubu.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:17.663894 kubelet[2308]: I0314 01:22:17.663585 2308 apiserver.go:52] "Watching apiserver"
Mar 14 01:22:17.692732 kubelet[2308]: I0314 01:22:17.692648 2308 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 14 01:22:18.888029 kubelet[2308]: I0314 01:22:18.887434 2308 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:18.901945 kubelet[2308]: I0314 01:22:18.901315 2308 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 14 01:22:19.619655 systemd[1]: Reloading requested from client PID 2591 ('systemctl') (unit session-11.scope)...
Mar 14 01:22:19.620211 systemd[1]: Reloading...
Mar 14 01:22:19.746718 zram_generator::config[2639]: No configuration found.
Mar 14 01:22:19.924138 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 01:22:20.062410 systemd[1]: Reloading finished in 441 ms.
Mar 14 01:22:20.137024 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 01:22:20.152532 systemd[1]: kubelet.service: Deactivated successfully.
Mar 14 01:22:20.153092 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 01:22:20.153364 systemd[1]: kubelet.service: Consumed 1.111s CPU time, 132.3M memory peak, 0B memory swap peak.
Mar 14 01:22:20.166042 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 01:22:20.484067 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 01:22:20.494227 (kubelet)[2694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 14 01:22:20.582851 kubelet[2694]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 01:22:20.582851 kubelet[2694]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 14 01:22:20.582851 kubelet[2694]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 01:22:20.584512 kubelet[2694]: I0314 01:22:20.582837 2694 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 14 01:22:20.604871 kubelet[2694]: I0314 01:22:20.604823 2694 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 14 01:22:20.604871 kubelet[2694]: I0314 01:22:20.604864 2694 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 14 01:22:20.607325 kubelet[2694]: I0314 01:22:20.606810 2694 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 14 01:22:20.611789 kubelet[2694]: I0314 01:22:20.611657 2694 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 14 01:22:20.635600 kubelet[2694]: I0314 01:22:20.635313 2694 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 14 01:22:20.647123 kubelet[2694]: E0314 01:22:20.647030 2694 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 14 01:22:20.647123 kubelet[2694]: I0314 01:22:20.647117 2694 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 14 01:22:20.653363 sudo[2709]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 14 01:22:20.653997 sudo[2709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 14 01:22:20.658452 kubelet[2694]: I0314 01:22:20.658429 2694 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 14 01:22:20.659399 kubelet[2694]: I0314 01:22:20.659050 2694 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 01:22:20.659399 kubelet[2694]: I0314 01:22:20.659099 2694 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-ouubu.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 14 01:22:20.665979 kubelet[2694]: I0314 01:22:20.663490 2694 topology_manager.go:138] "Creating topology manager with none policy"
Mar 14 01:22:20.666108 kubelet[2694]: I0314 01:22:20.666089 2694 container_manager_linux.go:303] "Creating device plugin manager"
Mar 14 01:22:20.666690 kubelet[2694]: I0314 01:22:20.666259 2694 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 01:22:20.666850 kubelet[2694]: I0314 01:22:20.666829 2694 kubelet.go:480] "Attempting to sync node with API server"
Mar 14 01:22:20.668079 kubelet[2694]: I0314 01:22:20.668059 2694 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 01:22:20.669483 kubelet[2694]: I0314 01:22:20.669302 2694 kubelet.go:386] "Adding apiserver pod source"
Mar 14 01:22:20.673405 kubelet[2694]: I0314 01:22:20.670865 2694 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 14 01:22:20.679580 kubelet[2694]: I0314 01:22:20.678035 2694 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 14 01:22:20.680806 kubelet[2694]: I0314 01:22:20.680779 2694 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 14 01:22:20.706449 kubelet[2694]: I0314 01:22:20.704712 2694 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 14 01:22:20.706449 kubelet[2694]: I0314 01:22:20.704781 2694 server.go:1289] "Started kubelet"
Mar 14 01:22:20.711214 kubelet[2694]: I0314 01:22:20.708942 2694 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 14 01:22:20.712744 kubelet[2694]: I0314 01:22:20.711986 2694 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 14 01:22:20.716788 kubelet[2694]: I0314 01:22:20.716657 2694 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 14 01:22:20.722028 kubelet[2694]: I0314 01:22:20.712341 2694 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 14 01:22:20.725670 kubelet[2694]: I0314 01:22:20.723502 2694 server.go:317] "Adding debug handlers to kubelet server"
Mar 14 01:22:20.732212 kubelet[2694]: I0314 01:22:20.717107 2694 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 14 01:22:20.735720 kubelet[2694]: I0314 01:22:20.735470 2694 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 14 01:22:20.738501 kubelet[2694]: I0314 01:22:20.737916 2694 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 14 01:22:20.740488 kubelet[2694]: I0314 01:22:20.740077 2694 reconciler.go:26] "Reconciler: start to sync state"
Mar 14 01:22:20.748620 kubelet[2694]: I0314 01:22:20.746088 2694 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 14 01:22:20.749347 kubelet[2694]: I0314 01:22:20.749028 2694 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 14 01:22:20.749700 kubelet[2694]: I0314 01:22:20.749565 2694 factory.go:223] Registration of the containerd container factory successfully
Mar 14 01:22:20.749700 kubelet[2694]: I0314 01:22:20.749588 2694 factory.go:223] Registration of the systemd container factory successfully
Mar 14 01:22:20.755611 kubelet[2694]: I0314 01:22:20.754867 2694 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 14 01:22:20.755611 kubelet[2694]: I0314 01:22:20.754895 2694 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 14 01:22:20.755611 kubelet[2694]: I0314 01:22:20.754953 2694 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 14 01:22:20.755611 kubelet[2694]: I0314 01:22:20.754978 2694 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 14 01:22:20.755611 kubelet[2694]: E0314 01:22:20.755042 2694 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 14 01:22:20.857847 kubelet[2694]: E0314 01:22:20.855083 2694 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 14 01:22:20.859522 kubelet[2694]: I0314 01:22:20.858421 2694 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 14 01:22:20.859522 kubelet[2694]: I0314 01:22:20.858476 2694 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 14 01:22:20.859522 kubelet[2694]: I0314 01:22:20.858504 2694 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 01:22:20.859522 kubelet[2694]: I0314 01:22:20.858816 2694 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 14 01:22:20.859522 kubelet[2694]: I0314 01:22:20.858835 2694 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 14 01:22:20.859522 kubelet[2694]: I0314 01:22:20.858869 2694 policy_none.go:49] "None policy: Start"
Mar 14 01:22:20.859522 kubelet[2694]: I0314 01:22:20.858938 2694 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 14 01:22:20.859522 kubelet[2694]: I0314 01:22:20.858967 2694 state_mem.go:35] "Initializing new in-memory state store"
Mar 14 01:22:20.859522 kubelet[2694]: I0314 01:22:20.859139 2694 state_mem.go:75] "Updated machine memory state"
Mar 14 01:22:20.868442 kubelet[2694]: E0314 01:22:20.868250 2694 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 14 01:22:20.868539 kubelet[2694]: I0314 01:22:20.868478 2694 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 14 01:22:20.868539 kubelet[2694]: I0314 01:22:20.868497 2694 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 14 01:22:20.870418 kubelet[2694]: I0314 01:22:20.869667 2694 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 14 01:22:20.878158 kubelet[2694]: E0314 01:22:20.878131 2694 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 14 01:22:20.996711 kubelet[2694]: I0314 01:22:20.995720 2694 kubelet_node_status.go:75] "Attempting to register node" node="srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:21.014755 kubelet[2694]: I0314 01:22:21.014674 2694 kubelet_node_status.go:124] "Node was previously registered" node="srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:21.063848 kubelet[2694]: I0314 01:22:21.014799 2694 kubelet_node_status.go:78] "Successfully registered node" node="srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:21.063848 kubelet[2694]: I0314 01:22:21.057222 2694 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:21.063848 kubelet[2694]: I0314 01:22:21.058840 2694 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:21.063848 kubelet[2694]: I0314 01:22:21.059353 2694 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:21.071812 kubelet[2694]: I0314 01:22:21.071256 2694 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 14 01:22:21.074241 kubelet[2694]: I0314 01:22:21.073801 2694 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 14 01:22:21.080815 kubelet[2694]: I0314 01:22:21.080782 2694 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 14 01:22:21.080982 kubelet[2694]: E0314 01:22:21.080851 2694 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-ouubu.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:21.153402 kubelet[2694]: I0314 01:22:21.153157 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ce0c6d9d63ff4e59f5b7b9658d1e664-kubeconfig\") pod \"kube-scheduler-srv-ouubu.gb1.brightbox.com\" (UID: \"5ce0c6d9d63ff4e59f5b7b9658d1e664\") " pod="kube-system/kube-scheduler-srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:21.153742 kubelet[2694]: I0314 01:22:21.153553 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b0afe2e00100adc6981843f6b2e8edd-ca-certs\") pod \"kube-controller-manager-srv-ouubu.gb1.brightbox.com\" (UID: \"0b0afe2e00100adc6981843f6b2e8edd\") " pod="kube-system/kube-controller-manager-srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:21.154025 kubelet[2694]: I0314 01:22:21.153745 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0b0afe2e00100adc6981843f6b2e8edd-flexvolume-dir\") pod \"kube-controller-manager-srv-ouubu.gb1.brightbox.com\" (UID: \"0b0afe2e00100adc6981843f6b2e8edd\") " pod="kube-system/kube-controller-manager-srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:21.154392 kubelet[2694]: I0314 01:22:21.154046 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1774c7dc2fd210d1532fc1b910310a53-ca-certs\") pod \"kube-apiserver-srv-ouubu.gb1.brightbox.com\" (UID: \"1774c7dc2fd210d1532fc1b910310a53\") " pod="kube-system/kube-apiserver-srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:21.154392 kubelet[2694]: I0314 01:22:21.154083 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1774c7dc2fd210d1532fc1b910310a53-k8s-certs\") pod \"kube-apiserver-srv-ouubu.gb1.brightbox.com\" (UID: \"1774c7dc2fd210d1532fc1b910310a53\") " pod="kube-system/kube-apiserver-srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:21.155707 kubelet[2694]: I0314 01:22:21.154276 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1774c7dc2fd210d1532fc1b910310a53-usr-share-ca-certificates\") pod \"kube-apiserver-srv-ouubu.gb1.brightbox.com\" (UID: \"1774c7dc2fd210d1532fc1b910310a53\") " pod="kube-system/kube-apiserver-srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:21.155707 kubelet[2694]: I0314 01:22:21.154675 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b0afe2e00100adc6981843f6b2e8edd-k8s-certs\") pod \"kube-controller-manager-srv-ouubu.gb1.brightbox.com\" (UID: \"0b0afe2e00100adc6981843f6b2e8edd\") " pod="kube-system/kube-controller-manager-srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:21.155707 kubelet[2694]: I0314 01:22:21.154713 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b0afe2e00100adc6981843f6b2e8edd-kubeconfig\") pod \"kube-controller-manager-srv-ouubu.gb1.brightbox.com\" (UID: \"0b0afe2e00100adc6981843f6b2e8edd\") " pod="kube-system/kube-controller-manager-srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:21.155707 kubelet[2694]: I0314 01:22:21.154877 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b0afe2e00100adc6981843f6b2e8edd-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-ouubu.gb1.brightbox.com\" (UID: \"0b0afe2e00100adc6981843f6b2e8edd\") " pod="kube-system/kube-controller-manager-srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:21.536402 sudo[2709]: pam_unix(sudo:session): session closed for user root
Mar 14 01:22:21.674044 kubelet[2694]: I0314 01:22:21.673621 2694 apiserver.go:52] "Watching apiserver"
Mar 14 01:22:21.740114 kubelet[2694]: I0314 01:22:21.740065 2694 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 14 01:22:21.812007 kubelet[2694]: I0314 01:22:21.810373 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-ouubu.gb1.brightbox.com" podStartSLOduration=3.810345278 podStartE2EDuration="3.810345278s" podCreationTimestamp="2026-03-14 01:22:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 01:22:21.805335905 +0000 UTC m=+1.300104305" watchObservedRunningTime="2026-03-14 01:22:21.810345278 +0000 UTC m=+1.305113672"
Mar 14 01:22:21.816594 kubelet[2694]: I0314 01:22:21.814480 2694 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:21.823545 kubelet[2694]: I0314 01:22:21.823518 2694 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 14 01:22:21.823967 kubelet[2694]: E0314 01:22:21.823739 2694 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-ouubu.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-ouubu.gb1.brightbox.com"
Mar 14 01:22:21.841257 kubelet[2694]: I0314 01:22:21.840959 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-ouubu.gb1.brightbox.com" podStartSLOduration=0.840939678 podStartE2EDuration="840.939678ms" podCreationTimestamp="2026-03-14 01:22:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 01:22:21.839927854 +0000 UTC m=+1.334696252" watchObservedRunningTime="2026-03-14 01:22:21.840939678 +0000 UTC m=+1.335708066"
Mar 14 01:22:21.841257 kubelet[2694]: I0314 01:22:21.841088 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-ouubu.gb1.brightbox.com" podStartSLOduration=0.84107989 podStartE2EDuration="841.07989ms" podCreationTimestamp="2026-03-14 01:22:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 01:22:21.826169856 +0000 UTC m=+1.320938257" watchObservedRunningTime="2026-03-14 01:22:21.84107989 +0000 UTC m=+1.335848306"
Mar 14 01:22:23.551099 sudo[1758]: pam_unix(sudo:session): session closed for user root
Mar 14 01:22:23.642030 sshd[1755]: pam_unix(sshd:session): session closed for user core
Mar 14 01:22:23.648460 systemd-logind[1484]: Session 11 logged out. Waiting for processes to exit.
Mar 14 01:22:23.650514 systemd[1]: sshd@8-10.230.8.14:22-20.161.92.111:51520.service: Deactivated successfully.
Mar 14 01:22:23.654307 systemd[1]: session-11.scope: Deactivated successfully.
Mar 14 01:22:23.654954 systemd[1]: session-11.scope: Consumed 7.784s CPU time, 146.1M memory peak, 0B memory swap peak.
Mar 14 01:22:23.658807 systemd-logind[1484]: Removed session 11.
Mar 14 01:22:25.256754 kubelet[2694]: I0314 01:22:25.256698 2694 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 14 01:22:25.258597 kubelet[2694]: I0314 01:22:25.257635 2694 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 14 01:22:25.258697 containerd[1502]: time="2026-03-14T01:22:25.257315482Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 14 01:22:25.979618 systemd[1]: Created slice kubepods-besteffort-pod71258554_aee5_4fd1_a6e2_94d488edd18a.slice - libcontainer container kubepods-besteffort-pod71258554_aee5_4fd1_a6e2_94d488edd18a.slice.
Mar 14 01:22:25.990855 kubelet[2694]: I0314 01:22:25.990483 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wmn2\" (UniqueName: \"kubernetes.io/projected/71258554-aee5-4fd1-a6e2-94d488edd18a-kube-api-access-2wmn2\") pod \"kube-proxy-vzppn\" (UID: \"71258554-aee5-4fd1-a6e2-94d488edd18a\") " pod="kube-system/kube-proxy-vzppn"
Mar 14 01:22:25.990855 kubelet[2694]: I0314 01:22:25.990545 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/71258554-aee5-4fd1-a6e2-94d488edd18a-kube-proxy\") pod \"kube-proxy-vzppn\" (UID: \"71258554-aee5-4fd1-a6e2-94d488edd18a\") " pod="kube-system/kube-proxy-vzppn"
Mar 14 01:22:25.990855 kubelet[2694]: I0314 01:22:25.990609 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71258554-aee5-4fd1-a6e2-94d488edd18a-xtables-lock\") pod \"kube-proxy-vzppn\" (UID: \"71258554-aee5-4fd1-a6e2-94d488edd18a\") " pod="kube-system/kube-proxy-vzppn"
Mar 14 01:22:25.990855 kubelet[2694]: I0314 01:22:25.990642 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71258554-aee5-4fd1-a6e2-94d488edd18a-lib-modules\") pod \"kube-proxy-vzppn\" (UID: \"71258554-aee5-4fd1-a6e2-94d488edd18a\") " pod="kube-system/kube-proxy-vzppn"
Mar 14 01:22:26.000335 systemd[1]: Created slice kubepods-burstable-pod5fff5aa8_0279_47b8_ad25_38b29d746fa1.slice - libcontainer container kubepods-burstable-pod5fff5aa8_0279_47b8_ad25_38b29d746fa1.slice.
Mar 14 01:22:26.093065 kubelet[2694]: I0314 01:22:26.090924 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-bpf-maps\") pod \"cilium-fljtq\" (UID: \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\") " pod="kube-system/cilium-fljtq"
Mar 14 01:22:26.093065 kubelet[2694]: I0314 01:22:26.091632 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-xtables-lock\") pod \"cilium-fljtq\" (UID: \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\") " pod="kube-system/cilium-fljtq"
Mar 14 01:22:26.093065 kubelet[2694]: I0314 01:22:26.091715 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5fff5aa8-0279-47b8-ad25-38b29d746fa1-clustermesh-secrets\") pod \"cilium-fljtq\" (UID: \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\") " pod="kube-system/cilium-fljtq"
Mar 14 01:22:26.093065 kubelet[2694]: I0314 01:22:26.091747 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-host-proc-sys-kernel\") pod \"cilium-fljtq\" (UID: \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\") " pod="kube-system/cilium-fljtq"
Mar 14 01:22:26.093065 kubelet[2694]: I0314 01:22:26.091775 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-lib-modules\") pod \"cilium-fljtq\" (UID: \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\") " pod="kube-system/cilium-fljtq"
Mar 14 01:22:26.093065 kubelet[2694]: I0314 01:22:26.091799 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5fff5aa8-0279-47b8-ad25-38b29d746fa1-cilium-config-path\") pod \"cilium-fljtq\" (UID: \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\") " pod="kube-system/cilium-fljtq"
Mar 14 01:22:26.093475 kubelet[2694]: I0314 01:22:26.091824 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-host-proc-sys-net\") pod \"cilium-fljtq\" (UID: \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\") " pod="kube-system/cilium-fljtq"
Mar 14 01:22:26.093475 kubelet[2694]: I0314 01:22:26.091863 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5fff5aa8-0279-47b8-ad25-38b29d746fa1-hubble-tls\") pod \"cilium-fljtq\" (UID: \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\") " pod="kube-system/cilium-fljtq"
Mar 14 01:22:26.093475 kubelet[2694]: I0314 01:22:26.091894 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mccfx\" (UniqueName: \"kubernetes.io/projected/5fff5aa8-0279-47b8-ad25-38b29d746fa1-kube-api-access-mccfx\") pod \"cilium-fljtq\" (UID: \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\") " pod="kube-system/cilium-fljtq"
Mar 14 01:22:26.093475 kubelet[2694]: I0314 01:22:26.091925 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-cilium-run\") pod \"cilium-fljtq\" (UID: \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\") " pod="kube-system/cilium-fljtq"
Mar 14 01:22:26.093475 kubelet[2694]: I0314 01:22:26.091952 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-hostproc\") pod \"cilium-fljtq\" (UID: \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\") " pod="kube-system/cilium-fljtq"
Mar 14 01:22:26.093475 kubelet[2694]: I0314 01:22:26.091977 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-cilium-cgroup\") pod \"cilium-fljtq\" (UID: \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\") " pod="kube-system/cilium-fljtq"
Mar 14 01:22:26.093837 kubelet[2694]: I0314 01:22:26.092001 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-cni-path\") pod \"cilium-fljtq\" (UID: \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\") " pod="kube-system/cilium-fljtq"
Mar 14 01:22:26.093837 kubelet[2694]: I0314 01:22:26.092026 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-etc-cni-netd\") pod \"cilium-fljtq\" (UID: \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\") " pod="kube-system/cilium-fljtq"
Mar 14 01:22:26.291912 containerd[1502]: time="2026-03-14T01:22:26.291680992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vzppn,Uid:71258554-aee5-4fd1-a6e2-94d488edd18a,Namespace:kube-system,Attempt:0,}"
Mar 14 01:22:26.307040 containerd[1502]: time="2026-03-14T01:22:26.306986460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fljtq,Uid:5fff5aa8-0279-47b8-ad25-38b29d746fa1,Namespace:kube-system,Attempt:0,}"
Mar 14 01:22:26.354680 containerd[1502]: time="2026-03-14T01:22:26.354349785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 01:22:26.354680 containerd[1502]: time="2026-03-14T01:22:26.354464083Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 01:22:26.354680 containerd[1502]: time="2026-03-14T01:22:26.354489299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 01:22:26.355275 containerd[1502]: time="2026-03-14T01:22:26.354938009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 01:22:26.386773 systemd[1]: Started cri-containerd-b7b7756d68269d712452471f264b7f3166fcd8379a5c6592cd01be9fab1d992d.scope - libcontainer container b7b7756d68269d712452471f264b7f3166fcd8379a5c6592cd01be9fab1d992d.
Mar 14 01:22:26.428802 containerd[1502]: time="2026-03-14T01:22:26.427711220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vzppn,Uid:71258554-aee5-4fd1-a6e2-94d488edd18a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7b7756d68269d712452471f264b7f3166fcd8379a5c6592cd01be9fab1d992d\""
Mar 14 01:22:26.450169 containerd[1502]: time="2026-03-14T01:22:26.449962039Z" level=info msg="CreateContainer within sandbox \"b7b7756d68269d712452471f264b7f3166fcd8379a5c6592cd01be9fab1d992d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 14 01:22:26.474640 containerd[1502]: time="2026-03-14T01:22:26.474031474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 01:22:26.474640 containerd[1502]: time="2026-03-14T01:22:26.474194524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 01:22:26.474640 containerd[1502]: time="2026-03-14T01:22:26.474364621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 01:22:26.477847 containerd[1502]: time="2026-03-14T01:22:26.474790704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 01:22:26.504099 systemd[1]: Created slice kubepods-besteffort-pod9754c666_f628_44ae_a769_2a3bf6995f38.slice - libcontainer container kubepods-besteffort-pod9754c666_f628_44ae_a769_2a3bf6995f38.slice.
Mar 14 01:22:26.533162 systemd[1]: Started cri-containerd-042065d8636b30f8403b24329bd0bb529012b43da7b73f3cecc16885b361f114.scope - libcontainer container 042065d8636b30f8403b24329bd0bb529012b43da7b73f3cecc16885b361f114.
Mar 14 01:22:26.567351 containerd[1502]: time="2026-03-14T01:22:26.565541932Z" level=info msg="CreateContainer within sandbox \"b7b7756d68269d712452471f264b7f3166fcd8379a5c6592cd01be9fab1d992d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b2f256399d57174573039e55fc50700119f855884b81b55941e2d742aa1101ff\"" Mar 14 01:22:26.569358 containerd[1502]: time="2026-03-14T01:22:26.568498965Z" level=info msg="StartContainer for \"b2f256399d57174573039e55fc50700119f855884b81b55941e2d742aa1101ff\"" Mar 14 01:22:26.596910 kubelet[2694]: I0314 01:22:26.596744 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n68s\" (UniqueName: \"kubernetes.io/projected/9754c666-f628-44ae-a769-2a3bf6995f38-kube-api-access-2n68s\") pod \"cilium-operator-6c4d7847fc-fpf7k\" (UID: \"9754c666-f628-44ae-a769-2a3bf6995f38\") " pod="kube-system/cilium-operator-6c4d7847fc-fpf7k" Mar 14 01:22:26.596910 kubelet[2694]: I0314 01:22:26.596822 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9754c666-f628-44ae-a769-2a3bf6995f38-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-fpf7k\" (UID: \"9754c666-f628-44ae-a769-2a3bf6995f38\") " pod="kube-system/cilium-operator-6c4d7847fc-fpf7k" Mar 14 01:22:26.613976 containerd[1502]: time="2026-03-14T01:22:26.613496377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fljtq,Uid:5fff5aa8-0279-47b8-ad25-38b29d746fa1,Namespace:kube-system,Attempt:0,} returns sandbox id \"042065d8636b30f8403b24329bd0bb529012b43da7b73f3cecc16885b361f114\"" Mar 14 01:22:26.618623 containerd[1502]: time="2026-03-14T01:22:26.616370138Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 14 01:22:26.633828 systemd[1]: Started 
cri-containerd-b2f256399d57174573039e55fc50700119f855884b81b55941e2d742aa1101ff.scope - libcontainer container b2f256399d57174573039e55fc50700119f855884b81b55941e2d742aa1101ff. Mar 14 01:22:26.681927 containerd[1502]: time="2026-03-14T01:22:26.681873945Z" level=info msg="StartContainer for \"b2f256399d57174573039e55fc50700119f855884b81b55941e2d742aa1101ff\" returns successfully" Mar 14 01:22:26.815149 containerd[1502]: time="2026-03-14T01:22:26.814871053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fpf7k,Uid:9754c666-f628-44ae-a769-2a3bf6995f38,Namespace:kube-system,Attempt:0,}" Mar 14 01:22:26.854674 kubelet[2694]: I0314 01:22:26.854401 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vzppn" podStartSLOduration=1.854346496 podStartE2EDuration="1.854346496s" podCreationTimestamp="2026-03-14 01:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 01:22:26.854238988 +0000 UTC m=+6.349007389" watchObservedRunningTime="2026-03-14 01:22:26.854346496 +0000 UTC m=+6.349114890" Mar 14 01:22:26.876909 containerd[1502]: time="2026-03-14T01:22:26.875664678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 01:22:26.876909 containerd[1502]: time="2026-03-14T01:22:26.875756080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 01:22:26.876909 containerd[1502]: time="2026-03-14T01:22:26.875790251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 01:22:26.876909 containerd[1502]: time="2026-03-14T01:22:26.875938712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 01:22:26.907783 systemd[1]: Started cri-containerd-578d86e8caf77f6189a41278ed4e8cd9d7fd8a0d6fa7f57ebf5785ee87f3a38f.scope - libcontainer container 578d86e8caf77f6189a41278ed4e8cd9d7fd8a0d6fa7f57ebf5785ee87f3a38f. Mar 14 01:22:26.987725 containerd[1502]: time="2026-03-14T01:22:26.987674243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-fpf7k,Uid:9754c666-f628-44ae-a769-2a3bf6995f38,Namespace:kube-system,Attempt:0,} returns sandbox id \"578d86e8caf77f6189a41278ed4e8cd9d7fd8a0d6fa7f57ebf5785ee87f3a38f\"" Mar 14 01:22:27.357048 systemd[1]: Started sshd@9-10.230.8.14:22-36.79.129.86:33668.service - OpenSSH per-connection server daemon (36.79.129.86:33668). Mar 14 01:22:28.467216 sshd[3022]: Invalid user task from 36.79.129.86 port 33668 Mar 14 01:22:28.678822 sshd[3022]: Received disconnect from 36.79.129.86 port 33668:11: Bye Bye [preauth] Mar 14 01:22:28.678822 sshd[3022]: Disconnected from invalid user task 36.79.129.86 port 33668 [preauth] Mar 14 01:22:28.682800 systemd[1]: sshd@9-10.230.8.14:22-36.79.129.86:33668.service: Deactivated successfully. Mar 14 01:22:34.364910 systemd[1]: Started sshd@10-10.230.8.14:22-85.206.171.113:41466.service - OpenSSH per-connection server daemon (85.206.171.113:41466). Mar 14 01:22:34.795783 sshd[3079]: Invalid user test01 from 85.206.171.113 port 41466 Mar 14 01:22:34.854602 sshd[3079]: Received disconnect from 85.206.171.113 port 41466:11: Bye Bye [preauth] Mar 14 01:22:34.854602 sshd[3079]: Disconnected from invalid user test01 85.206.171.113 port 41466 [preauth] Mar 14 01:22:34.858706 systemd[1]: sshd@10-10.230.8.14:22-85.206.171.113:41466.service: Deactivated successfully. Mar 14 01:22:35.477767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3594345871.mount: Deactivated successfully. 
Mar 14 01:22:38.875634 containerd[1502]: time="2026-03-14T01:22:38.875423964Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 01:22:38.876990 containerd[1502]: time="2026-03-14T01:22:38.876764531Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 14 01:22:38.878603 containerd[1502]: time="2026-03-14T01:22:38.877935672Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 01:22:38.880913 containerd[1502]: time="2026-03-14T01:22:38.880730992Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.264296058s" Mar 14 01:22:38.880913 containerd[1502]: time="2026-03-14T01:22:38.880779308Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 14 01:22:38.883291 containerd[1502]: time="2026-03-14T01:22:38.882802515Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 14 01:22:38.892471 containerd[1502]: time="2026-03-14T01:22:38.892418512Z" level=info msg="CreateContainer within sandbox \"042065d8636b30f8403b24329bd0bb529012b43da7b73f3cecc16885b361f114\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 14 01:22:38.987739 containerd[1502]: time="2026-03-14T01:22:38.987304683Z" level=info msg="CreateContainer within sandbox \"042065d8636b30f8403b24329bd0bb529012b43da7b73f3cecc16885b361f114\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"00a9eef4c31c0e17063007f8aeb8382654a139e8554c9c38f987109d0d460c24\"" Mar 14 01:22:38.988644 containerd[1502]: time="2026-03-14T01:22:38.988239647Z" level=info msg="StartContainer for \"00a9eef4c31c0e17063007f8aeb8382654a139e8554c9c38f987109d0d460c24\"" Mar 14 01:22:39.226891 systemd[1]: Started cri-containerd-00a9eef4c31c0e17063007f8aeb8382654a139e8554c9c38f987109d0d460c24.scope - libcontainer container 00a9eef4c31c0e17063007f8aeb8382654a139e8554c9c38f987109d0d460c24. Mar 14 01:22:39.280207 containerd[1502]: time="2026-03-14T01:22:39.279914160Z" level=info msg="StartContainer for \"00a9eef4c31c0e17063007f8aeb8382654a139e8554c9c38f987109d0d460c24\" returns successfully" Mar 14 01:22:39.297872 systemd[1]: cri-containerd-00a9eef4c31c0e17063007f8aeb8382654a139e8554c9c38f987109d0d460c24.scope: Deactivated successfully. 
Mar 14 01:22:39.531314 containerd[1502]: time="2026-03-14T01:22:39.521356512Z" level=info msg="shim disconnected" id=00a9eef4c31c0e17063007f8aeb8382654a139e8554c9c38f987109d0d460c24 namespace=k8s.io Mar 14 01:22:39.531314 containerd[1502]: time="2026-03-14T01:22:39.530830556Z" level=warning msg="cleaning up after shim disconnected" id=00a9eef4c31c0e17063007f8aeb8382654a139e8554c9c38f987109d0d460c24 namespace=k8s.io Mar 14 01:22:39.531314 containerd[1502]: time="2026-03-14T01:22:39.530851805Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 01:22:39.891042 containerd[1502]: time="2026-03-14T01:22:39.890683584Z" level=info msg="CreateContainer within sandbox \"042065d8636b30f8403b24329bd0bb529012b43da7b73f3cecc16885b361f114\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 14 01:22:39.921923 containerd[1502]: time="2026-03-14T01:22:39.921496198Z" level=info msg="CreateContainer within sandbox \"042065d8636b30f8403b24329bd0bb529012b43da7b73f3cecc16885b361f114\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f6eef982edb89dc6805e713d7a0f7fcd5fbbeda0f2a01497cfa1e083a95a00b2\"" Mar 14 01:22:39.924111 containerd[1502]: time="2026-03-14T01:22:39.924038024Z" level=info msg="StartContainer for \"f6eef982edb89dc6805e713d7a0f7fcd5fbbeda0f2a01497cfa1e083a95a00b2\"" Mar 14 01:22:39.965735 systemd[1]: Started cri-containerd-f6eef982edb89dc6805e713d7a0f7fcd5fbbeda0f2a01497cfa1e083a95a00b2.scope - libcontainer container f6eef982edb89dc6805e713d7a0f7fcd5fbbeda0f2a01497cfa1e083a95a00b2. Mar 14 01:22:39.973113 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00a9eef4c31c0e17063007f8aeb8382654a139e8554c9c38f987109d0d460c24-rootfs.mount: Deactivated successfully. 
Mar 14 01:22:40.013136 containerd[1502]: time="2026-03-14T01:22:40.013090011Z" level=info msg="StartContainer for \"f6eef982edb89dc6805e713d7a0f7fcd5fbbeda0f2a01497cfa1e083a95a00b2\" returns successfully" Mar 14 01:22:40.031836 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 14 01:22:40.032197 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 14 01:22:40.032318 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 14 01:22:40.041728 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 14 01:22:40.042633 systemd[1]: cri-containerd-f6eef982edb89dc6805e713d7a0f7fcd5fbbeda0f2a01497cfa1e083a95a00b2.scope: Deactivated successfully. Mar 14 01:22:40.075137 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6eef982edb89dc6805e713d7a0f7fcd5fbbeda0f2a01497cfa1e083a95a00b2-rootfs.mount: Deactivated successfully. Mar 14 01:22:40.098131 containerd[1502]: time="2026-03-14T01:22:40.098011383Z" level=info msg="shim disconnected" id=f6eef982edb89dc6805e713d7a0f7fcd5fbbeda0f2a01497cfa1e083a95a00b2 namespace=k8s.io Mar 14 01:22:40.098639 containerd[1502]: time="2026-03-14T01:22:40.098106499Z" level=warning msg="cleaning up after shim disconnected" id=f6eef982edb89dc6805e713d7a0f7fcd5fbbeda0f2a01497cfa1e083a95a00b2 namespace=k8s.io Mar 14 01:22:40.098639 containerd[1502]: time="2026-03-14T01:22:40.098355625Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 01:22:40.131864 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 14 01:22:40.575806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2534472491.mount: Deactivated successfully. 
Mar 14 01:22:40.898607 containerd[1502]: time="2026-03-14T01:22:40.898371036Z" level=info msg="CreateContainer within sandbox \"042065d8636b30f8403b24329bd0bb529012b43da7b73f3cecc16885b361f114\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 14 01:22:40.938399 containerd[1502]: time="2026-03-14T01:22:40.938323948Z" level=info msg="CreateContainer within sandbox \"042065d8636b30f8403b24329bd0bb529012b43da7b73f3cecc16885b361f114\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a77ca2a7133360fda9173bb36b2ff743de38a7ee1662f2a6eba781f42f431d7f\"" Mar 14 01:22:40.940484 containerd[1502]: time="2026-03-14T01:22:40.940018593Z" level=info msg="StartContainer for \"a77ca2a7133360fda9173bb36b2ff743de38a7ee1662f2a6eba781f42f431d7f\"" Mar 14 01:22:41.024775 systemd[1]: Started cri-containerd-a77ca2a7133360fda9173bb36b2ff743de38a7ee1662f2a6eba781f42f431d7f.scope - libcontainer container a77ca2a7133360fda9173bb36b2ff743de38a7ee1662f2a6eba781f42f431d7f. Mar 14 01:22:41.124028 containerd[1502]: time="2026-03-14T01:22:41.123964153Z" level=info msg="StartContainer for \"a77ca2a7133360fda9173bb36b2ff743de38a7ee1662f2a6eba781f42f431d7f\" returns successfully" Mar 14 01:22:41.139204 systemd[1]: cri-containerd-a77ca2a7133360fda9173bb36b2ff743de38a7ee1662f2a6eba781f42f431d7f.scope: Deactivated successfully. Mar 14 01:22:41.195544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a77ca2a7133360fda9173bb36b2ff743de38a7ee1662f2a6eba781f42f431d7f-rootfs.mount: Deactivated successfully. 
Mar 14 01:22:41.200626 containerd[1502]: time="2026-03-14T01:22:41.200313568Z" level=info msg="shim disconnected" id=a77ca2a7133360fda9173bb36b2ff743de38a7ee1662f2a6eba781f42f431d7f namespace=k8s.io Mar 14 01:22:41.200626 containerd[1502]: time="2026-03-14T01:22:41.200396217Z" level=warning msg="cleaning up after shim disconnected" id=a77ca2a7133360fda9173bb36b2ff743de38a7ee1662f2a6eba781f42f431d7f namespace=k8s.io Mar 14 01:22:41.200626 containerd[1502]: time="2026-03-14T01:22:41.200411130Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 01:22:41.609274 containerd[1502]: time="2026-03-14T01:22:41.607953740Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 01:22:41.609274 containerd[1502]: time="2026-03-14T01:22:41.609118238Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 14 01:22:41.610261 containerd[1502]: time="2026-03-14T01:22:41.610184865Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 01:22:41.612662 containerd[1502]: time="2026-03-14T01:22:41.612619674Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.72977163s" Mar 14 01:22:41.612906 containerd[1502]: time="2026-03-14T01:22:41.612777015Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 14 01:22:41.618322 containerd[1502]: time="2026-03-14T01:22:41.618174714Z" level=info msg="CreateContainer within sandbox \"578d86e8caf77f6189a41278ed4e8cd9d7fd8a0d6fa7f57ebf5785ee87f3a38f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 14 01:22:41.637030 containerd[1502]: time="2026-03-14T01:22:41.636974613Z" level=info msg="CreateContainer within sandbox \"578d86e8caf77f6189a41278ed4e8cd9d7fd8a0d6fa7f57ebf5785ee87f3a38f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"15bf3f4bddec4fd240a201e81743028345301ac2ac9c7b6c62b44b01a07939ae\"" Mar 14 01:22:41.638063 containerd[1502]: time="2026-03-14T01:22:41.638033058Z" level=info msg="StartContainer for \"15bf3f4bddec4fd240a201e81743028345301ac2ac9c7b6c62b44b01a07939ae\"" Mar 14 01:22:41.677909 systemd[1]: Started cri-containerd-15bf3f4bddec4fd240a201e81743028345301ac2ac9c7b6c62b44b01a07939ae.scope - libcontainer container 15bf3f4bddec4fd240a201e81743028345301ac2ac9c7b6c62b44b01a07939ae. 
Mar 14 01:22:41.724927 containerd[1502]: time="2026-03-14T01:22:41.724828658Z" level=info msg="StartContainer for \"15bf3f4bddec4fd240a201e81743028345301ac2ac9c7b6c62b44b01a07939ae\" returns successfully" Mar 14 01:22:41.899228 containerd[1502]: time="2026-03-14T01:22:41.899070883Z" level=info msg="CreateContainer within sandbox \"042065d8636b30f8403b24329bd0bb529012b43da7b73f3cecc16885b361f114\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 14 01:22:41.925683 containerd[1502]: time="2026-03-14T01:22:41.925610363Z" level=info msg="CreateContainer within sandbox \"042065d8636b30f8403b24329bd0bb529012b43da7b73f3cecc16885b361f114\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"45f399701a189431c2cfdbe9aff64454bd5ba1e870a7e2cc208bccf41cacb6d5\"" Mar 14 01:22:41.927442 containerd[1502]: time="2026-03-14T01:22:41.927407191Z" level=info msg="StartContainer for \"45f399701a189431c2cfdbe9aff64454bd5ba1e870a7e2cc208bccf41cacb6d5\"" Mar 14 01:22:42.021797 systemd[1]: Started cri-containerd-45f399701a189431c2cfdbe9aff64454bd5ba1e870a7e2cc208bccf41cacb6d5.scope - libcontainer container 45f399701a189431c2cfdbe9aff64454bd5ba1e870a7e2cc208bccf41cacb6d5. Mar 14 01:22:42.156831 systemd[1]: cri-containerd-45f399701a189431c2cfdbe9aff64454bd5ba1e870a7e2cc208bccf41cacb6d5.scope: Deactivated successfully. Mar 14 01:22:42.193625 containerd[1502]: time="2026-03-14T01:22:42.193109929Z" level=info msg="StartContainer for \"45f399701a189431c2cfdbe9aff64454bd5ba1e870a7e2cc208bccf41cacb6d5\" returns successfully" Mar 14 01:22:42.232313 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45f399701a189431c2cfdbe9aff64454bd5ba1e870a7e2cc208bccf41cacb6d5-rootfs.mount: Deactivated successfully. 
Mar 14 01:22:42.312005 containerd[1502]: time="2026-03-14T01:22:42.311874948Z" level=info msg="shim disconnected" id=45f399701a189431c2cfdbe9aff64454bd5ba1e870a7e2cc208bccf41cacb6d5 namespace=k8s.io Mar 14 01:22:42.312005 containerd[1502]: time="2026-03-14T01:22:42.311986864Z" level=warning msg="cleaning up after shim disconnected" id=45f399701a189431c2cfdbe9aff64454bd5ba1e870a7e2cc208bccf41cacb6d5 namespace=k8s.io Mar 14 01:22:42.312005 containerd[1502]: time="2026-03-14T01:22:42.312009936Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 01:22:42.900836 containerd[1502]: time="2026-03-14T01:22:42.900763661Z" level=info msg="CreateContainer within sandbox \"042065d8636b30f8403b24329bd0bb529012b43da7b73f3cecc16885b361f114\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 14 01:22:42.935073 containerd[1502]: time="2026-03-14T01:22:42.934982370Z" level=info msg="CreateContainer within sandbox \"042065d8636b30f8403b24329bd0bb529012b43da7b73f3cecc16885b361f114\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f57f558166af7cd57b0f3608554dc88e0e781ffadec939872d3dd54eedf76084\"" Mar 14 01:22:42.937158 containerd[1502]: time="2026-03-14T01:22:42.935861257Z" level=info msg="StartContainer for \"f57f558166af7cd57b0f3608554dc88e0e781ffadec939872d3dd54eedf76084\"" Mar 14 01:22:43.006744 systemd[1]: run-containerd-runc-k8s.io-f57f558166af7cd57b0f3608554dc88e0e781ffadec939872d3dd54eedf76084-runc.JpiIDf.mount: Deactivated successfully. Mar 14 01:22:43.028774 systemd[1]: Started cri-containerd-f57f558166af7cd57b0f3608554dc88e0e781ffadec939872d3dd54eedf76084.scope - libcontainer container f57f558166af7cd57b0f3608554dc88e0e781ffadec939872d3dd54eedf76084. 
Mar 14 01:22:43.100684 kubelet[2694]: I0314 01:22:43.099033 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-fpf7k" podStartSLOduration=2.476605794 podStartE2EDuration="17.098374557s" podCreationTimestamp="2026-03-14 01:22:26 +0000 UTC" firstStartedPulling="2026-03-14 01:22:26.991969022 +0000 UTC m=+6.486737406" lastFinishedPulling="2026-03-14 01:22:41.613737782 +0000 UTC m=+21.108506169" observedRunningTime="2026-03-14 01:22:42.174818302 +0000 UTC m=+21.669586701" watchObservedRunningTime="2026-03-14 01:22:43.098374557 +0000 UTC m=+22.593142958" Mar 14 01:22:43.131197 containerd[1502]: time="2026-03-14T01:22:43.131132534Z" level=info msg="StartContainer for \"f57f558166af7cd57b0f3608554dc88e0e781ffadec939872d3dd54eedf76084\" returns successfully" Mar 14 01:22:43.417354 kubelet[2694]: I0314 01:22:43.417298 2694 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 14 01:22:43.505100 systemd[1]: Created slice kubepods-burstable-pod087c41ea_1363_4214_84e9_ce2581b4c3a0.slice - libcontainer container kubepods-burstable-pod087c41ea_1363_4214_84e9_ce2581b4c3a0.slice. Mar 14 01:22:43.522147 systemd[1]: Created slice kubepods-burstable-pod476fb6b2_7d04_4c8a_9a5c_466bd99f6416.slice - libcontainer container kubepods-burstable-pod476fb6b2_7d04_4c8a_9a5c_466bd99f6416.slice. 
Mar 14 01:22:43.589408 kubelet[2694]: I0314 01:22:43.589350 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/087c41ea-1363-4214-84e9-ce2581b4c3a0-config-volume\") pod \"coredns-674b8bbfcf-cjkpn\" (UID: \"087c41ea-1363-4214-84e9-ce2581b4c3a0\") " pod="kube-system/coredns-674b8bbfcf-cjkpn" Mar 14 01:22:43.589680 kubelet[2694]: I0314 01:22:43.589419 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdb5v\" (UniqueName: \"kubernetes.io/projected/476fb6b2-7d04-4c8a-9a5c-466bd99f6416-kube-api-access-pdb5v\") pod \"coredns-674b8bbfcf-gn9qp\" (UID: \"476fb6b2-7d04-4c8a-9a5c-466bd99f6416\") " pod="kube-system/coredns-674b8bbfcf-gn9qp" Mar 14 01:22:43.589680 kubelet[2694]: I0314 01:22:43.589478 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/476fb6b2-7d04-4c8a-9a5c-466bd99f6416-config-volume\") pod \"coredns-674b8bbfcf-gn9qp\" (UID: \"476fb6b2-7d04-4c8a-9a5c-466bd99f6416\") " pod="kube-system/coredns-674b8bbfcf-gn9qp" Mar 14 01:22:43.589680 kubelet[2694]: I0314 01:22:43.589511 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b68v\" (UniqueName: \"kubernetes.io/projected/087c41ea-1363-4214-84e9-ce2581b4c3a0-kube-api-access-6b68v\") pod \"coredns-674b8bbfcf-cjkpn\" (UID: \"087c41ea-1363-4214-84e9-ce2581b4c3a0\") " pod="kube-system/coredns-674b8bbfcf-cjkpn" Mar 14 01:22:43.815635 containerd[1502]: time="2026-03-14T01:22:43.815569280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cjkpn,Uid:087c41ea-1363-4214-84e9-ce2581b4c3a0,Namespace:kube-system,Attempt:0,}" Mar 14 01:22:43.827968 containerd[1502]: time="2026-03-14T01:22:43.827826650Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-gn9qp,Uid:476fb6b2-7d04-4c8a-9a5c-466bd99f6416,Namespace:kube-system,Attempt:0,}" Mar 14 01:22:44.033348 kubelet[2694]: I0314 01:22:44.033073 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fljtq" podStartSLOduration=6.7660935779999996 podStartE2EDuration="19.033050517s" podCreationTimestamp="2026-03-14 01:22:25 +0000 UTC" firstStartedPulling="2026-03-14 01:22:26.615650139 +0000 UTC m=+6.110418527" lastFinishedPulling="2026-03-14 01:22:38.882607077 +0000 UTC m=+18.377375466" observedRunningTime="2026-03-14 01:22:44.032317293 +0000 UTC m=+23.527085693" watchObservedRunningTime="2026-03-14 01:22:44.033050517 +0000 UTC m=+23.527818927" Mar 14 01:22:44.833539 systemd[1]: Started sshd@11-10.230.8.14:22-77.87.40.114:51518.service - OpenSSH per-connection server daemon (77.87.40.114:51518). Mar 14 01:22:45.276734 sshd[3537]: Received disconnect from 77.87.40.114 port 51518:11: Bye Bye [preauth] Mar 14 01:22:45.276734 sshd[3537]: Disconnected from authenticating user root 77.87.40.114 port 51518 [preauth] Mar 14 01:22:45.279613 systemd[1]: sshd@11-10.230.8.14:22-77.87.40.114:51518.service: Deactivated successfully. 
Mar 14 01:22:46.197776 systemd-networkd[1419]: cilium_host: Link UP Mar 14 01:22:46.199284 systemd-networkd[1419]: cilium_net: Link UP Mar 14 01:22:46.200998 systemd-networkd[1419]: cilium_net: Gained carrier Mar 14 01:22:46.202226 systemd-networkd[1419]: cilium_host: Gained carrier Mar 14 01:22:46.327710 systemd-networkd[1419]: cilium_host: Gained IPv6LL Mar 14 01:22:46.384042 systemd-networkd[1419]: cilium_vxlan: Link UP Mar 14 01:22:46.384056 systemd-networkd[1419]: cilium_vxlan: Gained carrier Mar 14 01:22:47.009603 kernel: NET: Registered PF_ALG protocol family Mar 14 01:22:47.057152 systemd-networkd[1419]: cilium_net: Gained IPv6LL Mar 14 01:22:47.503929 systemd-networkd[1419]: cilium_vxlan: Gained IPv6LL Mar 14 01:22:48.128521 systemd-networkd[1419]: lxc_health: Link UP Mar 14 01:22:48.134042 systemd-networkd[1419]: lxc_health: Gained carrier Mar 14 01:22:48.502730 systemd-networkd[1419]: lxc8144129bec97: Link UP Mar 14 01:22:48.519697 kernel: eth0: renamed from tmp45854 Mar 14 01:22:48.538015 systemd-networkd[1419]: lxc8144129bec97: Gained carrier Mar 14 01:22:48.634934 kernel: eth0: renamed from tmp9db04 Mar 14 01:22:48.640298 systemd-networkd[1419]: lxc8d67196906b1: Link UP Mar 14 01:22:48.641521 systemd-networkd[1419]: lxc8d67196906b1: Gained carrier Mar 14 01:22:49.679951 systemd-networkd[1419]: lxc8144129bec97: Gained IPv6LL Mar 14 01:22:49.807887 systemd-networkd[1419]: lxc8d67196906b1: Gained IPv6LL Mar 14 01:22:49.999947 systemd-networkd[1419]: lxc_health: Gained IPv6LL Mar 14 01:22:54.407962 containerd[1502]: time="2026-03-14T01:22:54.407199844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 01:22:54.407962 containerd[1502]: time="2026-03-14T01:22:54.407280429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 01:22:54.407962 containerd[1502]: time="2026-03-14T01:22:54.407326947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 01:22:54.407962 containerd[1502]: time="2026-03-14T01:22:54.407450719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 01:22:54.487396 containerd[1502]: time="2026-03-14T01:22:54.486076158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 01:22:54.487396 containerd[1502]: time="2026-03-14T01:22:54.486608988Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 01:22:54.487396 containerd[1502]: time="2026-03-14T01:22:54.486632373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 01:22:54.487396 containerd[1502]: time="2026-03-14T01:22:54.486799509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 01:22:54.493809 systemd[1]: Started cri-containerd-9db044fbee436787534b3d67e6a0804472c7a394185824bae07db93c9f3fdf9c.scope - libcontainer container 9db044fbee436787534b3d67e6a0804472c7a394185824bae07db93c9f3fdf9c. Mar 14 01:22:54.563776 systemd[1]: Started cri-containerd-45854a0249cd1c8b5644c0fe77fb2b0474e81885ca62983b5a25ca05d178fa42.scope - libcontainer container 45854a0249cd1c8b5644c0fe77fb2b0474e81885ca62983b5a25ca05d178fa42. 
Mar 14 01:22:54.680054 containerd[1502]: time="2026-03-14T01:22:54.679998291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cjkpn,Uid:087c41ea-1363-4214-84e9-ce2581b4c3a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"9db044fbee436787534b3d67e6a0804472c7a394185824bae07db93c9f3fdf9c\"" Mar 14 01:22:54.692770 containerd[1502]: time="2026-03-14T01:22:54.692714859Z" level=info msg="CreateContainer within sandbox \"9db044fbee436787534b3d67e6a0804472c7a394185824bae07db93c9f3fdf9c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 01:22:54.723535 containerd[1502]: time="2026-03-14T01:22:54.723413871Z" level=info msg="CreateContainer within sandbox \"9db044fbee436787534b3d67e6a0804472c7a394185824bae07db93c9f3fdf9c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4fed86de65e17a692354eb841c3f20f09a4f4823013b2202e1daf8e9873a08be\"" Mar 14 01:22:54.724934 containerd[1502]: time="2026-03-14T01:22:54.724568294Z" level=info msg="StartContainer for \"4fed86de65e17a692354eb841c3f20f09a4f4823013b2202e1daf8e9873a08be\"" Mar 14 01:22:54.772010 containerd[1502]: time="2026-03-14T01:22:54.770252434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gn9qp,Uid:476fb6b2-7d04-4c8a-9a5c-466bd99f6416,Namespace:kube-system,Attempt:0,} returns sandbox id \"45854a0249cd1c8b5644c0fe77fb2b0474e81885ca62983b5a25ca05d178fa42\"" Mar 14 01:22:54.788756 systemd[1]: Started cri-containerd-4fed86de65e17a692354eb841c3f20f09a4f4823013b2202e1daf8e9873a08be.scope - libcontainer container 4fed86de65e17a692354eb841c3f20f09a4f4823013b2202e1daf8e9873a08be. 
Mar 14 01:22:54.798128 containerd[1502]: time="2026-03-14T01:22:54.798059184Z" level=info msg="CreateContainer within sandbox \"45854a0249cd1c8b5644c0fe77fb2b0474e81885ca62983b5a25ca05d178fa42\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 14 01:22:54.817853 containerd[1502]: time="2026-03-14T01:22:54.817752328Z" level=info msg="CreateContainer within sandbox \"45854a0249cd1c8b5644c0fe77fb2b0474e81885ca62983b5a25ca05d178fa42\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5ddca06d821cdbb5d0bad9314ea4ddb7dbb54a41009af8e362b09c26c123e946\""
Mar 14 01:22:54.819292 containerd[1502]: time="2026-03-14T01:22:54.819250711Z" level=info msg="StartContainer for \"5ddca06d821cdbb5d0bad9314ea4ddb7dbb54a41009af8e362b09c26c123e946\""
Mar 14 01:22:54.846317 containerd[1502]: time="2026-03-14T01:22:54.846242610Z" level=info msg="StartContainer for \"4fed86de65e17a692354eb841c3f20f09a4f4823013b2202e1daf8e9873a08be\" returns successfully"
Mar 14 01:22:54.879809 systemd[1]: Started cri-containerd-5ddca06d821cdbb5d0bad9314ea4ddb7dbb54a41009af8e362b09c26c123e946.scope - libcontainer container 5ddca06d821cdbb5d0bad9314ea4ddb7dbb54a41009af8e362b09c26c123e946.
Mar 14 01:22:54.931574 containerd[1502]: time="2026-03-14T01:22:54.930275063Z" level=info msg="StartContainer for \"5ddca06d821cdbb5d0bad9314ea4ddb7dbb54a41009af8e362b09c26c123e946\" returns successfully"
Mar 14 01:22:54.969443 kubelet[2694]: I0314 01:22:54.968824 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gn9qp" podStartSLOduration=28.968804182 podStartE2EDuration="28.968804182s" podCreationTimestamp="2026-03-14 01:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 01:22:54.966804207 +0000 UTC m=+34.461572603" watchObservedRunningTime="2026-03-14 01:22:54.968804182 +0000 UTC m=+34.463572591"
Mar 14 01:22:55.418770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3788123795.mount: Deactivated successfully.
Mar 14 01:22:55.974923 kubelet[2694]: I0314 01:22:55.974735 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cjkpn" podStartSLOduration=29.974598742 podStartE2EDuration="29.974598742s" podCreationTimestamp="2026-03-14 01:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 01:22:54.988690413 +0000 UTC m=+34.483458829" watchObservedRunningTime="2026-03-14 01:22:55.974598742 +0000 UTC m=+35.469367152"
Mar 14 01:23:27.325920 systemd[1]: Started sshd@12-10.230.8.14:22-20.161.92.111:60242.service - OpenSSH per-connection server daemon (20.161.92.111:60242).
Mar 14 01:23:27.919449 sshd[4097]: Accepted publickey for core from 20.161.92.111 port 60242 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 01:23:27.922714 sshd[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 01:23:27.932710 systemd-logind[1484]: New session 12 of user core.
Mar 14 01:23:27.940784 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 14 01:23:28.907853 sshd[4097]: pam_unix(sshd:session): session closed for user core
Mar 14 01:23:28.913608 systemd-logind[1484]: Session 12 logged out. Waiting for processes to exit.
Mar 14 01:23:28.914474 systemd[1]: sshd@12-10.230.8.14:22-20.161.92.111:60242.service: Deactivated successfully.
Mar 14 01:23:28.919144 systemd[1]: session-12.scope: Deactivated successfully.
Mar 14 01:23:28.921867 systemd-logind[1484]: Removed session 12.
Mar 14 01:23:34.011996 systemd[1]: Started sshd@13-10.230.8.14:22-20.161.92.111:50252.service - OpenSSH per-connection server daemon (20.161.92.111:50252).
Mar 14 01:23:34.574586 sshd[4112]: Accepted publickey for core from 20.161.92.111 port 50252 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 01:23:34.576539 sshd[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 01:23:34.584372 systemd-logind[1484]: New session 13 of user core.
Mar 14 01:23:34.591748 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 14 01:23:35.073998 sshd[4112]: pam_unix(sshd:session): session closed for user core
Mar 14 01:23:35.078972 systemd-logind[1484]: Session 13 logged out. Waiting for processes to exit.
Mar 14 01:23:35.080390 systemd[1]: sshd@13-10.230.8.14:22-20.161.92.111:50252.service: Deactivated successfully.
Mar 14 01:23:35.083896 systemd[1]: session-13.scope: Deactivated successfully.
Mar 14 01:23:35.086417 systemd-logind[1484]: Removed session 13.
Mar 14 01:23:40.188061 systemd[1]: Started sshd@14-10.230.8.14:22-20.161.92.111:56854.service - OpenSSH per-connection server daemon (20.161.92.111:56854).
Mar 14 01:23:40.769314 sshd[4126]: Accepted publickey for core from 20.161.92.111 port 56854 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 01:23:40.773126 sshd[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 01:23:40.786794 systemd-logind[1484]: New session 14 of user core.
Mar 14 01:23:40.788818 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 14 01:23:41.267987 sshd[4126]: pam_unix(sshd:session): session closed for user core
Mar 14 01:23:41.275177 systemd[1]: sshd@14-10.230.8.14:22-20.161.92.111:56854.service: Deactivated successfully.
Mar 14 01:23:41.279347 systemd[1]: session-14.scope: Deactivated successfully.
Mar 14 01:23:41.280963 systemd-logind[1484]: Session 14 logged out. Waiting for processes to exit.
Mar 14 01:23:41.282767 systemd-logind[1484]: Removed session 14.
Mar 14 01:23:46.372892 systemd[1]: Started sshd@15-10.230.8.14:22-20.161.92.111:56860.service - OpenSSH per-connection server daemon (20.161.92.111:56860).
Mar 14 01:23:46.927128 sshd[4140]: Accepted publickey for core from 20.161.92.111 port 56860 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 01:23:46.929445 sshd[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 01:23:46.937304 systemd-logind[1484]: New session 15 of user core.
Mar 14 01:23:46.944898 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 14 01:23:47.428008 sshd[4140]: pam_unix(sshd:session): session closed for user core
Mar 14 01:23:47.433636 systemd-logind[1484]: Session 15 logged out. Waiting for processes to exit.
Mar 14 01:23:47.435053 systemd[1]: sshd@15-10.230.8.14:22-20.161.92.111:56860.service: Deactivated successfully.
Mar 14 01:23:47.437852 systemd[1]: session-15.scope: Deactivated successfully.
Mar 14 01:23:47.440138 systemd-logind[1484]: Removed session 15.
Mar 14 01:23:47.537950 systemd[1]: Started sshd@16-10.230.8.14:22-20.161.92.111:56868.service - OpenSSH per-connection server daemon (20.161.92.111:56868).
Mar 14 01:23:48.095610 sshd[4154]: Accepted publickey for core from 20.161.92.111 port 56868 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 01:23:48.097417 sshd[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 01:23:48.105672 systemd-logind[1484]: New session 16 of user core.
Mar 14 01:23:48.109760 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 14 01:23:48.653942 sshd[4154]: pam_unix(sshd:session): session closed for user core
Mar 14 01:23:48.660023 systemd[1]: sshd@16-10.230.8.14:22-20.161.92.111:56868.service: Deactivated successfully.
Mar 14 01:23:48.664107 systemd[1]: session-16.scope: Deactivated successfully.
Mar 14 01:23:48.665886 systemd-logind[1484]: Session 16 logged out. Waiting for processes to exit.
Mar 14 01:23:48.667544 systemd-logind[1484]: Removed session 16.
Mar 14 01:23:48.760943 systemd[1]: Started sshd@17-10.230.8.14:22-20.161.92.111:56876.service - OpenSSH per-connection server daemon (20.161.92.111:56876).
Mar 14 01:23:49.326063 sshd[4164]: Accepted publickey for core from 20.161.92.111 port 56876 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 01:23:49.328385 sshd[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 01:23:49.336071 systemd-logind[1484]: New session 17 of user core.
Mar 14 01:23:49.341826 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 14 01:23:49.804070 sshd[4164]: pam_unix(sshd:session): session closed for user core
Mar 14 01:23:49.809744 systemd[1]: sshd@17-10.230.8.14:22-20.161.92.111:56876.service: Deactivated successfully.
Mar 14 01:23:49.812890 systemd[1]: session-17.scope: Deactivated successfully.
Mar 14 01:23:49.814285 systemd-logind[1484]: Session 17 logged out. Waiting for processes to exit.
Mar 14 01:23:49.816887 systemd-logind[1484]: Removed session 17.
Mar 14 01:23:54.905009 systemd[1]: Started sshd@18-10.230.8.14:22-20.161.92.111:40194.service - OpenSSH per-connection server daemon (20.161.92.111:40194).
Mar 14 01:23:55.477946 sshd[4178]: Accepted publickey for core from 20.161.92.111 port 40194 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 01:23:55.480878 sshd[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 01:23:55.488564 systemd-logind[1484]: New session 18 of user core.
Mar 14 01:23:55.495783 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 14 01:23:55.958761 sshd[4178]: pam_unix(sshd:session): session closed for user core
Mar 14 01:23:55.964980 systemd[1]: sshd@18-10.230.8.14:22-20.161.92.111:40194.service: Deactivated successfully.
Mar 14 01:23:55.968008 systemd[1]: session-18.scope: Deactivated successfully.
Mar 14 01:23:55.969526 systemd-logind[1484]: Session 18 logged out. Waiting for processes to exit.
Mar 14 01:23:55.971444 systemd-logind[1484]: Removed session 18.
Mar 14 01:24:01.063916 systemd[1]: Started sshd@19-10.230.8.14:22-20.161.92.111:44060.service - OpenSSH per-connection server daemon (20.161.92.111:44060).
Mar 14 01:24:01.613484 sshd[4193]: Accepted publickey for core from 20.161.92.111 port 44060 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 01:24:01.615774 sshd[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 01:24:01.622848 systemd-logind[1484]: New session 19 of user core.
Mar 14 01:24:01.628982 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 14 01:24:02.102995 sshd[4193]: pam_unix(sshd:session): session closed for user core
Mar 14 01:24:02.107358 systemd-logind[1484]: Session 19 logged out. Waiting for processes to exit.
Mar 14 01:24:02.107903 systemd[1]: sshd@19-10.230.8.14:22-20.161.92.111:44060.service: Deactivated successfully.
Mar 14 01:24:02.111163 systemd[1]: session-19.scope: Deactivated successfully.
Mar 14 01:24:02.114161 systemd-logind[1484]: Removed session 19.
Mar 14 01:24:02.209928 systemd[1]: Started sshd@20-10.230.8.14:22-20.161.92.111:44062.service - OpenSSH per-connection server daemon (20.161.92.111:44062).
Mar 14 01:24:02.790671 sshd[4206]: Accepted publickey for core from 20.161.92.111 port 44062 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 01:24:02.791688 sshd[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 01:24:02.799092 systemd-logind[1484]: New session 20 of user core.
Mar 14 01:24:02.803819 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 14 01:24:03.563000 sshd[4206]: pam_unix(sshd:session): session closed for user core
Mar 14 01:24:03.575498 systemd-logind[1484]: Session 20 logged out. Waiting for processes to exit.
Mar 14 01:24:03.576275 systemd[1]: sshd@20-10.230.8.14:22-20.161.92.111:44062.service: Deactivated successfully.
Mar 14 01:24:03.579540 systemd[1]: session-20.scope: Deactivated successfully.
Mar 14 01:24:03.581191 systemd-logind[1484]: Removed session 20.
Mar 14 01:24:03.664968 systemd[1]: Started sshd@21-10.230.8.14:22-20.161.92.111:44078.service - OpenSSH per-connection server daemon (20.161.92.111:44078).
Mar 14 01:24:04.223599 sshd[4217]: Accepted publickey for core from 20.161.92.111 port 44078 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 01:24:04.225117 sshd[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 01:24:04.233164 systemd-logind[1484]: New session 21 of user core.
Mar 14 01:24:04.240798 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 14 01:24:05.537736 sshd[4217]: pam_unix(sshd:session): session closed for user core
Mar 14 01:24:05.547320 systemd[1]: sshd@21-10.230.8.14:22-20.161.92.111:44078.service: Deactivated successfully.
Mar 14 01:24:05.550006 systemd[1]: session-21.scope: Deactivated successfully.
Mar 14 01:24:05.551657 systemd-logind[1484]: Session 21 logged out. Waiting for processes to exit.
Mar 14 01:24:05.553533 systemd-logind[1484]: Removed session 21.
Mar 14 01:24:05.642958 systemd[1]: Started sshd@22-10.230.8.14:22-20.161.92.111:44084.service - OpenSSH per-connection server daemon (20.161.92.111:44084).
Mar 14 01:24:06.204400 sshd[4235]: Accepted publickey for core from 20.161.92.111 port 44084 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 01:24:06.207224 sshd[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 01:24:06.215218 systemd-logind[1484]: New session 22 of user core.
Mar 14 01:24:06.220765 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 14 01:24:06.887037 sshd[4235]: pam_unix(sshd:session): session closed for user core
Mar 14 01:24:06.893007 systemd[1]: sshd@22-10.230.8.14:22-20.161.92.111:44084.service: Deactivated successfully.
Mar 14 01:24:06.896692 systemd[1]: session-22.scope: Deactivated successfully.
Mar 14 01:24:06.898458 systemd-logind[1484]: Session 22 logged out. Waiting for processes to exit.
Mar 14 01:24:06.900209 systemd-logind[1484]: Removed session 22.
Mar 14 01:24:06.994004 systemd[1]: Started sshd@23-10.230.8.14:22-20.161.92.111:44092.service - OpenSSH per-connection server daemon (20.161.92.111:44092).
Mar 14 01:24:07.577842 sshd[4246]: Accepted publickey for core from 20.161.92.111 port 44092 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 01:24:07.580105 sshd[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 01:24:07.586624 systemd-logind[1484]: New session 23 of user core.
Mar 14 01:24:07.602928 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 14 01:24:08.063757 sshd[4246]: pam_unix(sshd:session): session closed for user core
Mar 14 01:24:08.072048 systemd[1]: sshd@23-10.230.8.14:22-20.161.92.111:44092.service: Deactivated successfully.
Mar 14 01:24:08.076155 systemd[1]: session-23.scope: Deactivated successfully.
Mar 14 01:24:08.077882 systemd-logind[1484]: Session 23 logged out. Waiting for processes to exit.
Mar 14 01:24:08.080278 systemd-logind[1484]: Removed session 23.
Mar 14 01:24:13.174933 systemd[1]: Started sshd@24-10.230.8.14:22-20.161.92.111:32812.service - OpenSSH per-connection server daemon (20.161.92.111:32812).
Mar 14 01:24:13.731669 sshd[4261]: Accepted publickey for core from 20.161.92.111 port 32812 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 01:24:13.733930 sshd[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 01:24:13.741467 systemd-logind[1484]: New session 24 of user core.
Mar 14 01:24:13.754906 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 14 01:24:14.225458 sshd[4261]: pam_unix(sshd:session): session closed for user core
Mar 14 01:24:14.231952 systemd[1]: sshd@24-10.230.8.14:22-20.161.92.111:32812.service: Deactivated successfully.
Mar 14 01:24:14.236220 systemd[1]: session-24.scope: Deactivated successfully.
Mar 14 01:24:14.237625 systemd-logind[1484]: Session 24 logged out. Waiting for processes to exit.
Mar 14 01:24:14.239117 systemd-logind[1484]: Removed session 24.
Mar 14 01:24:18.507964 systemd[1]: Started sshd@25-10.230.8.14:22-85.206.171.113:54128.service - OpenSSH per-connection server daemon (85.206.171.113:54128).
Mar 14 01:24:18.856613 sshd[4273]: Invalid user monitor from 85.206.171.113 port 54128
Mar 14 01:24:18.910389 sshd[4273]: Received disconnect from 85.206.171.113 port 54128:11: Bye Bye [preauth]
Mar 14 01:24:18.910389 sshd[4273]: Disconnected from invalid user monitor 85.206.171.113 port 54128 [preauth]
Mar 14 01:24:18.913217 systemd[1]: sshd@25-10.230.8.14:22-85.206.171.113:54128.service: Deactivated successfully.
Mar 14 01:24:19.332037 systemd[1]: Started sshd@26-10.230.8.14:22-20.161.92.111:32820.service - OpenSSH per-connection server daemon (20.161.92.111:32820).
Mar 14 01:24:19.881100 sshd[4278]: Accepted publickey for core from 20.161.92.111 port 32820 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 01:24:19.884477 sshd[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 01:24:19.891367 systemd-logind[1484]: New session 25 of user core.
Mar 14 01:24:19.902776 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 14 01:24:20.354930 sshd[4278]: pam_unix(sshd:session): session closed for user core
Mar 14 01:24:20.359999 systemd-logind[1484]: Session 25 logged out. Waiting for processes to exit.
Mar 14 01:24:20.360653 systemd[1]: sshd@26-10.230.8.14:22-20.161.92.111:32820.service: Deactivated successfully.
Mar 14 01:24:20.364218 systemd[1]: session-25.scope: Deactivated successfully.
Mar 14 01:24:20.365967 systemd-logind[1484]: Removed session 25.
Mar 14 01:24:20.468408 systemd[1]: Started sshd@27-10.230.8.14:22-20.161.92.111:58156.service - OpenSSH per-connection server daemon (20.161.92.111:58156).
Mar 14 01:24:21.019746 sshd[4291]: Accepted publickey for core from 20.161.92.111 port 58156 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 01:24:21.022026 sshd[4291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 01:24:21.029730 systemd-logind[1484]: New session 26 of user core.
Mar 14 01:24:21.035811 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 14 01:24:23.001620 containerd[1502]: time="2026-03-14T01:24:23.001519481Z" level=info msg="StopContainer for \"15bf3f4bddec4fd240a201e81743028345301ac2ac9c7b6c62b44b01a07939ae\" with timeout 30 (s)"
Mar 14 01:24:23.006058 containerd[1502]: time="2026-03-14T01:24:23.006008791Z" level=info msg="Stop container \"15bf3f4bddec4fd240a201e81743028345301ac2ac9c7b6c62b44b01a07939ae\" with signal terminated"
Mar 14 01:24:23.036793 systemd[1]: run-containerd-runc-k8s.io-f57f558166af7cd57b0f3608554dc88e0e781ffadec939872d3dd54eedf76084-runc.ohyOzU.mount: Deactivated successfully.
Mar 14 01:24:23.048518 systemd[1]: cri-containerd-15bf3f4bddec4fd240a201e81743028345301ac2ac9c7b6c62b44b01a07939ae.scope: Deactivated successfully.
Mar 14 01:24:23.076413 containerd[1502]: time="2026-03-14T01:24:23.076206037Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 14 01:24:23.081098 containerd[1502]: time="2026-03-14T01:24:23.081058450Z" level=info msg="StopContainer for \"f57f558166af7cd57b0f3608554dc88e0e781ffadec939872d3dd54eedf76084\" with timeout 2 (s)"
Mar 14 01:24:23.081718 containerd[1502]: time="2026-03-14T01:24:23.081684309Z" level=info msg="Stop container \"f57f558166af7cd57b0f3608554dc88e0e781ffadec939872d3dd54eedf76084\" with signal terminated"
Mar 14 01:24:23.095788 systemd-networkd[1419]: lxc_health: Link DOWN
Mar 14 01:24:23.095800 systemd-networkd[1419]: lxc_health: Lost carrier
Mar 14 01:24:23.110637 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15bf3f4bddec4fd240a201e81743028345301ac2ac9c7b6c62b44b01a07939ae-rootfs.mount: Deactivated successfully.
Mar 14 01:24:23.120147 containerd[1502]: time="2026-03-14T01:24:23.118951549Z" level=info msg="shim disconnected" id=15bf3f4bddec4fd240a201e81743028345301ac2ac9c7b6c62b44b01a07939ae namespace=k8s.io
Mar 14 01:24:23.120147 containerd[1502]: time="2026-03-14T01:24:23.119048053Z" level=warning msg="cleaning up after shim disconnected" id=15bf3f4bddec4fd240a201e81743028345301ac2ac9c7b6c62b44b01a07939ae namespace=k8s.io
Mar 14 01:24:23.120147 containerd[1502]: time="2026-03-14T01:24:23.119079637Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 01:24:23.128423 systemd[1]: cri-containerd-f57f558166af7cd57b0f3608554dc88e0e781ffadec939872d3dd54eedf76084.scope: Deactivated successfully.
Mar 14 01:24:23.128865 systemd[1]: cri-containerd-f57f558166af7cd57b0f3608554dc88e0e781ffadec939872d3dd54eedf76084.scope: Consumed 10.558s CPU time.
Mar 14 01:24:23.162739 containerd[1502]: time="2026-03-14T01:24:23.162680738Z" level=info msg="StopContainer for \"15bf3f4bddec4fd240a201e81743028345301ac2ac9c7b6c62b44b01a07939ae\" returns successfully"
Mar 14 01:24:23.163859 containerd[1502]: time="2026-03-14T01:24:23.163826316Z" level=info msg="StopPodSandbox for \"578d86e8caf77f6189a41278ed4e8cd9d7fd8a0d6fa7f57ebf5785ee87f3a38f\""
Mar 14 01:24:23.165702 containerd[1502]: time="2026-03-14T01:24:23.165663113Z" level=info msg="Container to stop \"15bf3f4bddec4fd240a201e81743028345301ac2ac9c7b6c62b44b01a07939ae\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 01:24:23.171259 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f57f558166af7cd57b0f3608554dc88e0e781ffadec939872d3dd54eedf76084-rootfs.mount: Deactivated successfully.
Mar 14 01:24:23.171817 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-578d86e8caf77f6189a41278ed4e8cd9d7fd8a0d6fa7f57ebf5785ee87f3a38f-shm.mount: Deactivated successfully.
Mar 14 01:24:23.173746 containerd[1502]: time="2026-03-14T01:24:23.173280928Z" level=info msg="shim disconnected" id=f57f558166af7cd57b0f3608554dc88e0e781ffadec939872d3dd54eedf76084 namespace=k8s.io
Mar 14 01:24:23.173746 containerd[1502]: time="2026-03-14T01:24:23.173346449Z" level=warning msg="cleaning up after shim disconnected" id=f57f558166af7cd57b0f3608554dc88e0e781ffadec939872d3dd54eedf76084 namespace=k8s.io
Mar 14 01:24:23.173746 containerd[1502]: time="2026-03-14T01:24:23.173361323Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 01:24:23.188832 systemd[1]: cri-containerd-578d86e8caf77f6189a41278ed4e8cd9d7fd8a0d6fa7f57ebf5785ee87f3a38f.scope: Deactivated successfully.
Mar 14 01:24:23.209477 containerd[1502]: time="2026-03-14T01:24:23.209330273Z" level=info msg="StopContainer for \"f57f558166af7cd57b0f3608554dc88e0e781ffadec939872d3dd54eedf76084\" returns successfully"
Mar 14 01:24:23.210161 containerd[1502]: time="2026-03-14T01:24:23.210065248Z" level=info msg="StopPodSandbox for \"042065d8636b30f8403b24329bd0bb529012b43da7b73f3cecc16885b361f114\""
Mar 14 01:24:23.210161 containerd[1502]: time="2026-03-14T01:24:23.210120170Z" level=info msg="Container to stop \"45f399701a189431c2cfdbe9aff64454bd5ba1e870a7e2cc208bccf41cacb6d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 01:24:23.210161 containerd[1502]: time="2026-03-14T01:24:23.210141409Z" level=info msg="Container to stop \"f57f558166af7cd57b0f3608554dc88e0e781ffadec939872d3dd54eedf76084\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 01:24:23.210161 containerd[1502]: time="2026-03-14T01:24:23.210157180Z" level=info msg="Container to stop \"f6eef982edb89dc6805e713d7a0f7fcd5fbbeda0f2a01497cfa1e083a95a00b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 01:24:23.210397 containerd[1502]: time="2026-03-14T01:24:23.210172921Z" level=info msg="Container to stop \"a77ca2a7133360fda9173bb36b2ff743de38a7ee1662f2a6eba781f42f431d7f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 01:24:23.210397 containerd[1502]: time="2026-03-14T01:24:23.210187837Z" level=info msg="Container to stop \"00a9eef4c31c0e17063007f8aeb8382654a139e8554c9c38f987109d0d460c24\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 01:24:23.214183 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-042065d8636b30f8403b24329bd0bb529012b43da7b73f3cecc16885b361f114-shm.mount: Deactivated successfully.
Mar 14 01:24:23.222206 systemd[1]: cri-containerd-042065d8636b30f8403b24329bd0bb529012b43da7b73f3cecc16885b361f114.scope: Deactivated successfully.
Mar 14 01:24:23.239437 containerd[1502]: time="2026-03-14T01:24:23.238828889Z" level=info msg="shim disconnected" id=578d86e8caf77f6189a41278ed4e8cd9d7fd8a0d6fa7f57ebf5785ee87f3a38f namespace=k8s.io
Mar 14 01:24:23.239437 containerd[1502]: time="2026-03-14T01:24:23.238927128Z" level=warning msg="cleaning up after shim disconnected" id=578d86e8caf77f6189a41278ed4e8cd9d7fd8a0d6fa7f57ebf5785ee87f3a38f namespace=k8s.io
Mar 14 01:24:23.239437 containerd[1502]: time="2026-03-14T01:24:23.238950541Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 01:24:23.272764 containerd[1502]: time="2026-03-14T01:24:23.272610706Z" level=info msg="shim disconnected" id=042065d8636b30f8403b24329bd0bb529012b43da7b73f3cecc16885b361f114 namespace=k8s.io
Mar 14 01:24:23.272764 containerd[1502]: time="2026-03-14T01:24:23.272682718Z" level=warning msg="cleaning up after shim disconnected" id=042065d8636b30f8403b24329bd0bb529012b43da7b73f3cecc16885b361f114 namespace=k8s.io
Mar 14 01:24:23.272764 containerd[1502]: time="2026-03-14T01:24:23.272701013Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 01:24:23.280302 containerd[1502]: time="2026-03-14T01:24:23.280038816Z" level=info msg="TearDown network for sandbox \"578d86e8caf77f6189a41278ed4e8cd9d7fd8a0d6fa7f57ebf5785ee87f3a38f\" successfully"
Mar 14 01:24:23.280302 containerd[1502]: time="2026-03-14T01:24:23.280117383Z" level=info msg="StopPodSandbox for \"578d86e8caf77f6189a41278ed4e8cd9d7fd8a0d6fa7f57ebf5785ee87f3a38f\" returns successfully"
Mar 14 01:24:23.305474 containerd[1502]: time="2026-03-14T01:24:23.305315704Z" level=info msg="TearDown network for sandbox \"042065d8636b30f8403b24329bd0bb529012b43da7b73f3cecc16885b361f114\" successfully"
Mar 14 01:24:23.307021 containerd[1502]: time="2026-03-14T01:24:23.306991167Z" level=info msg="StopPodSandbox for \"042065d8636b30f8403b24329bd0bb529012b43da7b73f3cecc16885b361f114\" returns successfully"
Mar 14 01:24:23.330592 kubelet[2694]: I0314 01:24:23.330487 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9754c666-f628-44ae-a769-2a3bf6995f38-cilium-config-path\") pod \"9754c666-f628-44ae-a769-2a3bf6995f38\" (UID: \"9754c666-f628-44ae-a769-2a3bf6995f38\") "
Mar 14 01:24:23.330592 kubelet[2694]: I0314 01:24:23.330600 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2n68s\" (UniqueName: \"kubernetes.io/projected/9754c666-f628-44ae-a769-2a3bf6995f38-kube-api-access-2n68s\") pod \"9754c666-f628-44ae-a769-2a3bf6995f38\" (UID: \"9754c666-f628-44ae-a769-2a3bf6995f38\") "
Mar 14 01:24:23.365141 kubelet[2694]: I0314 01:24:23.359444 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9754c666-f628-44ae-a769-2a3bf6995f38-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9754c666-f628-44ae-a769-2a3bf6995f38" (UID: "9754c666-f628-44ae-a769-2a3bf6995f38"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 14 01:24:23.367029 kubelet[2694]: I0314 01:24:23.360215 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9754c666-f628-44ae-a769-2a3bf6995f38-kube-api-access-2n68s" (OuterVolumeSpecName: "kube-api-access-2n68s") pod "9754c666-f628-44ae-a769-2a3bf6995f38" (UID: "9754c666-f628-44ae-a769-2a3bf6995f38"). InnerVolumeSpecName "kube-api-access-2n68s". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 14 01:24:23.432395 kubelet[2694]: I0314 01:24:23.432323 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-xtables-lock\") pod \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\" (UID: \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\") "
Mar 14 01:24:23.432750 kubelet[2694]: I0314 01:24:23.432725 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-host-proc-sys-net\") pod \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\" (UID: \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\") "
Mar 14 01:24:23.432898 kubelet[2694]: I0314 01:24:23.432874 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-cilium-cgroup\") pod \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\" (UID: \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\") "
Mar 14 01:24:23.433031 kubelet[2694]: I0314 01:24:23.433009 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-cilium-run\") pod \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\" (UID: \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\") "
Mar 14 01:24:23.433177 kubelet[2694]: I0314 01:24:23.433154 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-lib-modules\") pod \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\" (UID: \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\") "
Mar 14 01:24:23.433327 kubelet[2694]: I0314 01:24:23.433304 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mccfx\" (UniqueName: \"kubernetes.io/projected/5fff5aa8-0279-47b8-ad25-38b29d746fa1-kube-api-access-mccfx\") pod \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\" (UID: \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\") "
Mar 14 01:24:23.433446 kubelet[2694]: I0314 01:24:23.433424 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-cni-path\") pod \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\" (UID: \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\") "
Mar 14 01:24:23.433797 kubelet[2694]: I0314 01:24:23.433588 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-host-proc-sys-kernel\") pod \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\" (UID: \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\") "
Mar 14 01:24:23.433797 kubelet[2694]: I0314 01:24:23.433629 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5fff5aa8-0279-47b8-ad25-38b29d746fa1-cilium-config-path\") pod \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\" (UID: \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\") "
Mar 14 01:24:23.433797 kubelet[2694]: I0314 01:24:23.433655 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-hostproc\") pod \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\" (UID: \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\") "
Mar 14 01:24:23.433797 kubelet[2694]: I0314 01:24:23.433685 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5fff5aa8-0279-47b8-ad25-38b29d746fa1-clustermesh-secrets\") pod \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\" (UID: \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\") "
Mar 14 01:24:23.433797 kubelet[2694]: I0314 01:24:23.433741 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5fff5aa8-0279-47b8-ad25-38b29d746fa1-hubble-tls\") pod \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\" (UID: \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\") "
Mar 14 01:24:23.434594 kubelet[2694]: I0314 01:24:23.434105 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-etc-cni-netd\") pod \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\" (UID: \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\") "
Mar 14 01:24:23.434594 kubelet[2694]: I0314 01:24:23.434146 2694 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-bpf-maps\") pod \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\" (UID: \"5fff5aa8-0279-47b8-ad25-38b29d746fa1\") "
Mar 14 01:24:23.439325 kubelet[2694]: I0314 01:24:23.439285 2694 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2n68s\" (UniqueName: \"kubernetes.io/projected/9754c666-f628-44ae-a769-2a3bf6995f38-kube-api-access-2n68s\") on node \"srv-ouubu.gb1.brightbox.com\" DevicePath \"\""
Mar 14 01:24:23.439627 kubelet[2694]: I0314 01:24:23.439479 2694 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9754c666-f628-44ae-a769-2a3bf6995f38-cilium-config-path\") on node \"srv-ouubu.gb1.brightbox.com\" DevicePath \"\""
Mar 14 01:24:23.442650 kubelet[2694]: I0314 01:24:23.432492 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5fff5aa8-0279-47b8-ad25-38b29d746fa1" (UID: "5fff5aa8-0279-47b8-ad25-38b29d746fa1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 01:24:23.442650 kubelet[2694]: I0314 01:24:23.432793 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5fff5aa8-0279-47b8-ad25-38b29d746fa1" (UID: "5fff5aa8-0279-47b8-ad25-38b29d746fa1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 01:24:23.442650 kubelet[2694]: I0314 01:24:23.432938 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5fff5aa8-0279-47b8-ad25-38b29d746fa1" (UID: "5fff5aa8-0279-47b8-ad25-38b29d746fa1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 01:24:23.442650 kubelet[2694]: I0314 01:24:23.433079 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5fff5aa8-0279-47b8-ad25-38b29d746fa1" (UID: "5fff5aa8-0279-47b8-ad25-38b29d746fa1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 01:24:23.442650 kubelet[2694]: I0314 01:24:23.433222 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5fff5aa8-0279-47b8-ad25-38b29d746fa1" (UID: "5fff5aa8-0279-47b8-ad25-38b29d746fa1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 01:24:23.443759 kubelet[2694]: I0314 01:24:23.437172 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fff5aa8-0279-47b8-ad25-38b29d746fa1-kube-api-access-mccfx" (OuterVolumeSpecName: "kube-api-access-mccfx") pod "5fff5aa8-0279-47b8-ad25-38b29d746fa1" (UID: "5fff5aa8-0279-47b8-ad25-38b29d746fa1"). InnerVolumeSpecName "kube-api-access-mccfx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 14 01:24:23.443759 kubelet[2694]: I0314 01:24:23.439535 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5fff5aa8-0279-47b8-ad25-38b29d746fa1" (UID: "5fff5aa8-0279-47b8-ad25-38b29d746fa1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 01:24:23.443759 kubelet[2694]: I0314 01:24:23.439601 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-hostproc" (OuterVolumeSpecName: "hostproc") pod "5fff5aa8-0279-47b8-ad25-38b29d746fa1" (UID: "5fff5aa8-0279-47b8-ad25-38b29d746fa1"). InnerVolumeSpecName "hostproc".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 01:24:23.443759 kubelet[2694]: I0314 01:24:23.442225 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fff5aa8-0279-47b8-ad25-38b29d746fa1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5fff5aa8-0279-47b8-ad25-38b29d746fa1" (UID: "5fff5aa8-0279-47b8-ad25-38b29d746fa1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 01:24:23.443759 kubelet[2694]: I0314 01:24:23.442435 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-cni-path" (OuterVolumeSpecName: "cni-path") pod "5fff5aa8-0279-47b8-ad25-38b29d746fa1" (UID: "5fff5aa8-0279-47b8-ad25-38b29d746fa1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 01:24:23.443998 kubelet[2694]: I0314 01:24:23.442480 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5fff5aa8-0279-47b8-ad25-38b29d746fa1" (UID: "5fff5aa8-0279-47b8-ad25-38b29d746fa1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 01:24:23.444536 kubelet[2694]: I0314 01:24:23.444244 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fff5aa8-0279-47b8-ad25-38b29d746fa1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5fff5aa8-0279-47b8-ad25-38b29d746fa1" (UID: "5fff5aa8-0279-47b8-ad25-38b29d746fa1"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 14 01:24:23.444536 kubelet[2694]: I0314 01:24:23.444301 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5fff5aa8-0279-47b8-ad25-38b29d746fa1" (UID: "5fff5aa8-0279-47b8-ad25-38b29d746fa1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 01:24:23.448596 kubelet[2694]: I0314 01:24:23.448044 2694 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fff5aa8-0279-47b8-ad25-38b29d746fa1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5fff5aa8-0279-47b8-ad25-38b29d746fa1" (UID: "5fff5aa8-0279-47b8-ad25-38b29d746fa1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 14 01:24:23.539907 kubelet[2694]: I0314 01:24:23.539717 2694 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-cilium-run\") on node \"srv-ouubu.gb1.brightbox.com\" DevicePath \"\"" Mar 14 01:24:23.539907 kubelet[2694]: I0314 01:24:23.539781 2694 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-lib-modules\") on node \"srv-ouubu.gb1.brightbox.com\" DevicePath \"\"" Mar 14 01:24:23.539907 kubelet[2694]: I0314 01:24:23.539801 2694 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mccfx\" (UniqueName: \"kubernetes.io/projected/5fff5aa8-0279-47b8-ad25-38b29d746fa1-kube-api-access-mccfx\") on node \"srv-ouubu.gb1.brightbox.com\" DevicePath \"\"" Mar 14 01:24:23.539907 kubelet[2694]: I0314 01:24:23.539830 2694 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-cni-path\") 
on node \"srv-ouubu.gb1.brightbox.com\" DevicePath \"\"" Mar 14 01:24:23.539907 kubelet[2694]: I0314 01:24:23.539846 2694 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-host-proc-sys-kernel\") on node \"srv-ouubu.gb1.brightbox.com\" DevicePath \"\"" Mar 14 01:24:23.539907 kubelet[2694]: I0314 01:24:23.539862 2694 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5fff5aa8-0279-47b8-ad25-38b29d746fa1-cilium-config-path\") on node \"srv-ouubu.gb1.brightbox.com\" DevicePath \"\"" Mar 14 01:24:23.539907 kubelet[2694]: I0314 01:24:23.539879 2694 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-hostproc\") on node \"srv-ouubu.gb1.brightbox.com\" DevicePath \"\"" Mar 14 01:24:23.539907 kubelet[2694]: I0314 01:24:23.539895 2694 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5fff5aa8-0279-47b8-ad25-38b29d746fa1-clustermesh-secrets\") on node \"srv-ouubu.gb1.brightbox.com\" DevicePath \"\"" Mar 14 01:24:23.540386 kubelet[2694]: I0314 01:24:23.539910 2694 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5fff5aa8-0279-47b8-ad25-38b29d746fa1-hubble-tls\") on node \"srv-ouubu.gb1.brightbox.com\" DevicePath \"\"" Mar 14 01:24:23.540386 kubelet[2694]: I0314 01:24:23.539925 2694 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-etc-cni-netd\") on node \"srv-ouubu.gb1.brightbox.com\" DevicePath \"\"" Mar 14 01:24:23.540386 kubelet[2694]: I0314 01:24:23.539940 2694 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-bpf-maps\") on node \"srv-ouubu.gb1.brightbox.com\" DevicePath \"\"" Mar 14 01:24:23.540386 kubelet[2694]: I0314 01:24:23.539956 2694 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-xtables-lock\") on node \"srv-ouubu.gb1.brightbox.com\" DevicePath \"\"" Mar 14 01:24:23.540386 kubelet[2694]: I0314 01:24:23.539973 2694 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-host-proc-sys-net\") on node \"srv-ouubu.gb1.brightbox.com\" DevicePath \"\"" Mar 14 01:24:23.540386 kubelet[2694]: I0314 01:24:23.539988 2694 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5fff5aa8-0279-47b8-ad25-38b29d746fa1-cilium-cgroup\") on node \"srv-ouubu.gb1.brightbox.com\" DevicePath \"\"" Mar 14 01:24:24.022926 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-578d86e8caf77f6189a41278ed4e8cd9d7fd8a0d6fa7f57ebf5785ee87f3a38f-rootfs.mount: Deactivated successfully. Mar 14 01:24:24.023075 systemd[1]: var-lib-kubelet-pods-9754c666\x2df628\x2d44ae\x2da769\x2d2a3bf6995f38-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2n68s.mount: Deactivated successfully. Mar 14 01:24:24.023200 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-042065d8636b30f8403b24329bd0bb529012b43da7b73f3cecc16885b361f114-rootfs.mount: Deactivated successfully. Mar 14 01:24:24.023309 systemd[1]: var-lib-kubelet-pods-5fff5aa8\x2d0279\x2d47b8\x2dad25\x2d38b29d746fa1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmccfx.mount: Deactivated successfully. Mar 14 01:24:24.023433 systemd[1]: var-lib-kubelet-pods-5fff5aa8\x2d0279\x2d47b8\x2dad25\x2d38b29d746fa1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Mar 14 01:24:24.023542 systemd[1]: var-lib-kubelet-pods-5fff5aa8\x2d0279\x2d47b8\x2dad25\x2d38b29d746fa1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 14 01:24:24.204633 systemd[1]: Removed slice kubepods-burstable-pod5fff5aa8_0279_47b8_ad25_38b29d746fa1.slice - libcontainer container kubepods-burstable-pod5fff5aa8_0279_47b8_ad25_38b29d746fa1.slice. Mar 14 01:24:24.205030 systemd[1]: kubepods-burstable-pod5fff5aa8_0279_47b8_ad25_38b29d746fa1.slice: Consumed 10.686s CPU time. Mar 14 01:24:24.230511 kubelet[2694]: I0314 01:24:24.230303 2694 scope.go:117] "RemoveContainer" containerID="f57f558166af7cd57b0f3608554dc88e0e781ffadec939872d3dd54eedf76084" Mar 14 01:24:24.235311 containerd[1502]: time="2026-03-14T01:24:24.234699317Z" level=info msg="RemoveContainer for \"f57f558166af7cd57b0f3608554dc88e0e781ffadec939872d3dd54eedf76084\"" Mar 14 01:24:24.245985 containerd[1502]: time="2026-03-14T01:24:24.245454773Z" level=info msg="RemoveContainer for \"f57f558166af7cd57b0f3608554dc88e0e781ffadec939872d3dd54eedf76084\" returns successfully" Mar 14 01:24:24.247117 systemd[1]: Removed slice kubepods-besteffort-pod9754c666_f628_44ae_a769_2a3bf6995f38.slice - libcontainer container kubepods-besteffort-pod9754c666_f628_44ae_a769_2a3bf6995f38.slice. 
Mar 14 01:24:24.253044 kubelet[2694]: I0314 01:24:24.252854 2694 scope.go:117] "RemoveContainer" containerID="45f399701a189431c2cfdbe9aff64454bd5ba1e870a7e2cc208bccf41cacb6d5" Mar 14 01:24:24.255522 containerd[1502]: time="2026-03-14T01:24:24.255478512Z" level=info msg="RemoveContainer for \"45f399701a189431c2cfdbe9aff64454bd5ba1e870a7e2cc208bccf41cacb6d5\"" Mar 14 01:24:24.276035 containerd[1502]: time="2026-03-14T01:24:24.275779998Z" level=info msg="RemoveContainer for \"45f399701a189431c2cfdbe9aff64454bd5ba1e870a7e2cc208bccf41cacb6d5\" returns successfully" Mar 14 01:24:24.277018 kubelet[2694]: I0314 01:24:24.276248 2694 scope.go:117] "RemoveContainer" containerID="a77ca2a7133360fda9173bb36b2ff743de38a7ee1662f2a6eba781f42f431d7f" Mar 14 01:24:24.278412 containerd[1502]: time="2026-03-14T01:24:24.278374418Z" level=info msg="RemoveContainer for \"a77ca2a7133360fda9173bb36b2ff743de38a7ee1662f2a6eba781f42f431d7f\"" Mar 14 01:24:24.283728 containerd[1502]: time="2026-03-14T01:24:24.283640177Z" level=info msg="RemoveContainer for \"a77ca2a7133360fda9173bb36b2ff743de38a7ee1662f2a6eba781f42f431d7f\" returns successfully" Mar 14 01:24:24.284120 kubelet[2694]: I0314 01:24:24.284068 2694 scope.go:117] "RemoveContainer" containerID="f6eef982edb89dc6805e713d7a0f7fcd5fbbeda0f2a01497cfa1e083a95a00b2" Mar 14 01:24:24.286470 containerd[1502]: time="2026-03-14T01:24:24.286430488Z" level=info msg="RemoveContainer for \"f6eef982edb89dc6805e713d7a0f7fcd5fbbeda0f2a01497cfa1e083a95a00b2\"" Mar 14 01:24:24.292146 containerd[1502]: time="2026-03-14T01:24:24.291913149Z" level=info msg="RemoveContainer for \"f6eef982edb89dc6805e713d7a0f7fcd5fbbeda0f2a01497cfa1e083a95a00b2\" returns successfully" Mar 14 01:24:24.292273 kubelet[2694]: I0314 01:24:24.292205 2694 scope.go:117] "RemoveContainer" containerID="00a9eef4c31c0e17063007f8aeb8382654a139e8554c9c38f987109d0d460c24" Mar 14 01:24:24.296202 containerd[1502]: time="2026-03-14T01:24:24.296120596Z" level=info msg="RemoveContainer for 
\"00a9eef4c31c0e17063007f8aeb8382654a139e8554c9c38f987109d0d460c24\"" Mar 14 01:24:24.300206 containerd[1502]: time="2026-03-14T01:24:24.300170351Z" level=info msg="RemoveContainer for \"00a9eef4c31c0e17063007f8aeb8382654a139e8554c9c38f987109d0d460c24\" returns successfully" Mar 14 01:24:24.301244 kubelet[2694]: I0314 01:24:24.301198 2694 scope.go:117] "RemoveContainer" containerID="f57f558166af7cd57b0f3608554dc88e0e781ffadec939872d3dd54eedf76084" Mar 14 01:24:24.315514 containerd[1502]: time="2026-03-14T01:24:24.307187583Z" level=error msg="ContainerStatus for \"f57f558166af7cd57b0f3608554dc88e0e781ffadec939872d3dd54eedf76084\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f57f558166af7cd57b0f3608554dc88e0e781ffadec939872d3dd54eedf76084\": not found" Mar 14 01:24:24.323141 kubelet[2694]: E0314 01:24:24.323042 2694 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f57f558166af7cd57b0f3608554dc88e0e781ffadec939872d3dd54eedf76084\": not found" containerID="f57f558166af7cd57b0f3608554dc88e0e781ffadec939872d3dd54eedf76084" Mar 14 01:24:24.330642 kubelet[2694]: I0314 01:24:24.323128 2694 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f57f558166af7cd57b0f3608554dc88e0e781ffadec939872d3dd54eedf76084"} err="failed to get container status \"f57f558166af7cd57b0f3608554dc88e0e781ffadec939872d3dd54eedf76084\": rpc error: code = NotFound desc = an error occurred when try to find container \"f57f558166af7cd57b0f3608554dc88e0e781ffadec939872d3dd54eedf76084\": not found" Mar 14 01:24:24.331209 kubelet[2694]: I0314 01:24:24.330647 2694 scope.go:117] "RemoveContainer" containerID="45f399701a189431c2cfdbe9aff64454bd5ba1e870a7e2cc208bccf41cacb6d5" Mar 14 01:24:24.331301 containerd[1502]: time="2026-03-14T01:24:24.331090786Z" level=error msg="ContainerStatus for 
\"45f399701a189431c2cfdbe9aff64454bd5ba1e870a7e2cc208bccf41cacb6d5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"45f399701a189431c2cfdbe9aff64454bd5ba1e870a7e2cc208bccf41cacb6d5\": not found" Mar 14 01:24:24.331363 kubelet[2694]: E0314 01:24:24.331305 2694 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"45f399701a189431c2cfdbe9aff64454bd5ba1e870a7e2cc208bccf41cacb6d5\": not found" containerID="45f399701a189431c2cfdbe9aff64454bd5ba1e870a7e2cc208bccf41cacb6d5" Mar 14 01:24:24.331363 kubelet[2694]: I0314 01:24:24.331342 2694 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"45f399701a189431c2cfdbe9aff64454bd5ba1e870a7e2cc208bccf41cacb6d5"} err="failed to get container status \"45f399701a189431c2cfdbe9aff64454bd5ba1e870a7e2cc208bccf41cacb6d5\": rpc error: code = NotFound desc = an error occurred when try to find container \"45f399701a189431c2cfdbe9aff64454bd5ba1e870a7e2cc208bccf41cacb6d5\": not found" Mar 14 01:24:24.331459 kubelet[2694]: I0314 01:24:24.331368 2694 scope.go:117] "RemoveContainer" containerID="a77ca2a7133360fda9173bb36b2ff743de38a7ee1662f2a6eba781f42f431d7f" Mar 14 01:24:24.331799 containerd[1502]: time="2026-03-14T01:24:24.331762024Z" level=error msg="ContainerStatus for \"a77ca2a7133360fda9173bb36b2ff743de38a7ee1662f2a6eba781f42f431d7f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a77ca2a7133360fda9173bb36b2ff743de38a7ee1662f2a6eba781f42f431d7f\": not found" Mar 14 01:24:24.332221 kubelet[2694]: E0314 01:24:24.331992 2694 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a77ca2a7133360fda9173bb36b2ff743de38a7ee1662f2a6eba781f42f431d7f\": not found" 
containerID="a77ca2a7133360fda9173bb36b2ff743de38a7ee1662f2a6eba781f42f431d7f" Mar 14 01:24:24.332221 kubelet[2694]: I0314 01:24:24.332037 2694 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a77ca2a7133360fda9173bb36b2ff743de38a7ee1662f2a6eba781f42f431d7f"} err="failed to get container status \"a77ca2a7133360fda9173bb36b2ff743de38a7ee1662f2a6eba781f42f431d7f\": rpc error: code = NotFound desc = an error occurred when try to find container \"a77ca2a7133360fda9173bb36b2ff743de38a7ee1662f2a6eba781f42f431d7f\": not found" Mar 14 01:24:24.332221 kubelet[2694]: I0314 01:24:24.332070 2694 scope.go:117] "RemoveContainer" containerID="f6eef982edb89dc6805e713d7a0f7fcd5fbbeda0f2a01497cfa1e083a95a00b2" Mar 14 01:24:24.332705 containerd[1502]: time="2026-03-14T01:24:24.332597298Z" level=error msg="ContainerStatus for \"f6eef982edb89dc6805e713d7a0f7fcd5fbbeda0f2a01497cfa1e083a95a00b2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f6eef982edb89dc6805e713d7a0f7fcd5fbbeda0f2a01497cfa1e083a95a00b2\": not found" Mar 14 01:24:24.332846 kubelet[2694]: E0314 01:24:24.332792 2694 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f6eef982edb89dc6805e713d7a0f7fcd5fbbeda0f2a01497cfa1e083a95a00b2\": not found" containerID="f6eef982edb89dc6805e713d7a0f7fcd5fbbeda0f2a01497cfa1e083a95a00b2" Mar 14 01:24:24.332929 kubelet[2694]: I0314 01:24:24.332853 2694 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f6eef982edb89dc6805e713d7a0f7fcd5fbbeda0f2a01497cfa1e083a95a00b2"} err="failed to get container status \"f6eef982edb89dc6805e713d7a0f7fcd5fbbeda0f2a01497cfa1e083a95a00b2\": rpc error: code = NotFound desc = an error occurred when try to find container \"f6eef982edb89dc6805e713d7a0f7fcd5fbbeda0f2a01497cfa1e083a95a00b2\": not found" Mar 14 
01:24:24.332929 kubelet[2694]: I0314 01:24:24.332883 2694 scope.go:117] "RemoveContainer" containerID="00a9eef4c31c0e17063007f8aeb8382654a139e8554c9c38f987109d0d460c24" Mar 14 01:24:24.333187 containerd[1502]: time="2026-03-14T01:24:24.333123568Z" level=error msg="ContainerStatus for \"00a9eef4c31c0e17063007f8aeb8382654a139e8554c9c38f987109d0d460c24\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"00a9eef4c31c0e17063007f8aeb8382654a139e8554c9c38f987109d0d460c24\": not found" Mar 14 01:24:24.333323 kubelet[2694]: E0314 01:24:24.333292 2694 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"00a9eef4c31c0e17063007f8aeb8382654a139e8554c9c38f987109d0d460c24\": not found" containerID="00a9eef4c31c0e17063007f8aeb8382654a139e8554c9c38f987109d0d460c24" Mar 14 01:24:24.333394 kubelet[2694]: I0314 01:24:24.333328 2694 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"00a9eef4c31c0e17063007f8aeb8382654a139e8554c9c38f987109d0d460c24"} err="failed to get container status \"00a9eef4c31c0e17063007f8aeb8382654a139e8554c9c38f987109d0d460c24\": rpc error: code = NotFound desc = an error occurred when try to find container \"00a9eef4c31c0e17063007f8aeb8382654a139e8554c9c38f987109d0d460c24\": not found" Mar 14 01:24:24.333394 kubelet[2694]: I0314 01:24:24.333350 2694 scope.go:117] "RemoveContainer" containerID="15bf3f4bddec4fd240a201e81743028345301ac2ac9c7b6c62b44b01a07939ae" Mar 14 01:24:24.335171 containerd[1502]: time="2026-03-14T01:24:24.335126116Z" level=info msg="RemoveContainer for \"15bf3f4bddec4fd240a201e81743028345301ac2ac9c7b6c62b44b01a07939ae\"" Mar 14 01:24:24.341032 containerd[1502]: time="2026-03-14T01:24:24.340976121Z" level=info msg="RemoveContainer for \"15bf3f4bddec4fd240a201e81743028345301ac2ac9c7b6c62b44b01a07939ae\" returns successfully" Mar 14 01:24:24.760153 kubelet[2694]: 
I0314 01:24:24.760067 2694 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fff5aa8-0279-47b8-ad25-38b29d746fa1" path="/var/lib/kubelet/pods/5fff5aa8-0279-47b8-ad25-38b29d746fa1/volumes" Mar 14 01:24:24.780039 kubelet[2694]: I0314 01:24:24.779982 2694 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9754c666-f628-44ae-a769-2a3bf6995f38" path="/var/lib/kubelet/pods/9754c666-f628-44ae-a769-2a3bf6995f38/volumes" Mar 14 01:24:24.996337 sshd[4291]: pam_unix(sshd:session): session closed for user core Mar 14 01:24:25.004206 systemd[1]: sshd@27-10.230.8.14:22-20.161.92.111:58156.service: Deactivated successfully. Mar 14 01:24:25.007338 systemd[1]: session-26.scope: Deactivated successfully. Mar 14 01:24:25.010017 systemd-logind[1484]: Session 26 logged out. Waiting for processes to exit. Mar 14 01:24:25.011739 systemd-logind[1484]: Removed session 26. Mar 14 01:24:25.100955 systemd[1]: Started sshd@28-10.230.8.14:22-20.161.92.111:58166.service - OpenSSH per-connection server daemon (20.161.92.111:58166). Mar 14 01:24:25.666587 sshd[4455]: Accepted publickey for core from 20.161.92.111 port 58166 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ Mar 14 01:24:25.668897 sshd[4455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 01:24:25.677797 systemd-logind[1484]: New session 27 of user core. Mar 14 01:24:25.687810 systemd[1]: Started session-27.scope - Session 27 of User core. 
Mar 14 01:24:25.932642 kubelet[2694]: E0314 01:24:25.932541 2694 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 14 01:24:26.766426 kubelet[2694]: I0314 01:24:26.766380 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9bf54d27-353a-462a-a823-a4c97ea27a48-cilium-cgroup\") pod \"cilium-r26h5\" (UID: \"9bf54d27-353a-462a-a823-a4c97ea27a48\") " pod="kube-system/cilium-r26h5" Mar 14 01:24:26.767833 kubelet[2694]: I0314 01:24:26.767804 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9bf54d27-353a-462a-a823-a4c97ea27a48-etc-cni-netd\") pod \"cilium-r26h5\" (UID: \"9bf54d27-353a-462a-a823-a4c97ea27a48\") " pod="kube-system/cilium-r26h5" Mar 14 01:24:26.767980 kubelet[2694]: I0314 01:24:26.767955 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h866r\" (UniqueName: \"kubernetes.io/projected/9bf54d27-353a-462a-a823-a4c97ea27a48-kube-api-access-h866r\") pod \"cilium-r26h5\" (UID: \"9bf54d27-353a-462a-a823-a4c97ea27a48\") " pod="kube-system/cilium-r26h5" Mar 14 01:24:26.768631 kubelet[2694]: I0314 01:24:26.768606 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9bf54d27-353a-462a-a823-a4c97ea27a48-bpf-maps\") pod \"cilium-r26h5\" (UID: \"9bf54d27-353a-462a-a823-a4c97ea27a48\") " pod="kube-system/cilium-r26h5" Mar 14 01:24:26.768861 kubelet[2694]: I0314 01:24:26.768833 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/9bf54d27-353a-462a-a823-a4c97ea27a48-cilium-run\") pod \"cilium-r26h5\" (UID: \"9bf54d27-353a-462a-a823-a4c97ea27a48\") " pod="kube-system/cilium-r26h5" Mar 14 01:24:26.769074 kubelet[2694]: I0314 01:24:26.769048 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9bf54d27-353a-462a-a823-a4c97ea27a48-host-proc-sys-net\") pod \"cilium-r26h5\" (UID: \"9bf54d27-353a-462a-a823-a4c97ea27a48\") " pod="kube-system/cilium-r26h5" Mar 14 01:24:26.769213 kubelet[2694]: I0314 01:24:26.769185 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9bf54d27-353a-462a-a823-a4c97ea27a48-cni-path\") pod \"cilium-r26h5\" (UID: \"9bf54d27-353a-462a-a823-a4c97ea27a48\") " pod="kube-system/cilium-r26h5" Mar 14 01:24:26.769326 kubelet[2694]: I0314 01:24:26.769304 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9bf54d27-353a-462a-a823-a4c97ea27a48-xtables-lock\") pod \"cilium-r26h5\" (UID: \"9bf54d27-353a-462a-a823-a4c97ea27a48\") " pod="kube-system/cilium-r26h5" Mar 14 01:24:26.771511 kubelet[2694]: I0314 01:24:26.769440 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9bf54d27-353a-462a-a823-a4c97ea27a48-clustermesh-secrets\") pod \"cilium-r26h5\" (UID: \"9bf54d27-353a-462a-a823-a4c97ea27a48\") " pod="kube-system/cilium-r26h5" Mar 14 01:24:26.771511 kubelet[2694]: I0314 01:24:26.769478 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9bf54d27-353a-462a-a823-a4c97ea27a48-cilium-config-path\") pod \"cilium-r26h5\" (UID: 
\"9bf54d27-353a-462a-a823-a4c97ea27a48\") " pod="kube-system/cilium-r26h5" Mar 14 01:24:26.771511 kubelet[2694]: I0314 01:24:26.769523 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9bf54d27-353a-462a-a823-a4c97ea27a48-hubble-tls\") pod \"cilium-r26h5\" (UID: \"9bf54d27-353a-462a-a823-a4c97ea27a48\") " pod="kube-system/cilium-r26h5" Mar 14 01:24:26.771511 kubelet[2694]: I0314 01:24:26.769580 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9bf54d27-353a-462a-a823-a4c97ea27a48-hostproc\") pod \"cilium-r26h5\" (UID: \"9bf54d27-353a-462a-a823-a4c97ea27a48\") " pod="kube-system/cilium-r26h5" Mar 14 01:24:26.771511 kubelet[2694]: I0314 01:24:26.769610 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9bf54d27-353a-462a-a823-a4c97ea27a48-cilium-ipsec-secrets\") pod \"cilium-r26h5\" (UID: \"9bf54d27-353a-462a-a823-a4c97ea27a48\") " pod="kube-system/cilium-r26h5" Mar 14 01:24:26.771511 kubelet[2694]: I0314 01:24:26.769637 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9bf54d27-353a-462a-a823-a4c97ea27a48-lib-modules\") pod \"cilium-r26h5\" (UID: \"9bf54d27-353a-462a-a823-a4c97ea27a48\") " pod="kube-system/cilium-r26h5" Mar 14 01:24:26.771835 kubelet[2694]: I0314 01:24:26.769662 2694 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9bf54d27-353a-462a-a823-a4c97ea27a48-host-proc-sys-kernel\") pod \"cilium-r26h5\" (UID: \"9bf54d27-353a-462a-a823-a4c97ea27a48\") " pod="kube-system/cilium-r26h5" Mar 14 01:24:26.781948 systemd[1]: Created slice 
kubepods-burstable-pod9bf54d27_353a_462a_a823_a4c97ea27a48.slice - libcontainer container kubepods-burstable-pod9bf54d27_353a_462a_a823_a4c97ea27a48.slice. Mar 14 01:24:26.783939 sshd[4455]: pam_unix(sshd:session): session closed for user core Mar 14 01:24:26.795659 systemd[1]: sshd@28-10.230.8.14:22-20.161.92.111:58166.service: Deactivated successfully. Mar 14 01:24:26.799639 systemd[1]: session-27.scope: Deactivated successfully. Mar 14 01:24:26.803585 systemd-logind[1484]: Session 27 logged out. Waiting for processes to exit. Mar 14 01:24:26.809055 systemd-logind[1484]: Removed session 27. Mar 14 01:24:26.894956 systemd[1]: Started sshd@29-10.230.8.14:22-20.161.92.111:58180.service - OpenSSH per-connection server daemon (20.161.92.111:58180). Mar 14 01:24:27.109742 containerd[1502]: time="2026-03-14T01:24:27.108700183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r26h5,Uid:9bf54d27-353a-462a-a823-a4c97ea27a48,Namespace:kube-system,Attempt:0,}" Mar 14 01:24:27.138939 containerd[1502]: time="2026-03-14T01:24:27.138673587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 01:24:27.140084 containerd[1502]: time="2026-03-14T01:24:27.138872561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 01:24:27.140215 containerd[1502]: time="2026-03-14T01:24:27.140073095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 01:24:27.140392 containerd[1502]: time="2026-03-14T01:24:27.140213847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 01:24:27.168085 systemd[1]: Started cri-containerd-9b9734a297b6ba5d102ebd26c66e8920af829d3fb892c031d64b875238ccba34.scope - libcontainer container 9b9734a297b6ba5d102ebd26c66e8920af829d3fb892c031d64b875238ccba34. Mar 14 01:24:27.206585 containerd[1502]: time="2026-03-14T01:24:27.206382916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r26h5,Uid:9bf54d27-353a-462a-a823-a4c97ea27a48,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b9734a297b6ba5d102ebd26c66e8920af829d3fb892c031d64b875238ccba34\"" Mar 14 01:24:27.219982 containerd[1502]: time="2026-03-14T01:24:27.219355654Z" level=info msg="CreateContainer within sandbox \"9b9734a297b6ba5d102ebd26c66e8920af829d3fb892c031d64b875238ccba34\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 14 01:24:27.251049 containerd[1502]: time="2026-03-14T01:24:27.250975589Z" level=info msg="CreateContainer within sandbox \"9b9734a297b6ba5d102ebd26c66e8920af829d3fb892c031d64b875238ccba34\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3afc8b1c4eaaf89f4a6a2edfdfa6254838d1649b60f5e193862951079b12aa0d\"" Mar 14 01:24:27.254448 containerd[1502]: time="2026-03-14T01:24:27.251730021Z" level=info msg="StartContainer for \"3afc8b1c4eaaf89f4a6a2edfdfa6254838d1649b60f5e193862951079b12aa0d\"" Mar 14 01:24:27.297890 systemd[1]: Started cri-containerd-3afc8b1c4eaaf89f4a6a2edfdfa6254838d1649b60f5e193862951079b12aa0d.scope - libcontainer container 3afc8b1c4eaaf89f4a6a2edfdfa6254838d1649b60f5e193862951079b12aa0d. Mar 14 01:24:27.341335 containerd[1502]: time="2026-03-14T01:24:27.340462472Z" level=info msg="StartContainer for \"3afc8b1c4eaaf89f4a6a2edfdfa6254838d1649b60f5e193862951079b12aa0d\" returns successfully" Mar 14 01:24:27.365432 systemd[1]: cri-containerd-3afc8b1c4eaaf89f4a6a2edfdfa6254838d1649b60f5e193862951079b12aa0d.scope: Deactivated successfully. 
Mar 14 01:24:27.421742 containerd[1502]: time="2026-03-14T01:24:27.421422290Z" level=info msg="shim disconnected" id=3afc8b1c4eaaf89f4a6a2edfdfa6254838d1649b60f5e193862951079b12aa0d namespace=k8s.io
Mar 14 01:24:27.421742 containerd[1502]: time="2026-03-14T01:24:27.421577216Z" level=warning msg="cleaning up after shim disconnected" id=3afc8b1c4eaaf89f4a6a2edfdfa6254838d1649b60f5e193862951079b12aa0d namespace=k8s.io
Mar 14 01:24:27.421742 containerd[1502]: time="2026-03-14T01:24:27.421598066Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 01:24:27.470122 sshd[4469]: Accepted publickey for core from 20.161.92.111 port 58180 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 01:24:27.472324 sshd[4469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 01:24:27.480629 systemd-logind[1484]: New session 28 of user core.
Mar 14 01:24:27.485774 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 14 01:24:27.863904 sshd[4469]: pam_unix(sshd:session): session closed for user core
Mar 14 01:24:27.869918 systemd-logind[1484]: Session 28 logged out. Waiting for processes to exit.
Mar 14 01:24:27.870763 systemd[1]: sshd@29-10.230.8.14:22-20.161.92.111:58180.service: Deactivated successfully.
Mar 14 01:24:27.874137 systemd[1]: session-28.scope: Deactivated successfully.
Mar 14 01:24:27.875612 systemd-logind[1484]: Removed session 28.
Mar 14 01:24:27.970930 systemd[1]: Started sshd@30-10.230.8.14:22-20.161.92.111:58188.service - OpenSSH per-connection server daemon (20.161.92.111:58188).
Mar 14 01:24:28.263312 containerd[1502]: time="2026-03-14T01:24:28.262710333Z" level=info msg="CreateContainer within sandbox \"9b9734a297b6ba5d102ebd26c66e8920af829d3fb892c031d64b875238ccba34\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 14 01:24:28.285590 containerd[1502]: time="2026-03-14T01:24:28.285450922Z" level=info msg="CreateContainer within sandbox \"9b9734a297b6ba5d102ebd26c66e8920af829d3fb892c031d64b875238ccba34\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6752adde253975417c75cdd863a4dbf2c8e91a9a47e44a79027ce72131b13994\""
Mar 14 01:24:28.287596 containerd[1502]: time="2026-03-14T01:24:28.286737382Z" level=info msg="StartContainer for \"6752adde253975417c75cdd863a4dbf2c8e91a9a47e44a79027ce72131b13994\""
Mar 14 01:24:28.337815 systemd[1]: Started cri-containerd-6752adde253975417c75cdd863a4dbf2c8e91a9a47e44a79027ce72131b13994.scope - libcontainer container 6752adde253975417c75cdd863a4dbf2c8e91a9a47e44a79027ce72131b13994.
Mar 14 01:24:28.376262 containerd[1502]: time="2026-03-14T01:24:28.376124592Z" level=info msg="StartContainer for \"6752adde253975417c75cdd863a4dbf2c8e91a9a47e44a79027ce72131b13994\" returns successfully"
Mar 14 01:24:28.390036 systemd[1]: cri-containerd-6752adde253975417c75cdd863a4dbf2c8e91a9a47e44a79027ce72131b13994.scope: Deactivated successfully.
Mar 14 01:24:28.420169 containerd[1502]: time="2026-03-14T01:24:28.419959527Z" level=info msg="shim disconnected" id=6752adde253975417c75cdd863a4dbf2c8e91a9a47e44a79027ce72131b13994 namespace=k8s.io
Mar 14 01:24:28.420169 containerd[1502]: time="2026-03-14T01:24:28.420090289Z" level=warning msg="cleaning up after shim disconnected" id=6752adde253975417c75cdd863a4dbf2c8e91a9a47e44a79027ce72131b13994 namespace=k8s.io
Mar 14 01:24:28.420169 containerd[1502]: time="2026-03-14T01:24:28.420106418Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 01:24:28.535494 sshd[4585]: Accepted publickey for core from 20.161.92.111 port 58188 ssh2: RSA SHA256:G3DxPtudQCSC+zb3xt9jRLB1yvq/SeDG59+4Mc6l5RQ
Mar 14 01:24:28.536106 sshd[4585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 01:24:28.543305 systemd-logind[1484]: New session 29 of user core.
Mar 14 01:24:28.549895 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 14 01:24:28.882334 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6752adde253975417c75cdd863a4dbf2c8e91a9a47e44a79027ce72131b13994-rootfs.mount: Deactivated successfully.
Mar 14 01:24:29.264879 containerd[1502]: time="2026-03-14T01:24:29.264684319Z" level=info msg="CreateContainer within sandbox \"9b9734a297b6ba5d102ebd26c66e8920af829d3fb892c031d64b875238ccba34\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 14 01:24:29.300392 containerd[1502]: time="2026-03-14T01:24:29.299815060Z" level=info msg="CreateContainer within sandbox \"9b9734a297b6ba5d102ebd26c66e8920af829d3fb892c031d64b875238ccba34\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"09882bba5ae7114097a6f9732e4a4db0d074a40b1fc4e438438977201c85055d\""
Mar 14 01:24:29.303024 containerd[1502]: time="2026-03-14T01:24:29.302632221Z" level=info msg="StartContainer for \"09882bba5ae7114097a6f9732e4a4db0d074a40b1fc4e438438977201c85055d\""
Mar 14 01:24:29.351788 systemd[1]: Started cri-containerd-09882bba5ae7114097a6f9732e4a4db0d074a40b1fc4e438438977201c85055d.scope - libcontainer container 09882bba5ae7114097a6f9732e4a4db0d074a40b1fc4e438438977201c85055d.
Mar 14 01:24:29.395023 containerd[1502]: time="2026-03-14T01:24:29.394972243Z" level=info msg="StartContainer for \"09882bba5ae7114097a6f9732e4a4db0d074a40b1fc4e438438977201c85055d\" returns successfully"
Mar 14 01:24:29.404171 systemd[1]: cri-containerd-09882bba5ae7114097a6f9732e4a4db0d074a40b1fc4e438438977201c85055d.scope: Deactivated successfully.
Mar 14 01:24:29.444589 containerd[1502]: time="2026-03-14T01:24:29.444442863Z" level=info msg="shim disconnected" id=09882bba5ae7114097a6f9732e4a4db0d074a40b1fc4e438438977201c85055d namespace=k8s.io
Mar 14 01:24:29.444589 containerd[1502]: time="2026-03-14T01:24:29.444526536Z" level=warning msg="cleaning up after shim disconnected" id=09882bba5ae7114097a6f9732e4a4db0d074a40b1fc4e438438977201c85055d namespace=k8s.io
Mar 14 01:24:29.444589 containerd[1502]: time="2026-03-14T01:24:29.444543458Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 01:24:29.882146 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09882bba5ae7114097a6f9732e4a4db0d074a40b1fc4e438438977201c85055d-rootfs.mount: Deactivated successfully.
Mar 14 01:24:30.280527 containerd[1502]: time="2026-03-14T01:24:30.279885395Z" level=info msg="CreateContainer within sandbox \"9b9734a297b6ba5d102ebd26c66e8920af829d3fb892c031d64b875238ccba34\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 14 01:24:30.319482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3974938698.mount: Deactivated successfully.
Mar 14 01:24:30.323828 containerd[1502]: time="2026-03-14T01:24:30.323768964Z" level=info msg="CreateContainer within sandbox \"9b9734a297b6ba5d102ebd26c66e8920af829d3fb892c031d64b875238ccba34\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"344617e11ba8fe27283cbac9532f5ba64631d00139e62577ec71ce0616ac92e1\""
Mar 14 01:24:30.325926 containerd[1502]: time="2026-03-14T01:24:30.325881563Z" level=info msg="StartContainer for \"344617e11ba8fe27283cbac9532f5ba64631d00139e62577ec71ce0616ac92e1\""
Mar 14 01:24:30.370767 systemd[1]: Started cri-containerd-344617e11ba8fe27283cbac9532f5ba64631d00139e62577ec71ce0616ac92e1.scope - libcontainer container 344617e11ba8fe27283cbac9532f5ba64631d00139e62577ec71ce0616ac92e1.
Mar 14 01:24:30.413747 containerd[1502]: time="2026-03-14T01:24:30.413589489Z" level=info msg="StartContainer for \"344617e11ba8fe27283cbac9532f5ba64631d00139e62577ec71ce0616ac92e1\" returns successfully"
Mar 14 01:24:30.417498 systemd[1]: cri-containerd-344617e11ba8fe27283cbac9532f5ba64631d00139e62577ec71ce0616ac92e1.scope: Deactivated successfully.
Mar 14 01:24:30.451624 containerd[1502]: time="2026-03-14T01:24:30.451163612Z" level=info msg="shim disconnected" id=344617e11ba8fe27283cbac9532f5ba64631d00139e62577ec71ce0616ac92e1 namespace=k8s.io
Mar 14 01:24:30.451624 containerd[1502]: time="2026-03-14T01:24:30.451235703Z" level=warning msg="cleaning up after shim disconnected" id=344617e11ba8fe27283cbac9532f5ba64631d00139e62577ec71ce0616ac92e1 namespace=k8s.io
Mar 14 01:24:30.451624 containerd[1502]: time="2026-03-14T01:24:30.451253247Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 01:24:30.882525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-344617e11ba8fe27283cbac9532f5ba64631d00139e62577ec71ce0616ac92e1-rootfs.mount: Deactivated successfully.
Mar 14 01:24:30.934143 kubelet[2694]: E0314 01:24:30.934078 2694 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 14 01:24:31.276911 containerd[1502]: time="2026-03-14T01:24:31.276643615Z" level=info msg="CreateContainer within sandbox \"9b9734a297b6ba5d102ebd26c66e8920af829d3fb892c031d64b875238ccba34\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 14 01:24:31.299496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1196168541.mount: Deactivated successfully.
Mar 14 01:24:31.307929 containerd[1502]: time="2026-03-14T01:24:31.307275133Z" level=info msg="CreateContainer within sandbox \"9b9734a297b6ba5d102ebd26c66e8920af829d3fb892c031d64b875238ccba34\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e6dda787a0dc6927acefd85b9a1a31cba7108794362af0c1426bd9fc9235ff7c\""
Mar 14 01:24:31.310774 containerd[1502]: time="2026-03-14T01:24:31.309994690Z" level=info msg="StartContainer for \"e6dda787a0dc6927acefd85b9a1a31cba7108794362af0c1426bd9fc9235ff7c\""
Mar 14 01:24:31.355759 systemd[1]: Started cri-containerd-e6dda787a0dc6927acefd85b9a1a31cba7108794362af0c1426bd9fc9235ff7c.scope - libcontainer container e6dda787a0dc6927acefd85b9a1a31cba7108794362af0c1426bd9fc9235ff7c.
Mar 14 01:24:31.397593 containerd[1502]: time="2026-03-14T01:24:31.397490095Z" level=info msg="StartContainer for \"e6dda787a0dc6927acefd85b9a1a31cba7108794362af0c1426bd9fc9235ff7c\" returns successfully"
Mar 14 01:24:31.833860 systemd[1]: Started sshd@31-10.230.8.14:22-77.87.40.114:59924.service - OpenSSH per-connection server daemon (77.87.40.114:59924).
Mar 14 01:24:32.168315 sshd[4839]: Invalid user prueba from 77.87.40.114 port 59924
Mar 14 01:24:32.227635 sshd[4839]: Received disconnect from 77.87.40.114 port 59924:11: Bye Bye [preauth]
Mar 14 01:24:32.227635 sshd[4839]: Disconnected from invalid user prueba 77.87.40.114 port 59924 [preauth]
Mar 14 01:24:32.230384 systemd[1]: sshd@31-10.230.8.14:22-77.87.40.114:59924.service: Deactivated successfully.
Mar 14 01:24:32.285480 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 14 01:24:32.343110 kubelet[2694]: I0314 01:24:32.341757 2694 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-r26h5" podStartSLOduration=6.34166574 podStartE2EDuration="6.34166574s" podCreationTimestamp="2026-03-14 01:24:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 01:24:32.339878566 +0000 UTC m=+131.834646966" watchObservedRunningTime="2026-03-14 01:24:32.34166574 +0000 UTC m=+131.836434128"
Mar 14 01:24:33.419283 kubelet[2694]: I0314 01:24:33.419208 2694 setters.go:618] "Node became not ready" node="srv-ouubu.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T01:24:33Z","lastTransitionTime":"2026-03-14T01:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 14 01:24:35.645811 kubelet[2694]: E0314 01:24:35.645536 2694 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:45702->127.0.0.1:36293: write tcp 127.0.0.1:45702->127.0.0.1:36293: write: connection reset by peer
Mar 14 01:24:36.164084 systemd-networkd[1419]: lxc_health: Link UP
Mar 14 01:24:36.177445 systemd-networkd[1419]: lxc_health: Gained carrier
Mar 14 01:24:37.903911 systemd-networkd[1419]: lxc_health: Gained IPv6LL
Mar 14 01:24:40.068927 systemd[1]: run-containerd-runc-k8s.io-e6dda787a0dc6927acefd85b9a1a31cba7108794362af0c1426bd9fc9235ff7c-runc.9h4bgB.mount: Deactivated successfully.
Mar 14 01:24:42.451154 sshd[4585]: pam_unix(sshd:session): session closed for user core
Mar 14 01:24:42.459236 systemd-logind[1484]: Session 29 logged out. Waiting for processes to exit.
Mar 14 01:24:42.460997 systemd[1]: sshd@30-10.230.8.14:22-20.161.92.111:58188.service: Deactivated successfully.
Mar 14 01:24:42.468458 systemd[1]: session-29.scope: Deactivated successfully.
Mar 14 01:24:42.471999 systemd-logind[1484]: Removed session 29.