Mar 4 02:12:19.067624 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Mar 3 22:42:33 -00 2026
Mar 4 02:12:19.067671 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=cfbb17c272ffeca64391861cc763ec4868ca597850b31cbd6ed67c590a72edc7
Mar 4 02:12:19.067687 kernel: BIOS-provided physical RAM map:
Mar 4 02:12:19.067704 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 4 02:12:19.067714 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 4 02:12:19.067724 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 4 02:12:19.067736 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Mar 4 02:12:19.067747 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Mar 4 02:12:19.067757 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 4 02:12:19.067768 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 4 02:12:19.067778 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 4 02:12:19.067789 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 4 02:12:19.067815 kernel: NX (Execute Disable) protection: active
Mar 4 02:12:19.067827 kernel: APIC: Static calls initialized
Mar 4 02:12:19.067840 kernel: SMBIOS 2.8 present.
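The e820 map above can be summed mechanically to get the firmware-reported usable RAM. A minimal Python sketch, assuming only the line format shown in the log (the helper name `usable_bytes` is illustrative, not from any tool):

```python
import re

# Parse "BIOS-e820: [mem 0x...-0x...] usable/reserved" lines and sum the
# usable ranges. Sample lines are copied from the e820 map printed above.
E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def usable_bytes(log_lines):
    """Total bytes the firmware reports as 'usable' (ranges are inclusive)."""
    total = 0
    for line in log_lines:
        m = E820_RE.search(line)
        if m and m.group(3) == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            total += end - start + 1
    return total

lines = [
    "BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable",
    "BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved",
    "BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable",
]
print(usable_bytes(lines) // 1024)  # usable RAM in KiB
```

For the two usable ranges above this comes to roughly 2 GiB, in the same ballpark as the `Memory: .../2096616K available` line later in the log (the kernel subtracts a few early reservations before printing its total).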
Mar 4 02:12:19.067857 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Mar 4 02:12:19.067870 kernel: Hypervisor detected: KVM
Mar 4 02:12:19.067887 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 4 02:12:19.067902 kernel: kvm-clock: using sched offset of 5038878548 cycles
Mar 4 02:12:19.067914 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 4 02:12:19.067926 kernel: tsc: Detected 2499.998 MHz processor
Mar 4 02:12:19.067938 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 4 02:12:19.067950 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 4 02:12:19.067961 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Mar 4 02:12:19.067973 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 4 02:12:19.067986 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 4 02:12:19.068002 kernel: Using GB pages for direct mapping
Mar 4 02:12:19.068014 kernel: ACPI: Early table checksum verification disabled
Mar 4 02:12:19.068026 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Mar 4 02:12:19.068037 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 02:12:19.068051 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 02:12:19.068062 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 02:12:19.068074 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Mar 4 02:12:19.068085 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 02:12:19.068096 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 02:12:19.068120 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 02:12:19.068131 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 02:12:19.068143 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Mar 4 02:12:19.068154 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Mar 4 02:12:19.068166 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Mar 4 02:12:19.068208 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Mar 4 02:12:19.068219 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Mar 4 02:12:19.068236 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Mar 4 02:12:19.068260 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Mar 4 02:12:19.068277 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 4 02:12:19.068289 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 4 02:12:19.068313 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Mar 4 02:12:19.068325 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Mar 4 02:12:19.068336 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Mar 4 02:12:19.068359 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Mar 4 02:12:19.068384 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Mar 4 02:12:19.068396 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Mar 4 02:12:19.068407 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Mar 4 02:12:19.068419 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Mar 4 02:12:19.068431 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Mar 4 02:12:19.068443 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Mar 4 02:12:19.068455 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Mar 4 02:12:19.068466 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Mar 4 02:12:19.068484 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Mar 4 02:12:19.068502 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Mar 4 02:12:19.068515 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 4 02:12:19.068537 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Mar 4 02:12:19.068552 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Mar 4 02:12:19.068564 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Mar 4 02:12:19.070692 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Mar 4 02:12:19.070709 kernel: Zone ranges:
Mar 4 02:12:19.070722 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 4 02:12:19.070735 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Mar 4 02:12:19.070756 kernel: Normal empty
Mar 4 02:12:19.070768 kernel: Movable zone start for each node
Mar 4 02:12:19.070781 kernel: Early memory node ranges
Mar 4 02:12:19.070793 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 4 02:12:19.070804 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Mar 4 02:12:19.070817 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Mar 4 02:12:19.070829 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 4 02:12:19.070841 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 4 02:12:19.070862 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Mar 4 02:12:19.070876 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 4 02:12:19.070894 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 4 02:12:19.070907 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 4 02:12:19.070919 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 4 02:12:19.070931 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 4 02:12:19.070943 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 4 02:12:19.070955 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 4 02:12:19.070967 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 4 02:12:19.070979 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 4 02:12:19.070992 kernel: TSC deadline timer available
Mar 4 02:12:19.071010 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Mar 4 02:12:19.071022 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 4 02:12:19.071034 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 4 02:12:19.071047 kernel: Booting paravirtualized kernel on KVM
Mar 4 02:12:19.071059 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 4 02:12:19.071071 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Mar 4 02:12:19.071083 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u262144
Mar 4 02:12:19.071095 kernel: pcpu-alloc: s196328 r8192 d28952 u262144 alloc=1*2097152
Mar 4 02:12:19.071107 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Mar 4 02:12:19.071125 kernel: kvm-guest: PV spinlocks enabled
Mar 4 02:12:19.071137 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 4 02:12:19.071151 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=cfbb17c272ffeca64391861cc763ec4868ca597850b31cbd6ed67c590a72edc7
Mar 4 02:12:19.071163 kernel: random: crng init done
Mar 4 02:12:19.071175 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 4 02:12:19.071187 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 4 02:12:19.071207 kernel: Fallback order for Node 0: 0
Mar 4 02:12:19.071219 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Mar 4 02:12:19.071236 kernel: Policy zone: DMA32
Mar 4 02:12:19.071255 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 4 02:12:19.071268 kernel: software IO TLB: area num 16.
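The `Kernel command line:` entry above is a flat space-separated string of `key=value` tokens and bare flags (note `rootflags=rw` and `mount.usrflags=ro` appear twice). A minimal sketch of turning such a line into a dict; the function name is illustrative, and the last-occurrence-wins rule is a simplification (the kernel hands all occurrences to the relevant handler, and e.g. both `console=` entries stay active):

```python
def parse_cmdline(cmdline):
    """Split a kernel command line into a dict; bare words become True.
    Repeated keys keep the last value here, a simplification of how the
    kernel treats duplicates (some parameters accumulate instead)."""
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else True
    return params

# Tokens taken from the command line printed above.
cl = ("rootflags=rw mount.usrflags=ro root=LABEL=ROOT "
      "console=ttyS0,115200n8 console=tty0 flatcar.autologin")
p = parse_cmdline(cl)
print(p["root"], p["flatcar.autologin"])
```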
Mar 4 02:12:19.071281 kernel: Memory: 1901596K/2096616K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 194760K reserved, 0K cma-reserved)
Mar 4 02:12:19.071293 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Mar 4 02:12:19.071305 kernel: Kernel/User page tables isolation: enabled
Mar 4 02:12:19.071318 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 4 02:12:19.071330 kernel: ftrace: allocated 149 pages with 4 groups
Mar 4 02:12:19.071342 kernel: Dynamic Preempt: voluntary
Mar 4 02:12:19.071360 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 4 02:12:19.071373 kernel: rcu: RCU event tracing is enabled.
Mar 4 02:12:19.071385 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Mar 4 02:12:19.071397 kernel: Trampoline variant of Tasks RCU enabled.
Mar 4 02:12:19.071410 kernel: Rude variant of Tasks RCU enabled.
Mar 4 02:12:19.071435 kernel: Tracing variant of Tasks RCU enabled.
Mar 4 02:12:19.071452 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 4 02:12:19.071465 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Mar 4 02:12:19.071478 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Mar 4 02:12:19.071490 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 4 02:12:19.071503 kernel: Console: colour VGA+ 80x25
Mar 4 02:12:19.071521 kernel: printk: console [tty0] enabled
Mar 4 02:12:19.071546 kernel: printk: console [ttyS0] enabled
Mar 4 02:12:19.071559 kernel: ACPI: Core revision 20230628
Mar 4 02:12:19.073082 kernel: APIC: Switch to symmetric I/O mode setup
Mar 4 02:12:19.073103 kernel: x2apic enabled
Mar 4 02:12:19.073125 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 4 02:12:19.073139 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Mar 4 02:12:19.073160 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Mar 4 02:12:19.073174 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 4 02:12:19.073187 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 4 02:12:19.073200 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 4 02:12:19.073213 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 4 02:12:19.073226 kernel: Spectre V2 : Mitigation: Retpolines
Mar 4 02:12:19.073238 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 4 02:12:19.073251 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Mar 4 02:12:19.073277 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 4 02:12:19.073291 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 4 02:12:19.073304 kernel: MDS: Mitigation: Clear CPU buffers
Mar 4 02:12:19.073316 kernel: MMIO Stale Data: Unknown: No mitigations
Mar 4 02:12:19.073329 kernel: SRBDS: Unknown: Dependent on hypervisor status
Mar 4 02:12:19.073341 kernel: active return thunk: its_return_thunk
Mar 4 02:12:19.073354 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 4 02:12:19.073366 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 4 02:12:19.073379 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 4 02:12:19.073392 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 4 02:12:19.073404 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 4 02:12:19.073423 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
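The run of speculative-execution lines above (`Spectre V1 : …`, `MDS : …`, etc.) follows a uniform `Vulnerability : Status` shape, the same information a running system exposes under `/sys/devices/system/cpu/vulnerabilities/`. A minimal sketch that reduces such dmesg lines to a status dict (`parse_mitigations` is an illustrative helper, not a real tool):

```python
def parse_mitigations(lines):
    """Map 'Vulnerability : Status' dmesg lines to {vulnerability: status}.
    Only the first line per vulnerability is kept; follow-up lines
    (e.g. the extra 'Spectre V2 :' lines) add detail rather than status."""
    status = {}
    for line in lines:
        name, sep, rest = line.partition(" : ")
        if sep and name not in status:
            status[name] = rest
    return status

# Sample lines copied from the log above.
lines = [
    "Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization",
    "Spectre V2 : Mitigation: Retpolines",
    "Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT",
]
m = parse_mitigations(lines)
print(m["Spectre V2"])  # → Mitigation: Retpolines
```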
Mar 4 02:12:19.073442 kernel: Freeing SMP alternatives memory: 32K
Mar 4 02:12:19.073456 kernel: pid_max: default: 32768 minimum: 301
Mar 4 02:12:19.073468 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 4 02:12:19.073481 kernel: landlock: Up and running.
Mar 4 02:12:19.073493 kernel: SELinux: Initializing.
Mar 4 02:12:19.073506 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 4 02:12:19.073519 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 4 02:12:19.073542 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Mar 4 02:12:19.073556 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 4 02:12:19.073569 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 4 02:12:19.073701 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 4 02:12:19.073715 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Mar 4 02:12:19.073728 kernel: signal: max sigframe size: 1776
Mar 4 02:12:19.073741 kernel: rcu: Hierarchical SRCU implementation.
Mar 4 02:12:19.073754 kernel: rcu: Max phase no-delay instances is 400.
Mar 4 02:12:19.073767 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 4 02:12:19.073780 kernel: smp: Bringing up secondary CPUs ...
Mar 4 02:12:19.073792 kernel: smpboot: x86: Booting SMP configuration:
Mar 4 02:12:19.073805 kernel: .... node #0, CPUs: #1
Mar 4 02:12:19.073832 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Mar 4 02:12:19.073845 kernel: smp: Brought up 1 node, 2 CPUs
Mar 4 02:12:19.073858 kernel: smpboot: Max logical packages: 16
Mar 4 02:12:19.073871 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Mar 4 02:12:19.073883 kernel: devtmpfs: initialized
Mar 4 02:12:19.073896 kernel: x86/mm: Memory block size: 128MB
Mar 4 02:12:19.073909 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 4 02:12:19.073922 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Mar 4 02:12:19.073934 kernel: pinctrl core: initialized pinctrl subsystem
Mar 4 02:12:19.073955 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 4 02:12:19.073968 kernel: audit: initializing netlink subsys (disabled)
Mar 4 02:12:19.073996 kernel: audit: type=2000 audit(1772590337.798:1): state=initialized audit_enabled=0 res=1
Mar 4 02:12:19.074020 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 4 02:12:19.074144 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 4 02:12:19.074237 kernel: cpuidle: using governor menu
Mar 4 02:12:19.074412 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 4 02:12:19.074426 kernel: dca service started, version 1.12.1
Mar 4 02:12:19.074539 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 4 02:12:19.074682 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 4 02:12:19.074826 kernel: PCI: Using configuration type 1 for base access
Mar 4 02:12:19.074903 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
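The calibration line earlier reported `4999.99 BogoMIPS (lpj=2499998)` per CPU, and the SMP bring-up above totals `9999.99 BogoMIPS` for 2 CPUs. BogoMIPS is just `lpj` (timer loops per jiffy) scaled by the tick rate; the arithmetic can be checked directly. `HZ = 1000` is an assumption about this kernel's tick configuration, not something stated in the log:

```python
# BogoMIPS = lpj * HZ / 500000. The kernel truncates rather than rounds
# when printing, which is why the log shows 4999.99 instead of 5000.00.
HZ = 1000              # assumed tick rate; CONFIG_HZ is not in the log
lpj = 2499998          # loops per jiffy, from the calibration line

bogomips = lpj * HZ / 500000
per_system = 2 * bogomips   # two CPUs were brought online
print(bogomips, per_system)
```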
Mar 4 02:12:19.074972 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 4 02:12:19.074987 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 4 02:12:19.075011 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 4 02:12:19.075041 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 4 02:12:19.075082 kernel: ACPI: Added _OSI(Module Device)
Mar 4 02:12:19.075134 kernel: ACPI: Added _OSI(Processor Device)
Mar 4 02:12:19.075225 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 4 02:12:19.075293 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 4 02:12:19.075348 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 4 02:12:19.075466 kernel: ACPI: Interpreter enabled
Mar 4 02:12:19.077652 kernel: ACPI: PM: (supports S0 S5)
Mar 4 02:12:19.077740 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 4 02:12:19.077757 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 4 02:12:19.077798 kernel: PCI: Using E820 reservations for host bridge windows
Mar 4 02:12:19.077838 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 4 02:12:19.077938 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 4 02:12:19.079354 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 4 02:12:19.082227 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 4 02:12:19.085054 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 4 02:12:19.085218 kernel: PCI host bridge to bus 0000:00
Mar 4 02:12:19.088331 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 4 02:12:19.091772 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 4 02:12:19.092923 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 4 02:12:19.093554 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Mar 4 02:12:19.096517 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 4 02:12:19.099688 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Mar 4 02:12:19.100896 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 4 02:12:19.104832 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 4 02:12:19.105322 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Mar 4 02:12:19.109071 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Mar 4 02:12:19.112879 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Mar 4 02:12:19.116354 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Mar 4 02:12:19.116982 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 4 02:12:19.120649 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 4 02:12:19.124416 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Mar 4 02:12:19.124880 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 4 02:12:19.128272 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Mar 4 02:12:19.129156 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 4 02:12:19.134211 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Mar 4 02:12:19.135793 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 4 02:12:19.136037 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Mar 4 02:12:19.136286 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 4 02:12:19.136507 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Mar 4 02:12:19.140397 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 4 02:12:19.140635 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Mar 4 02:12:19.140850 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 4 02:12:19.141041 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Mar 4 02:12:19.141283 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 4 02:12:19.141473 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Mar 4 02:12:19.141727 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 4 02:12:19.141917 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 4 02:12:19.142113 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Mar 4 02:12:19.142300 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Mar 4 02:12:19.142497 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Mar 4 02:12:19.144771 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Mar 4 02:12:19.144967 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Mar 4 02:12:19.145153 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Mar 4 02:12:19.145337 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Mar 4 02:12:19.145557 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 4 02:12:19.147803 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 4 02:12:19.148007 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 4 02:12:19.148213 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Mar 4 02:12:19.148444 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Mar 4 02:12:19.148918 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 4 02:12:19.149120 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 4 02:12:19.149362 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Mar 4 02:12:19.149625 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Mar 4 02:12:19.149827 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 4 02:12:19.150012 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 4 02:12:19.150197 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 4 02:12:19.150397 kernel: pci_bus 0000:02: extended config space not accessible
Mar 4 02:12:19.152703 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Mar 4 02:12:19.152910 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Mar 4 02:12:19.153146 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 4 02:12:19.153349 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 4 02:12:19.153588 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 4 02:12:19.153787 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Mar 4 02:12:19.153986 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 4 02:12:19.154192 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 4 02:12:19.154376 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 4 02:12:19.156646 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 4 02:12:19.156847 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Mar 4 02:12:19.157039 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 4 02:12:19.157226 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 4 02:12:19.157413 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 4 02:12:19.159658 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 4 02:12:19.159848 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 4 02:12:19.160032 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 4 02:12:19.160231 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 4 02:12:19.160417 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 4 02:12:19.160638 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 4 02:12:19.160826 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 4 02:12:19.161011 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 4 02:12:19.161194 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 4 02:12:19.161381 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 4 02:12:19.161591 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 4 02:12:19.161788 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 4 02:12:19.161976 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 4 02:12:19.162160 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 4 02:12:19.162368 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 4 02:12:19.162388 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 4 02:12:19.162402 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 4 02:12:19.162415 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 4 02:12:19.162428 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 4 02:12:19.162449 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 4 02:12:19.162463 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 4 02:12:19.162476 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 4 02:12:19.162489 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 4 02:12:19.162502 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 4 02:12:19.162515 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 4 02:12:19.162538 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 4 02:12:19.162552 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 4 02:12:19.162566 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 4 02:12:19.164616 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 4 02:12:19.164631 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 4 02:12:19.164644 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 4 02:12:19.164657 kernel: iommu: Default domain type: Translated
Mar 4 02:12:19.164671 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 4 02:12:19.164684 kernel: PCI: Using ACPI for IRQ routing
Mar 4 02:12:19.164697 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 4 02:12:19.164710 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 4 02:12:19.164723 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Mar 4 02:12:19.164933 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 4 02:12:19.165136 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 4 02:12:19.165332 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 4 02:12:19.165353 kernel: vgaarb: loaded
Mar 4 02:12:19.165366 kernel: clocksource: Switched to clocksource kvm-clock
Mar 4 02:12:19.165379 kernel: VFS: Disk quotas dquot_6.6.0
Mar 4 02:12:19.165393 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 4 02:12:19.165406 kernel: pnp: PnP ACPI init
Mar 4 02:12:19.165671 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 4 02:12:19.165701 kernel: pnp: PnP ACPI: found 5 devices
Mar 4 02:12:19.165715 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 4 02:12:19.165728 kernel: NET: Registered PF_INET protocol family
Mar 4 02:12:19.165741 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 4 02:12:19.165755 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 4 02:12:19.165768 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 4 02:12:19.165781 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 4 02:12:19.165794 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 4 02:12:19.165813 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 4 02:12:19.165826 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 4 02:12:19.165839 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 4 02:12:19.165852 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 4 02:12:19.165865 kernel: NET: Registered PF_XDP protocol family
Mar 4 02:12:19.166060 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Mar 4 02:12:19.166289 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Mar 4 02:12:19.166477 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Mar 4 02:12:19.168739 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Mar 4 02:12:19.168928 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Mar 4 02:12:19.169112 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 4 02:12:19.169306 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 4 02:12:19.169511 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 4 02:12:19.169742 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Mar 4 02:12:19.169930 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Mar 4 02:12:19.170114 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Mar 4 02:12:19.170297 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Mar 4 02:12:19.170482 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Mar 4 02:12:19.172738 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Mar 4 02:12:19.172926 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Mar 4 02:12:19.173110 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Mar 4 02:12:19.173311 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 4 02:12:19.173588 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 4 02:12:19.173780 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 4 02:12:19.173963 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Mar 4 02:12:19.174148 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 4 02:12:19.174340 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 4 02:12:19.174523 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 4 02:12:19.176771 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Mar 4 02:12:19.176959 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 4 02:12:19.177153 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 4 02:12:19.177338 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 4 02:12:19.177651 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Mar 4 02:12:19.177950 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 4 02:12:19.178179 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 4 02:12:19.178377 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 4 02:12:19.180658 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Mar 4 02:12:19.180925 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 4 02:12:19.181115 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 4 02:12:19.181302 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 4 02:12:19.181486 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Mar 4 02:12:19.181729 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 4 02:12:19.181917 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 4 02:12:19.182101 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 4 02:12:19.182286 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Mar 4 02:12:19.182482 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 4 02:12:19.182698 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 4 02:12:19.182889 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 4 02:12:19.183077 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Mar 4 02:12:19.183264 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 4 02:12:19.183467 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 4 02:12:19.183696 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 4 02:12:19.183886 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Mar 4 02:12:19.184076 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 4 02:12:19.184264 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 4 02:12:19.184447 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 4 02:12:19.184658 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 4 02:12:19.184828 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 4 02:12:19.185013 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Mar 4 02:12:19.185205 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 4 02:12:19.185375 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Mar 4 02:12:19.185682 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Mar 4 02:12:19.185873 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Mar 4 02:12:19.186048 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 4 02:12:19.186250 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Mar 4 02:12:19.186451 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Mar 4 02:12:19.186722 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Mar 4 02:12:19.186903 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 4 02:12:19.187100 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Mar 4 02:12:19.187278 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Mar 4 02:12:19.187455 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 4 02:12:19.187671 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Mar 4 02:12:19.187860 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Mar 4 02:12:19.188036 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 4 02:12:19.188233 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Mar 4 02:12:19.188411 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Mar 4 02:12:19.188632 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 4 02:12:19.188844 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Mar 4 02:12:19.189023 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Mar 4 02:12:19.189209 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 4 02:12:19.189404 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Mar 4 02:12:19.189614 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Mar 4 02:12:19.189823 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 4 02:12:19.190056 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Mar 4 02:12:19.190237 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Mar 4 02:12:19.190414 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 4 02:12:19.190444 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 4 02:12:19.190458 kernel: PCI: CLS 0 bytes, default 64
Mar 4 02:12:19.190472 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 4 02:12:19.190486 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)
Mar 4 02:12:19.190500 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 4 02:12:19.190515 kernel: clocksource: tsc: mask: 0xffffffffffffffff
max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Mar 4 02:12:19.190539 kernel: Initialise system trusted keyrings Mar 4 02:12:19.190554 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Mar 4 02:12:19.190618 kernel: Key type asymmetric registered Mar 4 02:12:19.190642 kernel: Asymmetric key parser 'x509' registered Mar 4 02:12:19.190656 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 4 02:12:19.190669 kernel: io scheduler mq-deadline registered Mar 4 02:12:19.190683 kernel: io scheduler kyber registered Mar 4 02:12:19.190696 kernel: io scheduler bfq registered Mar 4 02:12:19.190885 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Mar 4 02:12:19.191073 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Mar 4 02:12:19.191260 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 4 02:12:19.191457 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Mar 4 02:12:19.191686 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Mar 4 02:12:19.191874 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 4 02:12:19.192071 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Mar 4 02:12:19.192269 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Mar 4 02:12:19.192454 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 4 02:12:19.192688 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Mar 4 02:12:19.192874 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Mar 4 02:12:19.193059 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 4 02:12:19.193267 kernel: pcieport 0000:00:02.4: PME: Signaling 
with IRQ 28 Mar 4 02:12:19.193451 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Mar 4 02:12:19.193674 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 4 02:12:19.193870 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Mar 4 02:12:19.194056 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Mar 4 02:12:19.194265 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 4 02:12:19.194461 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Mar 4 02:12:19.194684 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Mar 4 02:12:19.194873 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 4 02:12:19.195070 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Mar 4 02:12:19.195257 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Mar 4 02:12:19.195443 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 4 02:12:19.195465 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 4 02:12:19.195480 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 4 02:12:19.195494 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 4 02:12:19.195516 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 4 02:12:19.195541 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 4 02:12:19.195556 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 4 02:12:19.195569 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 4 02:12:19.195644 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 4 02:12:19.195922 kernel: rtc_cmos 00:03: RTC can wake from S4 Mar 4 02:12:19.195946 kernel: input: AT 
Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 4 02:12:19.196120 kernel: rtc_cmos 00:03: registered as rtc0 Mar 4 02:12:19.196308 kernel: rtc_cmos 00:03: setting system clock to 2026-03-04T02:12:18 UTC (1772590338) Mar 4 02:12:19.196484 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Mar 4 02:12:19.196513 kernel: intel_pstate: CPU model not supported Mar 4 02:12:19.196538 kernel: NET: Registered PF_INET6 protocol family Mar 4 02:12:19.196553 kernel: Segment Routing with IPv6 Mar 4 02:12:19.196587 kernel: In-situ OAM (IOAM) with IPv6 Mar 4 02:12:19.196602 kernel: NET: Registered PF_PACKET protocol family Mar 4 02:12:19.196616 kernel: Key type dns_resolver registered Mar 4 02:12:19.196629 kernel: IPI shorthand broadcast: enabled Mar 4 02:12:19.196651 kernel: sched_clock: Marking stable (1712004554, 244413452)->(2103039846, -146621840) Mar 4 02:12:19.196666 kernel: registered taskstats version 1 Mar 4 02:12:19.196679 kernel: Loading compiled-in X.509 certificates Mar 4 02:12:19.196693 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: be1dcbe3e3dee66976c19d61f4b179b405e1c498' Mar 4 02:12:19.196707 kernel: Key type .fscrypt registered Mar 4 02:12:19.196720 kernel: Key type fscrypt-provisioning registered Mar 4 02:12:19.196734 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 4 02:12:19.196748 kernel: ima: Allocated hash algorithm: sha1 Mar 4 02:12:19.196761 kernel: ima: No architecture policies found Mar 4 02:12:19.196780 kernel: clk: Disabling unused clocks Mar 4 02:12:19.196798 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 4 02:12:19.196812 kernel: Write protecting the kernel read-only data: 36864k Mar 4 02:12:19.196826 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 4 02:12:19.196839 kernel: Run /init as init process Mar 4 02:12:19.196853 kernel: with arguments: Mar 4 02:12:19.196866 kernel: /init Mar 4 02:12:19.196879 kernel: with environment: Mar 4 02:12:19.196892 kernel: HOME=/ Mar 4 02:12:19.196911 kernel: TERM=linux Mar 4 02:12:19.196934 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 4 02:12:19.196953 systemd[1]: Detected virtualization kvm. Mar 4 02:12:19.196968 systemd[1]: Detected architecture x86-64. Mar 4 02:12:19.196981 systemd[1]: Running in initrd. Mar 4 02:12:19.196996 systemd[1]: No hostname configured, using default hostname. Mar 4 02:12:19.197009 systemd[1]: Hostname set to . Mar 4 02:12:19.197030 systemd[1]: Initializing machine ID from VM UUID. Mar 4 02:12:19.197045 systemd[1]: Queued start job for default target initrd.target. Mar 4 02:12:19.197059 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 4 02:12:19.197073 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 4 02:12:19.197089 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Mar 4 02:12:19.197103 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 4 02:12:19.197118 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 4 02:12:19.197132 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 4 02:12:19.197154 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 4 02:12:19.197169 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 4 02:12:19.197184 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 4 02:12:19.197198 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 4 02:12:19.197213 systemd[1]: Reached target paths.target - Path Units. Mar 4 02:12:19.197227 systemd[1]: Reached target slices.target - Slice Units. Mar 4 02:12:19.197241 systemd[1]: Reached target swap.target - Swaps. Mar 4 02:12:19.197255 systemd[1]: Reached target timers.target - Timer Units. Mar 4 02:12:19.197275 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 4 02:12:19.197290 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 4 02:12:19.197304 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 4 02:12:19.197318 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 4 02:12:19.197333 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 4 02:12:19.197347 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 4 02:12:19.197361 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 4 02:12:19.197376 systemd[1]: Reached target sockets.target - Socket Units. Mar 4 02:12:19.197395 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Mar 4 02:12:19.197410 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 4 02:12:19.197424 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 4 02:12:19.197438 systemd[1]: Starting systemd-fsck-usr.service... Mar 4 02:12:19.197452 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 4 02:12:19.197467 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 4 02:12:19.197481 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 4 02:12:19.197495 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 4 02:12:19.197603 systemd-journald[201]: Collecting audit messages is disabled. Mar 4 02:12:19.197649 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 4 02:12:19.197664 systemd[1]: Finished systemd-fsck-usr.service. Mar 4 02:12:19.197685 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 4 02:12:19.197700 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 4 02:12:19.197714 kernel: Bridge firewalling registered Mar 4 02:12:19.197729 systemd-journald[201]: Journal started Mar 4 02:12:19.197767 systemd-journald[201]: Runtime Journal (/run/log/journal/ed273aee13da47f9bad869cfeff48756) is 4.7M, max 38.0M, 33.2M free. Mar 4 02:12:19.146659 systemd-modules-load[202]: Inserted module 'overlay' Mar 4 02:12:19.180865 systemd-modules-load[202]: Inserted module 'br_netfilter' Mar 4 02:12:19.238617 systemd[1]: Started systemd-journald.service - Journal Service. Mar 4 02:12:19.240496 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 4 02:12:19.241566 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 4 02:12:19.251826 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 4 02:12:19.254671 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 4 02:12:19.263757 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 4 02:12:19.267567 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 4 02:12:19.279776 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 4 02:12:19.285635 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 4 02:12:19.296458 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 4 02:12:19.298889 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 4 02:12:19.301176 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 4 02:12:19.308781 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 4 02:12:19.314760 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 4 02:12:19.325617 dracut-cmdline[236]: dracut-dracut-053 Mar 4 02:12:19.330089 dracut-cmdline[236]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=cfbb17c272ffeca64391861cc763ec4868ca597850b31cbd6ed67c590a72edc7 Mar 4 02:12:19.368603 systemd-resolved[237]: Positive Trust Anchors: Mar 4 02:12:19.369871 systemd-resolved[237]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 4 02:12:19.369921 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 4 02:12:19.377920 systemd-resolved[237]: Defaulting to hostname 'linux'. Mar 4 02:12:19.380081 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 4 02:12:19.381280 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 4 02:12:19.441684 kernel: SCSI subsystem initialized Mar 4 02:12:19.454607 kernel: Loading iSCSI transport class v2.0-870. Mar 4 02:12:19.468616 kernel: iscsi: registered transport (tcp) Mar 4 02:12:19.494637 kernel: iscsi: registered transport (qla4xxx) Mar 4 02:12:19.494716 kernel: QLogic iSCSI HBA Driver Mar 4 02:12:19.557444 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 4 02:12:19.563770 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 4 02:12:19.610768 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 4 02:12:19.610903 kernel: device-mapper: uevent: version 1.0.3 Mar 4 02:12:19.612623 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 4 02:12:19.665640 kernel: raid6: sse2x4 gen() 13841 MB/s Mar 4 02:12:19.681642 kernel: raid6: sse2x2 gen() 9818 MB/s Mar 4 02:12:19.700290 kernel: raid6: sse2x1 gen() 10281 MB/s Mar 4 02:12:19.700431 kernel: raid6: using algorithm sse2x4 gen() 13841 MB/s Mar 4 02:12:19.719273 kernel: raid6: .... xor() 7708 MB/s, rmw enabled Mar 4 02:12:19.719389 kernel: raid6: using ssse3x2 recovery algorithm Mar 4 02:12:19.745619 kernel: xor: automatically using best checksumming function avx Mar 4 02:12:19.945650 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 4 02:12:19.961553 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 4 02:12:19.968795 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 4 02:12:19.999735 systemd-udevd[420]: Using default interface naming scheme 'v255'. Mar 4 02:12:20.008634 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 4 02:12:20.016770 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 4 02:12:20.043626 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation Mar 4 02:12:20.087744 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 4 02:12:20.095806 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 4 02:12:20.221467 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 4 02:12:20.229995 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 4 02:12:20.266286 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 4 02:12:20.268484 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Mar 4 02:12:20.271160 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 4 02:12:20.273524 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 4 02:12:20.282238 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 4 02:12:20.314265 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 4 02:12:20.350345 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Mar 4 02:12:20.366612 kernel: cryptd: max_cpu_qlen set to 1000 Mar 4 02:12:20.384180 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Mar 4 02:12:20.396701 kernel: AVX version of gcm_enc/dec engaged. Mar 4 02:12:20.408753 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 4 02:12:20.409047 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 4 02:12:20.412885 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 4 02:12:20.413679 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 4 02:12:20.413875 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 4 02:12:20.458771 kernel: ACPI: bus type USB registered Mar 4 02:12:20.458824 kernel: usbcore: registered new interface driver usbfs Mar 4 02:12:20.458857 kernel: usbcore: registered new interface driver hub Mar 4 02:12:20.458888 kernel: usbcore: registered new device driver usb Mar 4 02:12:20.458938 kernel: AES CTR mode by8 optimization enabled Mar 4 02:12:20.458980 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 4 02:12:20.459050 kernel: GPT:17805311 != 125829119 Mar 4 02:12:20.459093 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 4 02:12:20.459125 kernel: GPT:17805311 != 125829119 Mar 4 02:12:20.459153 kernel: GPT: Use GNU Parted to correct GPT errors. 
Mar 4 02:12:20.459191 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 4 02:12:20.414653 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 4 02:12:20.423919 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 4 02:12:20.475596 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 4 02:12:20.476078 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Mar 4 02:12:20.476377 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 4 02:12:20.487943 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 4 02:12:20.488337 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Mar 4 02:12:20.488631 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Mar 4 02:12:20.491629 kernel: hub 1-0:1.0: USB hub found Mar 4 02:12:20.516603 kernel: hub 1-0:1.0: 4 ports detected Mar 4 02:12:20.516939 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (478) Mar 4 02:12:20.588846 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 4 02:12:20.597613 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Mar 4 02:12:20.597932 kernel: hub 2-0:1.0: USB hub found Mar 4 02:12:20.599600 kernel: libata version 3.00 loaded. Mar 4 02:12:20.600596 kernel: hub 2-0:1.0: 4 ports detected Mar 4 02:12:20.618025 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Mar 4 02:12:20.716827 kernel: BTRFS: device fsid 251c1416-ef37-47f1-be3f-832af5870605 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (477) Mar 4 02:12:20.716872 kernel: ahci 0000:00:1f.2: version 3.0 Mar 4 02:12:20.717328 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 4 02:12:20.717352 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 4 02:12:20.717619 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 4 02:12:20.717846 kernel: scsi host0: ahci Mar 4 02:12:20.718162 kernel: scsi host1: ahci Mar 4 02:12:20.718391 kernel: scsi host2: ahci Mar 4 02:12:20.718658 kernel: scsi host3: ahci Mar 4 02:12:20.718888 kernel: scsi host4: ahci Mar 4 02:12:20.719119 kernel: scsi host5: ahci Mar 4 02:12:20.719366 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Mar 4 02:12:20.719387 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Mar 4 02:12:20.719404 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Mar 4 02:12:20.719422 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Mar 4 02:12:20.719439 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Mar 4 02:12:20.719474 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Mar 4 02:12:20.723233 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 4 02:12:20.731757 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 4 02:12:20.738084 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 4 02:12:20.738976 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 4 02:12:20.745755 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Mar 4 02:12:20.748529 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 4 02:12:20.758239 disk-uuid[562]: Primary Header is updated. Mar 4 02:12:20.758239 disk-uuid[562]: Secondary Entries is updated. Mar 4 02:12:20.758239 disk-uuid[562]: Secondary Header is updated. Mar 4 02:12:20.766609 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 4 02:12:20.775093 kernel: GPT:disk_guids don't match. Mar 4 02:12:20.775167 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 4 02:12:20.777230 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 4 02:12:20.784900 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 4 02:12:20.788659 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 4 02:12:20.791645 kernel: block device autoloading is deprecated and will be removed. Mar 4 02:12:20.843256 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 4 02:12:20.956193 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 4 02:12:20.956272 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 4 02:12:20.957616 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 4 02:12:20.960660 kernel: ata3: SATA link down (SStatus 0 SControl 300) Mar 4 02:12:20.960696 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 4 02:12:20.963733 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 4 02:12:21.011618 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 4 02:12:21.029177 kernel: usbcore: registered new interface driver usbhid Mar 4 02:12:21.029232 kernel: usbhid: USB HID core driver Mar 4 02:12:21.042123 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Mar 4 02:12:21.042174 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Mar 4 02:12:21.786072 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 
Mar 4 02:12:21.788795 disk-uuid[564]: The operation has completed successfully. Mar 4 02:12:21.792594 kernel: block device autoloading is deprecated and will be removed. Mar 4 02:12:21.851921 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 4 02:12:21.853227 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 4 02:12:21.883835 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 4 02:12:21.901250 sh[589]: Success Mar 4 02:12:21.922780 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Mar 4 02:12:21.984018 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 4 02:12:21.987709 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 4 02:12:21.990655 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 4 02:12:22.016855 kernel: BTRFS info (device dm-0): first mount of filesystem 251c1416-ef37-47f1-be3f-832af5870605 Mar 4 02:12:22.016993 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 4 02:12:22.018990 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 4 02:12:22.022490 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 4 02:12:22.022531 kernel: BTRFS info (device dm-0): using free space tree Mar 4 02:12:22.033625 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 4 02:12:22.035997 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 4 02:12:22.042797 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 4 02:12:22.045758 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Mar 4 02:12:22.069621 kernel: BTRFS info (device vda6): first mount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747 Mar 4 02:12:22.069744 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 4 02:12:22.069769 kernel: BTRFS info (device vda6): using free space tree Mar 4 02:12:22.077597 kernel: BTRFS info (device vda6): auto enabling async discard Mar 4 02:12:22.094236 kernel: BTRFS info (device vda6): last unmount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747 Mar 4 02:12:22.093602 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 4 02:12:22.103318 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 4 02:12:22.113817 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 4 02:12:22.241163 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 4 02:12:22.254947 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 4 02:12:22.290864 ignition[683]: Ignition 2.19.0 Mar 4 02:12:22.293410 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Mar 4 02:12:22.290887 ignition[683]: Stage: fetch-offline Mar 4 02:12:22.290995 ignition[683]: no configs at "/usr/lib/ignition/base.d" Mar 4 02:12:22.291017 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 4 02:12:22.291220 ignition[683]: parsed url from cmdline: "" Mar 4 02:12:22.291228 ignition[683]: no config URL provided Mar 4 02:12:22.291238 ignition[683]: reading system config file "/usr/lib/ignition/user.ign" Mar 4 02:12:22.291256 ignition[683]: no config at "/usr/lib/ignition/user.ign" Mar 4 02:12:22.291266 ignition[683]: failed to fetch config: resource requires networking Mar 4 02:12:22.302222 systemd-networkd[772]: lo: Link UP Mar 4 02:12:22.291589 ignition[683]: Ignition finished successfully Mar 4 02:12:22.302229 systemd-networkd[772]: lo: Gained carrier Mar 4 02:12:22.305226 systemd-networkd[772]: Enumeration completed Mar 4 02:12:22.305817 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 4 02:12:22.305833 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 4 02:12:22.305839 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 4 02:12:22.307295 systemd[1]: Reached target network.target - Network. Mar 4 02:12:22.308194 systemd-networkd[772]: eth0: Link UP Mar 4 02:12:22.308201 systemd-networkd[772]: eth0: Gained carrier Mar 4 02:12:22.308213 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 4 02:12:22.314768 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 4 02:12:22.354704 systemd-networkd[772]: eth0: DHCPv4 address 10.230.63.210/30, gateway 10.230.63.209 acquired from 10.230.63.209
Mar 4 02:12:22.365036 ignition[779]: Ignition 2.19.0
Mar 4 02:12:22.365055 ignition[779]: Stage: fetch
Mar 4 02:12:22.365296 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Mar 4 02:12:22.365319 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 4 02:12:22.365509 ignition[779]: parsed url from cmdline: ""
Mar 4 02:12:22.365516 ignition[779]: no config URL provided
Mar 4 02:12:22.365545 ignition[779]: reading system config file "/usr/lib/ignition/user.ign"
Mar 4 02:12:22.365564 ignition[779]: no config at "/usr/lib/ignition/user.ign"
Mar 4 02:12:22.365759 ignition[779]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Mar 4 02:12:22.365797 ignition[779]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Mar 4 02:12:22.365829 ignition[779]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Mar 4 02:12:22.381723 ignition[779]: GET result: OK
Mar 4 02:12:22.382528 ignition[779]: parsing config with SHA512: bad4d24fef0a6c58026dc0b703afd88375007554ee05db9293a8d0387bb38aa5f66e0f027e95ba60c4fca3b7a335f985baf5445cd974109fa08e2a415cd7c190
Mar 4 02:12:22.388183 unknown[779]: fetched base config from "system"
Mar 4 02:12:22.388200 unknown[779]: fetched base config from "system"
Mar 4 02:12:22.388210 unknown[779]: fetched user config from "openstack"
Mar 4 02:12:22.391248 ignition[779]: fetch: fetch complete
Mar 4 02:12:22.391268 ignition[779]: fetch: fetch passed
Mar 4 02:12:22.394064 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 4 02:12:22.391377 ignition[779]: Ignition finished successfully
Mar 4 02:12:22.405866 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 4 02:12:22.432343 ignition[786]: Ignition 2.19.0
Mar 4 02:12:22.432370 ignition[786]: Stage: kargs
Mar 4 02:12:22.432664 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Mar 4 02:12:22.432688 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 4 02:12:22.436284 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 4 02:12:22.434002 ignition[786]: kargs: kargs passed
Mar 4 02:12:22.434098 ignition[786]: Ignition finished successfully
Mar 4 02:12:22.450344 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 4 02:12:22.479939 ignition[792]: Ignition 2.19.0
Mar 4 02:12:22.479963 ignition[792]: Stage: disks
Mar 4 02:12:22.480249 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Mar 4 02:12:22.480271 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 4 02:12:22.481675 ignition[792]: disks: disks passed
Mar 4 02:12:22.484070 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 4 02:12:22.481759 ignition[792]: Ignition finished successfully
Mar 4 02:12:22.486044 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 4 02:12:22.487536 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 4 02:12:22.489093 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 4 02:12:22.490736 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 4 02:12:22.492350 systemd[1]: Reached target basic.target - Basic System.
Mar 4 02:12:22.504813 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 4 02:12:22.529732 systemd-fsck[800]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 4 02:12:22.534386 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 4 02:12:22.539709 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 4 02:12:22.674606 kernel: EXT4-fs (vda9): mounted filesystem 77c4d29a-0423-4e33-8b82-61754d97532c r/w with ordered data mode. Quota mode: none.
Mar 4 02:12:22.676166 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 4 02:12:22.678411 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 4 02:12:22.685695 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 4 02:12:22.697781 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 4 02:12:22.698972 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 4 02:12:22.705255 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Mar 4 02:12:22.708041 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 4 02:12:22.718918 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (808)
Mar 4 02:12:22.718952 kernel: BTRFS info (device vda6): first mount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 02:12:22.718973 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 4 02:12:22.718991 kernel: BTRFS info (device vda6): using free space tree
Mar 4 02:12:22.709236 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 4 02:12:22.722608 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 4 02:12:22.724475 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 4 02:12:22.727536 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 4 02:12:22.739804 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 4 02:12:22.871144 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Mar 4 02:12:22.882846 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Mar 4 02:12:22.889834 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Mar 4 02:12:22.902787 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 4 02:12:23.041739 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 4 02:12:23.058735 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 4 02:12:23.063908 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 4 02:12:23.073725 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 4 02:12:23.076289 kernel: BTRFS info (device vda6): last unmount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 02:12:23.114996 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 4 02:12:23.125811 ignition[924]: INFO : Ignition 2.19.0
Mar 4 02:12:23.125811 ignition[924]: INFO : Stage: mount
Mar 4 02:12:23.127620 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 02:12:23.127620 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 4 02:12:23.129479 ignition[924]: INFO : mount: mount passed
Mar 4 02:12:23.129479 ignition[924]: INFO : Ignition finished successfully
Mar 4 02:12:23.129879 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 4 02:12:24.251051 systemd-networkd[772]: eth0: Gained IPv6LL
Mar 4 02:12:25.760195 systemd-networkd[772]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8ff4:24:19ff:fee6:3fd2/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8ff4:24:19ff:fee6:3fd2/64 assigned by NDisc.
Mar 4 02:12:25.760214 systemd-networkd[772]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Mar 4 02:12:29.975511 coreos-metadata[810]: Mar 04 02:12:29.975 WARN failed to locate config-drive, using the metadata service API instead
Mar 4 02:12:29.999134 coreos-metadata[810]: Mar 04 02:12:29.999 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Mar 4 02:12:30.012401 coreos-metadata[810]: Mar 04 02:12:30.012 INFO Fetch successful
Mar 4 02:12:30.013277 coreos-metadata[810]: Mar 04 02:12:30.012 INFO wrote hostname srv-323j1.gb1.brightbox.com to /sysroot/etc/hostname
Mar 4 02:12:30.016808 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Mar 4 02:12:30.017026 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Mar 4 02:12:30.030782 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 4 02:12:30.050806 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 4 02:12:30.076605 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (941)
Mar 4 02:12:30.083112 kernel: BTRFS info (device vda6): first mount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 02:12:30.083161 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 4 02:12:30.083606 kernel: BTRFS info (device vda6): using free space tree
Mar 4 02:12:30.089601 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 4 02:12:30.093312 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 4 02:12:30.130132 ignition[959]: INFO : Ignition 2.19.0
Mar 4 02:12:30.131542 ignition[959]: INFO : Stage: files
Mar 4 02:12:30.133663 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 02:12:30.133663 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 4 02:12:30.135836 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Mar 4 02:12:30.137090 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 4 02:12:30.137090 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 4 02:12:30.141289 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 4 02:12:30.142536 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 4 02:12:30.143747 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 4 02:12:30.143152 unknown[959]: wrote ssh authorized keys file for user: core
Mar 4 02:12:30.145940 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 4 02:12:30.145940 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 4 02:12:30.303476 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 4 02:12:30.639752 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 4 02:12:30.639752 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 4 02:12:30.642426 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 4 02:12:31.065696 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 4 02:12:31.391859 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 4 02:12:31.391859 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 4 02:12:31.395212 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 4 02:12:31.395212 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 4 02:12:31.395212 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 4 02:12:31.395212 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 4 02:12:31.395212 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 4 02:12:31.395212 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 4 02:12:31.395212 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 4 02:12:31.395212 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 4 02:12:31.395212 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 4 02:12:31.395212 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 4 02:12:31.395212 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 4 02:12:31.395212 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 4 02:12:31.395212 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 4 02:12:31.659212 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 4 02:12:33.180007 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 4 02:12:33.180007 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 4 02:12:33.186166 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 4 02:12:33.186166 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 4 02:12:33.186166 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 4 02:12:33.186166 ignition[959]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 4 02:12:33.186166 ignition[959]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 4 02:12:33.186166 ignition[959]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 4 02:12:33.186166 ignition[959]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 4 02:12:33.186166 ignition[959]: INFO : files: files passed
Mar 4 02:12:33.186166 ignition[959]: INFO : Ignition finished successfully
Mar 4 02:12:33.187838 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 4 02:12:33.207375 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 4 02:12:33.212838 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 4 02:12:33.216096 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 4 02:12:33.216310 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 4 02:12:33.240600 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 02:12:33.240600 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 02:12:33.244928 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 02:12:33.246665 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 4 02:12:33.248216 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 4 02:12:33.252813 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 4 02:12:33.322766 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 4 02:12:33.322981 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 4 02:12:33.324865 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 4 02:12:33.326145 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 4 02:12:33.327916 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 4 02:12:33.340420 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 4 02:12:33.358766 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 4 02:12:33.366817 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 4 02:12:33.382724 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 4 02:12:33.384683 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 02:12:33.386676 systemd[1]: Stopped target timers.target - Timer Units.
Mar 4 02:12:33.387445 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 4 02:12:33.387679 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 4 02:12:33.389373 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 4 02:12:33.391097 systemd[1]: Stopped target basic.target - Basic System.
Mar 4 02:12:33.392563 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 4 02:12:33.393975 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 4 02:12:33.395466 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 4 02:12:33.397091 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 4 02:12:33.398865 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 4 02:12:33.400609 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 4 02:12:33.402248 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 4 02:12:33.403911 systemd[1]: Stopped target swap.target - Swaps.
Mar 4 02:12:33.405316 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 4 02:12:33.405608 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 4 02:12:33.407337 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 4 02:12:33.408266 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 02:12:33.409692 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 4 02:12:33.410190 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 02:12:33.411561 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 4 02:12:33.411854 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 4 02:12:33.413736 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 4 02:12:33.413917 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 4 02:12:33.414887 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 4 02:12:33.415051 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 4 02:12:33.422854 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 4 02:12:33.423601 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 4 02:12:33.423788 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 02:12:33.429766 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 4 02:12:33.438521 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 4 02:12:33.438826 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 02:12:33.442336 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 4 02:12:33.442547 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 4 02:12:33.453434 ignition[1012]: INFO : Ignition 2.19.0
Mar 4 02:12:33.453434 ignition[1012]: INFO : Stage: umount
Mar 4 02:12:33.453434 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 02:12:33.453434 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 4 02:12:33.455814 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 4 02:12:33.461014 ignition[1012]: INFO : umount: umount passed
Mar 4 02:12:33.461014 ignition[1012]: INFO : Ignition finished successfully
Mar 4 02:12:33.455992 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 4 02:12:33.464143 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 4 02:12:33.464322 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 4 02:12:33.465783 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 4 02:12:33.465903 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 4 02:12:33.468323 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 4 02:12:33.468446 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 4 02:12:33.469759 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 4 02:12:33.469849 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 4 02:12:33.473721 systemd[1]: Stopped target network.target - Network.
Mar 4 02:12:33.474726 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 4 02:12:33.474816 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 4 02:12:33.476316 systemd[1]: Stopped target paths.target - Path Units.
Mar 4 02:12:33.477668 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 4 02:12:33.478224 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 02:12:33.479960 systemd[1]: Stopped target slices.target - Slice Units.
Mar 4 02:12:33.480609 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 4 02:12:33.481363 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 4 02:12:33.481447 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 4 02:12:33.483720 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 4 02:12:33.483808 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 4 02:12:33.485665 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 4 02:12:33.485746 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 4 02:12:33.486645 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 4 02:12:33.486735 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 4 02:12:33.489178 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 4 02:12:33.491453 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 4 02:12:33.495227 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 4 02:12:33.497395 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 4 02:12:33.497911 systemd-networkd[772]: eth0: DHCPv6 lease lost
Mar 4 02:12:33.502711 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 4 02:12:33.504370 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 4 02:12:33.504474 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 4 02:12:33.507416 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 4 02:12:33.507743 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 4 02:12:33.512122 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 4 02:12:33.512695 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 4 02:12:33.515550 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 4 02:12:33.515715 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 02:12:33.522846 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 4 02:12:33.523649 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 4 02:12:33.523784 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 4 02:12:33.524725 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 4 02:12:33.524802 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 4 02:12:33.526260 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 4 02:12:33.526333 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 4 02:12:33.527813 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 4 02:12:33.527885 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 02:12:33.531985 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 02:12:33.546480 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 4 02:12:33.546968 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 02:12:33.549464 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 4 02:12:33.549632 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 4 02:12:33.551997 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 4 02:12:33.552136 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 4 02:12:33.553013 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 4 02:12:33.553090 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 02:12:33.554545 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 4 02:12:33.554680 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 4 02:12:33.556772 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 4 02:12:33.556844 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 4 02:12:33.558478 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 4 02:12:33.558553 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 02:12:33.574839 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 4 02:12:33.578650 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 4 02:12:33.578747 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 02:12:33.580506 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 02:12:33.580617 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 02:12:33.587102 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 4 02:12:33.587258 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 4 02:12:33.588375 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 4 02:12:33.599923 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 4 02:12:33.609760 systemd[1]: Switching root.
Mar 4 02:12:33.641229 systemd-journald[201]: Journal stopped
Mar 4 02:12:35.312149 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Mar 4 02:12:35.312260 kernel: SELinux: policy capability network_peer_controls=1
Mar 4 02:12:35.312288 kernel: SELinux: policy capability open_perms=1
Mar 4 02:12:35.312308 kernel: SELinux: policy capability extended_socket_class=1
Mar 4 02:12:35.312327 kernel: SELinux: policy capability always_check_network=0
Mar 4 02:12:35.312345 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 4 02:12:35.312365 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 4 02:12:35.312403 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 4 02:12:35.312432 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 4 02:12:35.312453 kernel: audit: type=1403 audit(1772590353.894:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 4 02:12:35.312487 systemd[1]: Successfully loaded SELinux policy in 57.025ms.
Mar 4 02:12:35.312522 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.419ms.
Mar 4 02:12:35.312545 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 4 02:12:35.312566 systemd[1]: Detected virtualization kvm.
Mar 4 02:12:35.313628 systemd[1]: Detected architecture x86-64.
Mar 4 02:12:35.313655 systemd[1]: Detected first boot.
Mar 4 02:12:35.313697 systemd[1]: Hostname set to .
Mar 4 02:12:35.313721 systemd[1]: Initializing machine ID from VM UUID.
Mar 4 02:12:35.313742 zram_generator::config[1058]: No configuration found.
Mar 4 02:12:35.313774 systemd[1]: Populated /etc with preset unit settings.
Mar 4 02:12:35.313797 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 4 02:12:35.313825 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 4 02:12:35.313846 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 4 02:12:35.313876 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 4 02:12:35.313914 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 4 02:12:35.313937 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 4 02:12:35.313958 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 4 02:12:35.313980 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 4 02:12:35.314001 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 4 02:12:35.314032 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 4 02:12:35.314055 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 4 02:12:35.314077 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 02:12:35.314098 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 02:12:35.314136 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 4 02:12:35.314160 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 4 02:12:35.314182 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 4 02:12:35.314203 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 4 02:12:35.314226 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 4 02:12:35.314248 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 02:12:35.314269 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 4 02:12:35.314306 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 4 02:12:35.314331 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 4 02:12:35.314352 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 4 02:12:35.314373 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 02:12:35.314394 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 4 02:12:35.314431 systemd[1]: Reached target slices.target - Slice Units.
Mar 4 02:12:35.314477 systemd[1]: Reached target swap.target - Swaps.
Mar 4 02:12:35.314501 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 4 02:12:35.314523 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 4 02:12:35.314543 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 02:12:35.314564 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 4 02:12:35.315627 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 02:12:35.315654 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 4 02:12:35.315676 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 4 02:12:35.315718 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 4 02:12:35.315742 systemd[1]: Mounting media.mount - External Media Directory...
Mar 4 02:12:35.315763 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 02:12:35.315785 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 4 02:12:35.315806 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 4 02:12:35.315828 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 4 02:12:35.315849 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 4 02:12:35.315870 systemd[1]: Reached target machines.target - Containers. Mar 4 02:12:35.315914 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 4 02:12:35.315937 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 4 02:12:35.315959 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 4 02:12:35.315980 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 4 02:12:35.316001 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 4 02:12:35.316033 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 4 02:12:35.316055 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 4 02:12:35.316077 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 4 02:12:35.316099 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 4 02:12:35.316134 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 4 02:12:35.316157 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 4 02:12:35.316178 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 4 02:12:35.316199 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 4 02:12:35.316221 systemd[1]: Stopped systemd-fsck-usr.service. Mar 4 02:12:35.316243 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 4 02:12:35.316264 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Mar 4 02:12:35.316294 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 4 02:12:35.316337 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 4 02:12:35.316362 kernel: ACPI: bus type drm_connector registered Mar 4 02:12:35.316383 kernel: fuse: init (API version 7.39) Mar 4 02:12:35.316403 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 4 02:12:35.316423 systemd[1]: verity-setup.service: Deactivated successfully. Mar 4 02:12:35.316444 systemd[1]: Stopped verity-setup.service. Mar 4 02:12:35.316465 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 4 02:12:35.316486 kernel: loop: module loaded Mar 4 02:12:35.316505 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 4 02:12:35.316539 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 4 02:12:35.316563 systemd[1]: Mounted media.mount - External Media Directory. Mar 4 02:12:35.317652 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 4 02:12:35.317681 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 4 02:12:35.317702 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 4 02:12:35.317744 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 4 02:12:35.317768 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 4 02:12:35.317790 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 4 02:12:35.317811 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 4 02:12:35.317832 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 4 02:12:35.317852 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Mar 4 02:12:35.317873 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 4 02:12:35.317911 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 4 02:12:35.317934 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 4 02:12:35.317970 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 4 02:12:35.317994 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 4 02:12:35.318015 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 4 02:12:35.318049 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 4 02:12:35.318092 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 4 02:12:35.318116 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 4 02:12:35.318138 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 4 02:12:35.318160 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 4 02:12:35.318180 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 4 02:12:35.318261 systemd-journald[1154]: Collecting audit messages is disabled. Mar 4 02:12:35.318301 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 4 02:12:35.318344 systemd-journald[1154]: Journal started Mar 4 02:12:35.318412 systemd-journald[1154]: Runtime Journal (/run/log/journal/ed273aee13da47f9bad869cfeff48756) is 4.7M, max 38.0M, 33.2M free. Mar 4 02:12:34.769899 systemd[1]: Queued start job for default target multi-user.target. Mar 4 02:12:34.794550 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 4 02:12:34.795307 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 4 02:12:35.323698 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Mar 4 02:12:35.330614 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 4 02:12:35.332647 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 4 02:12:35.336598 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 4 02:12:35.355630 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 4 02:12:35.367593 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 4 02:12:35.373603 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 4 02:12:35.382646 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 4 02:12:35.382704 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 4 02:12:35.396599 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 4 02:12:35.403703 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 4 02:12:35.412616 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 4 02:12:35.419603 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 4 02:12:35.434604 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 4 02:12:35.451609 systemd[1]: Started systemd-journald.service - Journal Service. Mar 4 02:12:35.461025 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 4 02:12:35.462788 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 4 02:12:35.468837 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Mar 4 02:12:35.470134 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 4 02:12:35.529914 kernel: loop0: detected capacity change from 0 to 140768 Mar 4 02:12:35.534786 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 4 02:12:35.546305 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 4 02:12:35.555331 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 4 02:12:35.601072 systemd-journald[1154]: Time spent on flushing to /var/log/journal/ed273aee13da47f9bad869cfeff48756 is 59.673ms for 1149 entries. Mar 4 02:12:35.601072 systemd-journald[1154]: System Journal (/var/log/journal/ed273aee13da47f9bad869cfeff48756) is 8.0M, max 584.8M, 576.8M free. Mar 4 02:12:35.695514 systemd-journald[1154]: Received client request to flush runtime journal. Mar 4 02:12:35.695594 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 4 02:12:35.695644 kernel: loop1: detected capacity change from 0 to 217752 Mar 4 02:12:35.604318 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 4 02:12:35.608768 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 4 02:12:35.635177 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 4 02:12:35.644791 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 4 02:12:35.680207 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 4 02:12:35.700253 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 4 02:12:35.702019 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 4 02:12:35.714818 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 4 02:12:35.736393 udevadm[1205]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 4 02:12:35.783598 kernel: loop2: detected capacity change from 0 to 8 Mar 4 02:12:35.823617 kernel: loop3: detected capacity change from 0 to 142488 Mar 4 02:12:35.850980 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Mar 4 02:12:35.851019 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Mar 4 02:12:35.863403 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 4 02:12:35.887715 kernel: loop4: detected capacity change from 0 to 140768 Mar 4 02:12:35.946752 kernel: loop5: detected capacity change from 0 to 217752 Mar 4 02:12:35.974622 kernel: loop6: detected capacity change from 0 to 8 Mar 4 02:12:35.995821 kernel: loop7: detected capacity change from 0 to 142488 Mar 4 02:12:36.020468 (sd-merge)[1218]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Mar 4 02:12:36.024986 (sd-merge)[1218]: Merged extensions into '/usr'. Mar 4 02:12:36.033748 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)... Mar 4 02:12:36.033861 systemd[1]: Reloading... Mar 4 02:12:36.201599 zram_generator::config[1244]: No configuration found. Mar 4 02:12:36.528113 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 4 02:12:36.533597 ldconfig[1169]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 4 02:12:36.600914 systemd[1]: Reloading finished in 566 ms. Mar 4 02:12:36.647874 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 4 02:12:36.649652 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 4 02:12:36.660856 systemd[1]: Starting ensure-sysext.service... 
Mar 4 02:12:36.671875 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 4 02:12:36.685487 systemd[1]: Reloading requested from client PID 1300 ('systemctl') (unit ensure-sysext.service)... Mar 4 02:12:36.685625 systemd[1]: Reloading... Mar 4 02:12:36.736952 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 4 02:12:36.737569 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 4 02:12:36.740503 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 4 02:12:36.742022 systemd-tmpfiles[1301]: ACLs are not supported, ignoring. Mar 4 02:12:36.742147 systemd-tmpfiles[1301]: ACLs are not supported, ignoring. Mar 4 02:12:36.750928 systemd-tmpfiles[1301]: Detected autofs mount point /boot during canonicalization of boot. Mar 4 02:12:36.751498 systemd-tmpfiles[1301]: Skipping /boot Mar 4 02:12:36.771688 systemd-tmpfiles[1301]: Detected autofs mount point /boot during canonicalization of boot. Mar 4 02:12:36.771865 systemd-tmpfiles[1301]: Skipping /boot Mar 4 02:12:36.793608 zram_generator::config[1324]: No configuration found. Mar 4 02:12:36.983859 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 4 02:12:37.054055 systemd[1]: Reloading finished in 367 ms. Mar 4 02:12:37.078061 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 4 02:12:37.086650 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 4 02:12:37.097855 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 4 02:12:37.103800 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Mar 4 02:12:37.114904 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 4 02:12:37.120816 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 4 02:12:37.131754 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 4 02:12:37.139825 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 4 02:12:37.160023 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 4 02:12:37.164942 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 4 02:12:37.165253 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 4 02:12:37.175710 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 4 02:12:37.180719 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 4 02:12:37.190915 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 4 02:12:37.192500 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 4 02:12:37.192694 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 4 02:12:37.196755 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 4 02:12:37.197057 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 4 02:12:37.197322 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Mar 4 02:12:37.197478 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 4 02:12:37.205379 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 4 02:12:37.205757 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 4 02:12:37.213995 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 4 02:12:37.216498 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 4 02:12:37.216735 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 4 02:12:37.221609 systemd[1]: Finished ensure-sysext.service. Mar 4 02:12:37.237089 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 4 02:12:37.250590 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 4 02:12:37.275709 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 4 02:12:37.276788 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 4 02:12:37.279057 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 4 02:12:37.285413 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 4 02:12:37.293854 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 4 02:12:37.296685 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 4 02:12:37.299188 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Mar 4 02:12:37.299437 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 4 02:12:37.300755 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 4 02:12:37.301648 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 4 02:12:37.304771 systemd-udevd[1397]: Using default interface naming scheme 'v255'. Mar 4 02:12:37.310838 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 4 02:12:37.312121 augenrules[1425]: No rules Mar 4 02:12:37.312917 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 4 02:12:37.312989 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 4 02:12:37.315546 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 4 02:12:37.323302 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 4 02:12:37.323609 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 4 02:12:37.350847 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 4 02:12:37.356781 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 4 02:12:37.369799 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 4 02:12:37.487328 systemd-resolved[1396]: Positive Trust Anchors: Mar 4 02:12:37.487882 systemd-resolved[1396]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 4 02:12:37.488051 systemd-resolved[1396]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 4 02:12:37.501804 systemd-resolved[1396]: Using system hostname 'srv-323j1.gb1.brightbox.com'. Mar 4 02:12:37.505747 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 4 02:12:37.507846 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 4 02:12:37.513338 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 4 02:12:37.514230 systemd[1]: Reached target time-set.target - System Time Set. Mar 4 02:12:37.555535 systemd-networkd[1438]: lo: Link UP Mar 4 02:12:37.558120 systemd-networkd[1438]: lo: Gained carrier Mar 4 02:12:37.563667 systemd-networkd[1438]: Enumeration completed Mar 4 02:12:37.563902 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 4 02:12:37.568645 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 4 02:12:37.568658 systemd-networkd[1438]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 4 02:12:37.571110 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 4 02:12:37.573030 systemd[1]: Reached target network.target - Network. 
Mar 4 02:12:37.573120 systemd-networkd[1438]: eth0: Link UP Mar 4 02:12:37.573128 systemd-networkd[1438]: eth0: Gained carrier Mar 4 02:12:37.573145 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 4 02:12:37.582886 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 4 02:12:37.602170 systemd-networkd[1438]: eth0: DHCPv4 address 10.230.63.210/30, gateway 10.230.63.209 acquired from 10.230.63.209 Mar 4 02:12:37.608414 systemd-timesyncd[1409]: Network configuration changed, trying to establish connection. Mar 4 02:12:37.647813 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 4 02:12:37.680611 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1444) Mar 4 02:12:37.707603 kernel: mousedev: PS/2 mouse device common for all mice Mar 4 02:12:37.747035 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 4 02:12:37.774434 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 4 02:12:37.780631 kernel: ACPI: button: Power Button [PWRF] Mar 4 02:12:37.783818 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 4 02:12:37.817654 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 4 02:12:37.834664 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 4 02:12:37.839165 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 4 02:12:37.839484 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 4 02:12:37.870118 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Mar 4 02:12:37.956015 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 4 02:12:38.190755 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 4 02:12:38.195787 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 4 02:12:38.204841 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 4 02:12:38.227791 lvm[1474]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 4 02:12:38.281509 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 4 02:12:38.282849 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 4 02:12:38.283712 systemd[1]: Reached target sysinit.target - System Initialization. Mar 4 02:12:38.284743 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 4 02:12:38.285753 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 4 02:12:38.287150 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 4 02:12:38.288331 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 4 02:12:38.289150 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 4 02:12:38.289971 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 4 02:12:38.290023 systemd[1]: Reached target paths.target - Path Units. Mar 4 02:12:38.290743 systemd[1]: Reached target timers.target - Timer Units. Mar 4 02:12:38.295721 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 4 02:12:38.299126 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 4 02:12:38.308695 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Mar 4 02:12:38.311787 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 4 02:12:38.313458 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 4 02:12:38.314440 systemd[1]: Reached target sockets.target - Socket Units. Mar 4 02:12:38.315271 systemd[1]: Reached target basic.target - Basic System. Mar 4 02:12:38.316237 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 4 02:12:38.316422 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 4 02:12:38.324642 systemd[1]: Starting containerd.service - containerd container runtime... Mar 4 02:12:38.329125 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 4 02:12:38.333494 lvm[1478]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 4 02:12:38.335995 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 4 02:12:38.341973 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 4 02:12:38.351864 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 4 02:12:38.353674 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 4 02:12:38.357833 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 4 02:12:38.361771 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 4 02:12:38.369897 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 4 02:12:38.378779 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 4 02:12:38.392939 systemd[1]: Starting systemd-logind.service - User Login Management... 
Mar 4 02:12:38.397334 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 4 02:12:38.398335 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 4 02:12:38.405904 systemd[1]: Starting update-engine.service - Update Engine... Mar 4 02:12:38.411762 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 4 02:12:38.423322 jq[1482]: false Mar 4 02:12:38.426224 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 4 02:12:38.426567 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 4 02:12:38.443210 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 4 02:12:38.443503 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 4 02:12:38.476139 extend-filesystems[1483]: Found loop4 Mar 4 02:12:38.499241 extend-filesystems[1483]: Found loop5 Mar 4 02:12:38.499241 extend-filesystems[1483]: Found loop6 Mar 4 02:12:38.499241 extend-filesystems[1483]: Found loop7 Mar 4 02:12:38.499241 extend-filesystems[1483]: Found vda Mar 4 02:12:38.499241 extend-filesystems[1483]: Found vda1 Mar 4 02:12:38.499241 extend-filesystems[1483]: Found vda2 Mar 4 02:12:38.499241 extend-filesystems[1483]: Found vda3 Mar 4 02:12:38.499241 extend-filesystems[1483]: Found usr Mar 4 02:12:38.499241 extend-filesystems[1483]: Found vda4 Mar 4 02:12:38.499241 extend-filesystems[1483]: Found vda6 Mar 4 02:12:38.499241 extend-filesystems[1483]: Found vda7 Mar 4 02:12:38.499241 extend-filesystems[1483]: Found vda9 Mar 4 02:12:38.499241 extend-filesystems[1483]: Checking size of /dev/vda9 Mar 4 02:12:38.482093 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Mar 4 02:12:38.479878 dbus-daemon[1481]: [system] SELinux support is enabled Mar 4 02:12:38.499972 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 4 02:12:38.498867 dbus-daemon[1481]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1438 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 4 02:12:38.519705 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 4 02:12:38.520531 dbus-daemon[1481]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 4 02:12:38.537408 jq[1493]: true Mar 4 02:12:38.519770 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 4 02:12:38.524087 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 4 02:12:38.524122 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 4 02:12:38.526092 systemd[1]: motdgen.service: Deactivated successfully. Mar 4 02:12:38.528089 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 4 02:12:38.578694 extend-filesystems[1483]: Resized partition /dev/vda9 Mar 4 02:12:38.598701 extend-filesystems[1522]: resize2fs 1.47.1 (20-May-2024) Mar 4 02:12:38.601268 (ntainerd)[1512]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 4 02:12:38.623035 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Mar 4 02:12:38.601842 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Mar 4 02:12:38.623527 tar[1501]: linux-amd64/LICENSE Mar 4 02:12:38.623527 tar[1501]: linux-amd64/helm Mar 4 02:12:38.624025 update_engine[1491]: I20260304 02:12:38.615298 1491 main.cc:92] Flatcar Update Engine starting Mar 4 02:12:38.644617 systemd[1]: Started update-engine.service - Update Engine. Mar 4 02:12:38.647005 update_engine[1491]: I20260304 02:12:38.645843 1491 update_check_scheduler.cc:74] Next update check in 7m29s Mar 4 02:12:38.648932 jq[1515]: true Mar 4 02:12:38.654792 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 4 02:12:38.671667 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1440) Mar 4 02:12:38.747605 systemd-logind[1490]: Watching system buttons on /dev/input/event2 (Power Button) Mar 4 02:12:38.748139 systemd-logind[1490]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 4 02:12:38.753263 systemd-logind[1490]: New seat seat0. Mar 4 02:12:38.756440 systemd[1]: Started systemd-logind.service - User Login Management. Mar 4 02:12:39.229852 locksmithd[1523]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 4 02:12:39.263298 bash[1540]: Updated "/home/core/.ssh/authorized_keys" Mar 4 02:12:39.265369 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 4 02:12:39.296866 systemd-networkd[1438]: eth0: Gained IPv6LL Mar 4 02:12:39.307125 systemd[1]: Starting sshkeys.service... Mar 4 02:12:39.312451 systemd-timesyncd[1409]: Network configuration changed, trying to establish connection. Mar 4 02:12:39.317466 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 4 02:12:39.322285 dbus-daemon[1481]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 4 02:12:39.323520 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Mar 4 02:12:39.325421 dbus-daemon[1481]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1521 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 4 02:12:39.330732 systemd[1]: Reached target network-online.target - Network is Online.
Mar 4 02:12:39.342845 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 02:12:39.353935 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 4 02:12:39.366884 systemd[1]: Starting polkit.service - Authorization Manager...
Mar 4 02:12:39.418619 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Mar 4 02:12:39.424806 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 4 02:12:39.436175 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 4 02:12:39.447145 polkitd[1554]: Started polkitd version 121
Mar 4 02:12:39.729031 extend-filesystems[1522]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 4 02:12:39.729031 extend-filesystems[1522]: old_desc_blocks = 1, new_desc_blocks = 8
Mar 4 02:12:39.729031 extend-filesystems[1522]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Mar 4 02:12:39.695920 polkitd[1554]: Loading rules from directory /etc/polkit-1/rules.d
Mar 4 02:12:39.771466 containerd[1512]: time="2026-03-04T02:12:39.722898419Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 4 02:12:39.772310 extend-filesystems[1483]: Resized filesystem in /dev/vda9
Mar 4 02:12:39.768162 systemd[1]: Started polkit.service - Authorization Manager.
Mar 4 02:12:39.696061 polkitd[1554]: Loading rules from directory /usr/share/polkit-1/rules.d
Mar 4 02:12:39.773663 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 4 02:12:39.701225 polkitd[1554]: Finished loading, compiling and executing 2 rules
Mar 4 02:12:39.774565 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 4 02:12:39.765515 dbus-daemon[1481]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Mar 4 02:12:39.768384 polkitd[1554]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Mar 4 02:12:39.818511 systemd-hostnamed[1521]: Hostname set to (static)
Mar 4 02:12:39.820255 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 4 02:12:39.839310 containerd[1512]: time="2026-03-04T02:12:39.836752019Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 4 02:12:39.838022 systemd-timesyncd[1409]: Network configuration changed, trying to establish connection.
Mar 4 02:12:39.839229 systemd-networkd[1438]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8ff4:24:19ff:fee6:3fd2/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8ff4:24:19ff:fee6:3fd2/64 assigned by NDisc.
Mar 4 02:12:39.839245 systemd-networkd[1438]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Mar 4 02:12:39.848077 containerd[1512]: time="2026-03-04T02:12:39.848026133Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 4 02:12:39.850488 containerd[1512]: time="2026-03-04T02:12:39.849635403Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 4 02:12:39.850488 containerd[1512]: time="2026-03-04T02:12:39.849692528Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 4 02:12:39.850488 containerd[1512]: time="2026-03-04T02:12:39.850130260Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 4 02:12:39.850488 containerd[1512]: time="2026-03-04T02:12:39.850162025Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 4 02:12:39.850488 containerd[1512]: time="2026-03-04T02:12:39.850336082Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 4 02:12:39.850488 containerd[1512]: time="2026-03-04T02:12:39.850370214Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 4 02:12:39.851528 containerd[1512]: time="2026-03-04T02:12:39.851494503Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 4 02:12:39.854460 containerd[1512]: time="2026-03-04T02:12:39.853625135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 4 02:12:39.854460 containerd[1512]: time="2026-03-04T02:12:39.853662000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 4 02:12:39.854460 containerd[1512]: time="2026-03-04T02:12:39.853681385Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 4 02:12:39.854460 containerd[1512]: time="2026-03-04T02:12:39.853898767Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..."
type=io.containerd.snapshotter.v1
Mar 4 02:12:39.854460 containerd[1512]: time="2026-03-04T02:12:39.854398547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 4 02:12:39.854889 containerd[1512]: time="2026-03-04T02:12:39.854836327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 4 02:12:39.855591 containerd[1512]: time="2026-03-04T02:12:39.855184733Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 4 02:12:39.856303 containerd[1512]: time="2026-03-04T02:12:39.855569056Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 4 02:12:39.856559 containerd[1512]: time="2026-03-04T02:12:39.856532582Z" level=info msg="metadata content store policy set" policy=shared
Mar 4 02:12:39.865743 containerd[1512]: time="2026-03-04T02:12:39.864115914Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 4 02:12:39.865743 containerd[1512]: time="2026-03-04T02:12:39.864217132Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 4 02:12:39.865743 containerd[1512]: time="2026-03-04T02:12:39.864273652Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 4 02:12:39.865743 containerd[1512]: time="2026-03-04T02:12:39.864308956Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 4 02:12:39.865743 containerd[1512]: time="2026-03-04T02:12:39.864380126Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 4 02:12:39.866078 containerd[1512]: time="2026-03-04T02:12:39.866047902Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 4 02:12:39.867136 containerd[1512]: time="2026-03-04T02:12:39.867104385Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 4 02:12:39.870674 containerd[1512]: time="2026-03-04T02:12:39.868020185Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 4 02:12:39.870674 containerd[1512]: time="2026-03-04T02:12:39.868057011Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 4 02:12:39.870674 containerd[1512]: time="2026-03-04T02:12:39.868110388Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 4 02:12:39.870674 containerd[1512]: time="2026-03-04T02:12:39.868135910Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 4 02:12:39.870674 containerd[1512]: time="2026-03-04T02:12:39.868203213Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 4 02:12:39.870674 containerd[1512]: time="2026-03-04T02:12:39.868255712Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 4 02:12:39.870674 containerd[1512]: time="2026-03-04T02:12:39.868290158Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 4 02:12:39.871598 containerd[1512]: time="2026-03-04T02:12:39.871001808Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..."
type=io.containerd.service.v1
Mar 4 02:12:39.871598 containerd[1512]: time="2026-03-04T02:12:39.871040991Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 4 02:12:39.871598 containerd[1512]: time="2026-03-04T02:12:39.871067472Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 4 02:12:39.871598 containerd[1512]: time="2026-03-04T02:12:39.871088262Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 4 02:12:39.871598 containerd[1512]: time="2026-03-04T02:12:39.871128617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 4 02:12:39.871598 containerd[1512]: time="2026-03-04T02:12:39.871154116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 4 02:12:39.871598 containerd[1512]: time="2026-03-04T02:12:39.871175855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 4 02:12:39.871598 containerd[1512]: time="2026-03-04T02:12:39.871203085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 4 02:12:39.871598 containerd[1512]: time="2026-03-04T02:12:39.871225836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 4 02:12:39.871598 containerd[1512]: time="2026-03-04T02:12:39.871246953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 4 02:12:39.871598 containerd[1512]: time="2026-03-04T02:12:39.871267260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 4 02:12:39.871598 containerd[1512]: time="2026-03-04T02:12:39.871290367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 4 02:12:39.871598 containerd[1512]: time="2026-03-04T02:12:39.871314067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 4 02:12:39.871598 containerd[1512]: time="2026-03-04T02:12:39.871337991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 4 02:12:39.872147 containerd[1512]: time="2026-03-04T02:12:39.871358956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 4 02:12:39.872147 containerd[1512]: time="2026-03-04T02:12:39.871386141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 4 02:12:39.872147 containerd[1512]: time="2026-03-04T02:12:39.871415302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 4 02:12:39.872147 containerd[1512]: time="2026-03-04T02:12:39.871442497Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 4 02:12:39.872147 containerd[1512]: time="2026-03-04T02:12:39.871488109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 4 02:12:39.872380 containerd[1512]: time="2026-03-04T02:12:39.872352592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 4 02:12:39.872476 containerd[1512]: time="2026-03-04T02:12:39.872452547Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 4 02:12:39.873017 containerd[1512]: time="2026-03-04T02:12:39.872707358Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 4 02:12:39.873017 containerd[1512]: time="2026-03-04T02:12:39.872801279Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..."
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 4 02:12:39.873017 containerd[1512]: time="2026-03-04T02:12:39.872829656Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 4 02:12:39.873017 containerd[1512]: time="2026-03-04T02:12:39.872850702Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 4 02:12:39.873017 containerd[1512]: time="2026-03-04T02:12:39.872868725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 4 02:12:39.873017 containerd[1512]: time="2026-03-04T02:12:39.872913421Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 4 02:12:39.873017 containerd[1512]: time="2026-03-04T02:12:39.872935116Z" level=info msg="NRI interface is disabled by configuration."
Mar 4 02:12:39.873017 containerd[1512]: time="2026-03-04T02:12:39.872952513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 4 02:12:39.873609 containerd[1512]: time="2026-03-04T02:12:39.873474403Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 4 02:12:39.874081 containerd[1512]: time="2026-03-04T02:12:39.873630730Z" level=info msg="Connect containerd service"
Mar 4 02:12:39.874081 containerd[1512]: time="2026-03-04T02:12:39.873709961Z" level=info msg="using legacy CRI server"
Mar 4 02:12:39.874081 containerd[1512]: time="2026-03-04T02:12:39.873736404Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 4 02:12:39.874081 containerd[1512]: time="2026-03-04T02:12:39.873944104Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 4 02:12:39.881537 containerd[1512]: time="2026-03-04T02:12:39.881146192Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 4 02:12:39.883550 containerd[1512]: time="2026-03-04T02:12:39.883332480Z" level=info msg="Start subscribing containerd event"
Mar 4 02:12:39.883550 containerd[1512]: time="2026-03-04T02:12:39.883430770Z" level=info msg="Start recovering state"
Mar 4 02:12:39.890150 containerd[1512]: time="2026-03-04T02:12:39.888675407Z" level=info msg=serving...
address=/run/containerd/containerd.sock.ttrpc
Mar 4 02:12:39.890521 containerd[1512]: time="2026-03-04T02:12:39.890492641Z" level=info msg="Start event monitor"
Mar 4 02:12:39.890921 containerd[1512]: time="2026-03-04T02:12:39.890600638Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 4 02:12:39.890921 containerd[1512]: time="2026-03-04T02:12:39.890629686Z" level=info msg="Start snapshots syncer"
Mar 4 02:12:39.890921 containerd[1512]: time="2026-03-04T02:12:39.890732167Z" level=info msg="Start cni network conf syncer for default"
Mar 4 02:12:39.890921 containerd[1512]: time="2026-03-04T02:12:39.890750278Z" level=info msg="Start streaming server"
Mar 4 02:12:39.911945 containerd[1512]: time="2026-03-04T02:12:39.911242628Z" level=info msg="containerd successfully booted in 0.229821s"
Mar 4 02:12:39.911446 systemd[1]: Started containerd.service - containerd container runtime.
Mar 4 02:12:40.201978 sshd_keygen[1520]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 4 02:12:40.407117 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 4 02:12:40.442878 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 4 02:12:40.481392 systemd[1]: issuegen.service: Deactivated successfully.
Mar 4 02:12:40.481741 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 4 02:12:40.495708 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 4 02:12:40.555221 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 4 02:12:40.565316 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 4 02:12:40.578245 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 4 02:12:40.579486 systemd[1]: Reached target getty.target - Login Prompts.
Mar 4 02:12:41.016658 tar[1501]: linux-amd64/README.md
Mar 4 02:12:41.065516 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 4 02:12:41.853960 systemd-timesyncd[1409]: Network configuration changed, trying to establish connection.
Mar 4 02:12:41.867974 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 02:12:41.884026 (kubelet)[1608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 4 02:12:42.538974 kubelet[1608]: E0304 02:12:42.538854 1608 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 4 02:12:42.541885 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 4 02:12:42.542159 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 4 02:12:42.543625 systemd[1]: kubelet.service: Consumed 1.916s CPU time.
Mar 4 02:12:45.656827 login[1597]: pam_lastlog(login:session): file /var/log/lastlog is locked/read, retrying
Mar 4 02:12:45.657756 login[1596]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Mar 4 02:12:45.681445 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 4 02:12:45.694124 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 4 02:12:45.699933 systemd-logind[1490]: New session 2 of user core.
Mar 4 02:12:45.723079 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 4 02:12:45.731115 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 4 02:12:45.746905 (systemd)[1622]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 4 02:12:45.790319 coreos-metadata[1480]: Mar 04 02:12:45.790 WARN failed to locate config-drive, using the metadata service API instead
Mar 4 02:12:45.821719 coreos-metadata[1480]: Mar 04 02:12:45.821 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Mar 4 02:12:45.829637 coreos-metadata[1480]: Mar 04 02:12:45.829 INFO Fetch failed with 404: resource not found
Mar 4 02:12:45.829934 coreos-metadata[1480]: Mar 04 02:12:45.829 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Mar 4 02:12:45.830144 coreos-metadata[1480]: Mar 04 02:12:45.830 INFO Fetch successful
Mar 4 02:12:45.830435 coreos-metadata[1480]: Mar 04 02:12:45.830 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Mar 4 02:12:45.842859 coreos-metadata[1480]: Mar 04 02:12:45.842 INFO Fetch successful
Mar 4 02:12:45.843177 coreos-metadata[1480]: Mar 04 02:12:45.843 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Mar 4 02:12:45.854708 coreos-metadata[1480]: Mar 04 02:12:45.854 INFO Fetch successful
Mar 4 02:12:45.854956 coreos-metadata[1480]: Mar 04 02:12:45.854 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Mar 4 02:12:45.871041 coreos-metadata[1480]: Mar 04 02:12:45.870 INFO Fetch successful
Mar 4 02:12:45.871215 coreos-metadata[1480]: Mar 04 02:12:45.871 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Mar 4 02:12:45.895306 coreos-metadata[1480]: Mar 04 02:12:45.895 INFO Fetch successful
Mar 4 02:12:45.913538 systemd[1622]: Queued start job for default target default.target.
Mar 4 02:12:45.916695 systemd[1622]: Created slice app.slice - User Application Slice.
Mar 4 02:12:45.916738 systemd[1622]: Reached target paths.target - Paths.
Mar 4 02:12:45.916762 systemd[1622]: Reached target timers.target - Timers.
Mar 4 02:12:45.921772 systemd[1622]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 4 02:12:45.927222 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 4 02:12:45.928547 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 4 02:12:45.937727 systemd[1622]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 4 02:12:45.938572 systemd[1622]: Reached target sockets.target - Sockets.
Mar 4 02:12:45.938622 systemd[1622]: Reached target basic.target - Basic System.
Mar 4 02:12:45.938713 systemd[1622]: Reached target default.target - Main User Target.
Mar 4 02:12:45.938785 systemd[1622]: Startup finished in 181ms.
Mar 4 02:12:45.939242 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 4 02:12:45.951930 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 4 02:12:46.660116 login[1597]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Mar 4 02:12:46.667030 systemd-logind[1490]: New session 1 of user core.
Mar 4 02:12:46.679926 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 4 02:12:46.899618 coreos-metadata[1557]: Mar 04 02:12:46.897 WARN failed to locate config-drive, using the metadata service API instead
Mar 4 02:12:46.922318 coreos-metadata[1557]: Mar 04 02:12:46.922 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Mar 4 02:12:46.958970 coreos-metadata[1557]: Mar 04 02:12:46.958 INFO Fetch successful
Mar 4 02:12:46.959350 coreos-metadata[1557]: Mar 04 02:12:46.959 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 4 02:12:46.991180 coreos-metadata[1557]: Mar 04 02:12:46.991 INFO Fetch successful
Mar 4 02:12:46.994792 unknown[1557]: wrote ssh authorized keys file for user: core
Mar 4 02:12:47.034164 update-ssh-keys[1659]: Updated "/home/core/.ssh/authorized_keys"
Mar 4 02:12:47.035216 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 4 02:12:47.038803 systemd[1]: Finished sshkeys.service.
Mar 4 02:12:47.042830 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 4 02:12:47.043419 systemd[1]: Startup finished in 1.900s (kernel) + 15.117s (initrd) + 13.204s (userspace) = 30.222s.
Mar 4 02:12:48.422992 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 4 02:12:48.430026 systemd[1]: Started sshd@0-10.230.63.210:22-20.161.92.111:52354.service - OpenSSH per-connection server daemon (20.161.92.111:52354).
Mar 4 02:12:49.050220 sshd[1664]: Accepted publickey for core from 20.161.92.111 port 52354 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:12:49.051268 sshd[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:12:49.058833 systemd-logind[1490]: New session 3 of user core.
Mar 4 02:12:49.065815 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 4 02:12:49.567035 systemd[1]: Started sshd@1-10.230.63.210:22-20.161.92.111:42456.service - OpenSSH per-connection server daemon (20.161.92.111:42456).
Mar 4 02:12:50.133732 sshd[1669]: Accepted publickey for core from 20.161.92.111 port 42456 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:12:50.134721 sshd[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:12:50.141530 systemd-logind[1490]: New session 4 of user core.
Mar 4 02:12:50.151854 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 4 02:12:50.537850 sshd[1669]: pam_unix(sshd:session): session closed for user core
Mar 4 02:12:50.543735 systemd[1]: sshd@1-10.230.63.210:22-20.161.92.111:42456.service: Deactivated successfully.
Mar 4 02:12:50.546321 systemd[1]: session-4.scope: Deactivated successfully.
Mar 4 02:12:50.547423 systemd-logind[1490]: Session 4 logged out. Waiting for processes to exit.
Mar 4 02:12:50.549253 systemd-logind[1490]: Removed session 4.
Mar 4 02:12:50.658007 systemd[1]: Started sshd@2-10.230.63.210:22-20.161.92.111:42470.service - OpenSSH per-connection server daemon (20.161.92.111:42470).
Mar 4 02:12:51.351950 sshd[1676]: Accepted publickey for core from 20.161.92.111 port 42470 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:12:51.354858 sshd[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:12:51.361814 systemd-logind[1490]: New session 5 of user core.
Mar 4 02:12:51.370833 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 4 02:12:51.768787 sshd[1676]: pam_unix(sshd:session): session closed for user core
Mar 4 02:12:51.774009 systemd[1]: sshd@2-10.230.63.210:22-20.161.92.111:42470.service: Deactivated successfully.
Mar 4 02:12:51.776196 systemd[1]: session-5.scope: Deactivated successfully.
Mar 4 02:12:51.777171 systemd-logind[1490]: Session 5 logged out. Waiting for processes to exit.
Mar 4 02:12:51.778723 systemd-logind[1490]: Removed session 5.
Mar 4 02:12:51.888991 systemd[1]: Started sshd@3-10.230.63.210:22-20.161.92.111:42486.service - OpenSSH per-connection server daemon (20.161.92.111:42486).
Mar 4 02:12:52.472310 sshd[1683]: Accepted publickey for core from 20.161.92.111 port 42486 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:12:52.474569 sshd[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:12:52.480864 systemd-logind[1490]: New session 6 of user core.
Mar 4 02:12:52.493948 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 4 02:12:52.646443 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 4 02:12:52.661148 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 02:12:52.894002 sshd[1683]: pam_unix(sshd:session): session closed for user core
Mar 4 02:12:52.901281 systemd[1]: sshd@3-10.230.63.210:22-20.161.92.111:42486.service: Deactivated successfully.
Mar 4 02:12:52.904813 systemd[1]: session-6.scope: Deactivated successfully.
Mar 4 02:12:52.906542 systemd-logind[1490]: Session 6 logged out. Waiting for processes to exit.
Mar 4 02:12:52.909072 systemd-logind[1490]: Removed session 6.
Mar 4 02:12:53.002984 systemd[1]: Started sshd@4-10.230.63.210:22-20.161.92.111:42490.service - OpenSSH per-connection server daemon (20.161.92.111:42490).
Mar 4 02:12:53.106830 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 02:12:53.108294 (kubelet)[1700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 4 02:12:53.172558 kubelet[1700]: E0304 02:12:53.172408 1700 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 4 02:12:53.177430 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 4 02:12:53.177947 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 4 02:12:53.599639 sshd[1693]: Accepted publickey for core from 20.161.92.111 port 42490 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:12:53.601440 sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:12:53.608386 systemd-logind[1490]: New session 7 of user core.
Mar 4 02:12:53.620923 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 4 02:12:53.940663 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 4 02:12:53.941149 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 4 02:12:53.962145 sudo[1709]: pam_unix(sudo:session): session closed for user root
Mar 4 02:12:54.057051 sshd[1693]: pam_unix(sshd:session): session closed for user core
Mar 4 02:12:54.063145 systemd[1]: sshd@4-10.230.63.210:22-20.161.92.111:42490.service: Deactivated successfully.
Mar 4 02:12:54.065990 systemd[1]: session-7.scope: Deactivated successfully.
Mar 4 02:12:54.067423 systemd-logind[1490]: Session 7 logged out. Waiting for processes to exit.
Mar 4 02:12:54.068937 systemd-logind[1490]: Removed session 7.
Mar 4 02:12:54.155913 systemd[1]: Started sshd@5-10.230.63.210:22-20.161.92.111:42494.service - OpenSSH per-connection server daemon (20.161.92.111:42494).
Mar 4 02:12:54.735314 sshd[1714]: Accepted publickey for core from 20.161.92.111 port 42494 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:12:54.736366 sshd[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:12:54.742473 systemd-logind[1490]: New session 8 of user core.
Mar 4 02:12:54.753819 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 4 02:12:55.052803 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 4 02:12:55.054202 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 4 02:12:55.060232 sudo[1718]: pam_unix(sudo:session): session closed for user root
Mar 4 02:12:55.069165 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 4 02:12:55.069745 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 4 02:12:55.087142 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 4 02:12:55.103724 auditctl[1721]: No rules
Mar 4 02:12:55.104818 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 4 02:12:55.105131 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 4 02:12:55.113142 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 4 02:12:55.153980 augenrules[1739]: No rules
Mar 4 02:12:55.155702 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 4 02:12:55.157512 sudo[1717]: pam_unix(sudo:session): session closed for user root
Mar 4 02:12:55.250205 sshd[1714]: pam_unix(sshd:session): session closed for user core
Mar 4 02:12:55.256150 systemd[1]: sshd@5-10.230.63.210:22-20.161.92.111:42494.service: Deactivated successfully.
Mar 4 02:12:55.258965 systemd[1]: session-8.scope: Deactivated successfully.
Mar 4 02:12:55.260237 systemd-logind[1490]: Session 8 logged out. Waiting for processes to exit.
Mar 4 02:12:55.262466 systemd-logind[1490]: Removed session 8.
Mar 4 02:12:55.358963 systemd[1]: Started sshd@6-10.230.63.210:22-20.161.92.111:42508.service - OpenSSH per-connection server daemon (20.161.92.111:42508).
Mar 4 02:12:55.922655 sshd[1747]: Accepted publickey for core from 20.161.92.111 port 42508 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:12:55.924839 sshd[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:12:55.931657 systemd-logind[1490]: New session 9 of user core.
Mar 4 02:12:55.940783 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 4 02:12:56.238101 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 4 02:12:56.238651 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 4 02:12:56.887989 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 4 02:12:56.888721 (dockerd)[1765]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 4 02:12:57.542309 dockerd[1765]: time="2026-03-04T02:12:57.540936110Z" level=info msg="Starting up"
Mar 4 02:12:57.771641 dockerd[1765]: time="2026-03-04T02:12:57.771269324Z" level=info msg="Loading containers: start."
Mar 4 02:12:57.958916 kernel: Initializing XFRM netlink socket
Mar 4 02:12:58.002855 systemd-timesyncd[1409]: Network configuration changed, trying to establish connection.
Mar 4 02:12:58.089197 systemd-networkd[1438]: docker0: Link UP
Mar 4 02:12:58.109715 dockerd[1765]: time="2026-03-04T02:12:58.109637291Z" level=info msg="Loading containers: done."
Mar 4 02:12:58.136936 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2700763946-merged.mount: Deactivated successfully.
Mar 4 02:12:58.140950 dockerd[1765]: time="2026-03-04T02:12:58.140888482Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 4 02:12:58.141083 dockerd[1765]: time="2026-03-04T02:12:58.141039385Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 4 02:12:58.141264 dockerd[1765]: time="2026-03-04T02:12:58.141239210Z" level=info msg="Daemon has completed initialization"
Mar 4 02:12:58.228028 dockerd[1765]: time="2026-03-04T02:12:58.226634816Z" level=info msg="API listen on /run/docker.sock"
Mar 4 02:12:58.227287 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 4 02:12:59.344241 systemd-timesyncd[1409]: Contacted time server [2a02:6b67:d551:8fed::]:123 (2.flatcar.pool.ntp.org).
Mar 4 02:12:59.344360 systemd-timesyncd[1409]: Initial clock synchronization to Wed 2026-03-04 02:12:59.341935 UTC.
Mar 4 02:12:59.344928 systemd-resolved[1396]: Clock change detected. Flushing caches.
Mar 4 02:13:00.000329 containerd[1512]: time="2026-03-04T02:13:00.000120514Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\""
Mar 4 02:13:00.820304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount720932163.mount: Deactivated successfully.
Mar 4 02:13:03.019886 containerd[1512]: time="2026-03-04T02:13:03.019254380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:13:03.021863 containerd[1512]: time="2026-03-04T02:13:03.021079232Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696475"
Mar 4 02:13:03.022713 containerd[1512]: time="2026-03-04T02:13:03.022675150Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:13:03.028803 containerd[1512]: time="2026-03-04T02:13:03.028764391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:13:03.032262 containerd[1512]: time="2026-03-04T02:13:03.032210517Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 3.031894174s"
Mar 4 02:13:03.032526 containerd[1512]: time="2026-03-04T02:13:03.032492959Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\""
Mar 4 02:13:03.035431 containerd[1512]: time="2026-03-04T02:13:03.035393327Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\""
Mar 4 02:13:04.497678 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 4 02:13:04.509115 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 02:13:04.993038 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 02:13:05.005666 (kubelet)[1973]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 4 02:13:05.206701 kubelet[1973]: E0304 02:13:05.206570 1973 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 4 02:13:05.210802 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 4 02:13:05.211091 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 4 02:13:05.741065 containerd[1512]: time="2026-03-04T02:13:05.740909617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:13:05.743157 containerd[1512]: time="2026-03-04T02:13:05.742996658Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450708"
Mar 4 02:13:05.744876 containerd[1512]: time="2026-03-04T02:13:05.744173997Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:13:05.749679 containerd[1512]: time="2026-03-04T02:13:05.749628338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:13:05.751580 containerd[1512]: time="2026-03-04T02:13:05.751508878Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 2.71589289s"
Mar 4 02:13:05.751699 containerd[1512]: time="2026-03-04T02:13:05.751585343Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\""
Mar 4 02:13:05.753109 containerd[1512]: time="2026-03-04T02:13:05.753073992Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\""
Mar 4 02:13:11.022582 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 4 02:13:11.236878 containerd[1512]: time="2026-03-04T02:13:11.235235945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:13:11.237455 containerd[1512]: time="2026-03-04T02:13:11.236963469Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548437"
Mar 4 02:13:11.238557 containerd[1512]: time="2026-03-04T02:13:11.238521692Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:13:11.243114 containerd[1512]: time="2026-03-04T02:13:11.243071720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:13:11.245325 containerd[1512]: time="2026-03-04T02:13:11.245281500Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 5.492054743s"
Mar 4 02:13:11.245427 containerd[1512]: time="2026-03-04T02:13:11.245331814Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\""
Mar 4 02:13:11.247022 containerd[1512]: time="2026-03-04T02:13:11.246982552Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\""
Mar 4 02:13:14.352509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1029384471.mount: Deactivated successfully.
Mar 4 02:13:14.971726 containerd[1512]: time="2026-03-04T02:13:14.971638965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:13:14.974017 containerd[1512]: time="2026-03-04T02:13:14.973962722Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685320"
Mar 4 02:13:14.975520 containerd[1512]: time="2026-03-04T02:13:14.975427190Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:13:14.983423 containerd[1512]: time="2026-03-04T02:13:14.983343972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:13:14.984466 containerd[1512]: time="2026-03-04T02:13:14.984421520Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 3.737268966s"
Mar 4 02:13:14.984581 containerd[1512]: time="2026-03-04T02:13:14.984471155Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\""
Mar 4 02:13:14.985642 containerd[1512]: time="2026-03-04T02:13:14.985352415Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Mar 4 02:13:15.246903 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 4 02:13:15.261965 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 02:13:15.490810 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 02:13:15.507141 (kubelet)[2004]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 4 02:13:15.570125 kubelet[2004]: E0304 02:13:15.570028 2004 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 4 02:13:15.572688 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 4 02:13:15.572978 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 4 02:13:16.123193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1628099949.mount: Deactivated successfully.
Mar 4 02:13:18.577802 containerd[1512]: time="2026-03-04T02:13:18.577468838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:13:18.579181 containerd[1512]: time="2026-03-04T02:13:18.579133234Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556550"
Mar 4 02:13:18.580774 containerd[1512]: time="2026-03-04T02:13:18.579990779Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:13:18.584487 containerd[1512]: time="2026-03-04T02:13:18.584450200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:13:18.586611 containerd[1512]: time="2026-03-04T02:13:18.586566333Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 3.601169475s"
Mar 4 02:13:18.586780 containerd[1512]: time="2026-03-04T02:13:18.586749490Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Mar 4 02:13:18.588900 containerd[1512]: time="2026-03-04T02:13:18.588869767Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 4 02:13:19.195861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3081254247.mount: Deactivated successfully.
Mar 4 02:13:19.203912 containerd[1512]: time="2026-03-04T02:13:19.203805182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:13:19.205952 containerd[1512]: time="2026-03-04T02:13:19.205829454Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321226"
Mar 4 02:13:19.207086 containerd[1512]: time="2026-03-04T02:13:19.207021784Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:13:19.210068 containerd[1512]: time="2026-03-04T02:13:19.210005469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:13:19.212123 containerd[1512]: time="2026-03-04T02:13:19.211340350Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 622.431097ms"
Mar 4 02:13:19.212123 containerd[1512]: time="2026-03-04T02:13:19.211389924Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 4 02:13:19.212480 containerd[1512]: time="2026-03-04T02:13:19.212212630Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Mar 4 02:13:19.581520 containerd[1512]: time="2026-03-04T02:13:19.581268184Z" level=error msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" failed" error="failed to pull and unpack image \"registry.k8s.io/etcd:3.6.6-0\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://prod-registry-k8s-io-eu-west-1.s3.dualstack.eu-west-1.amazonaws.com/containers/images/sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\": read tcp [2a02:1348:179:8ff4:24:19ff:fee6:3fd2]:40114->[2a05:d030:8000:40::305:43fe]:443: read: connection reset by peer"
Mar 4 02:13:19.581520 containerd[1512]: time="2026-03-04T02:13:19.581298923Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=4243"
Mar 4 02:13:19.583013 containerd[1512]: time="2026-03-04T02:13:19.582117504Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Mar 4 02:13:20.054011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount460527472.mount: Deactivated successfully.
Mar 4 02:13:22.146207 containerd[1512]: time="2026-03-04T02:13:22.146099544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:13:22.150377 containerd[1512]: time="2026-03-04T02:13:22.150311512Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23626288"
Mar 4 02:13:22.152883 containerd[1512]: time="2026-03-04T02:13:22.151932852Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:13:22.334872 containerd[1512]: time="2026-03-04T02:13:22.333157637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:13:22.336334 containerd[1512]: time="2026-03-04T02:13:22.336280585Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 2.754126744s"
Mar 4 02:13:22.336632 containerd[1512]: time="2026-03-04T02:13:22.336475734Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Mar 4 02:13:24.212502 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 02:13:24.235264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 02:13:24.281351 systemd[1]: Reloading requested from client PID 2159 ('systemctl') (unit session-9.scope)...
Mar 4 02:13:24.281726 systemd[1]: Reloading...
Mar 4 02:13:24.484933 zram_generator::config[2198]: No configuration found.
Mar 4 02:13:24.675055 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 02:13:24.791279 systemd[1]: Reloading finished in 508 ms.
Mar 4 02:13:24.881251 systemd[1]: kubelet.service: Deactivated successfully.
Mar 4 02:13:24.881906 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 02:13:24.889270 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 02:13:24.926438 update_engine[1491]: I20260304 02:13:24.924288 1491 update_attempter.cc:509] Updating boot flags...
Mar 4 02:13:25.184638 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2268)
Mar 4 02:13:25.195101 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 02:13:25.210307 (kubelet)[2278]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 4 02:13:25.300248 kubelet[2278]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 4 02:13:25.566773 kubelet[2278]: I0304 02:13:25.566569 2278 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 4 02:13:25.566773 kubelet[2278]: I0304 02:13:25.566716 2278 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 4 02:13:25.567078 kubelet[2278]: I0304 02:13:25.566819 2278 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 4 02:13:25.567078 kubelet[2278]: I0304 02:13:25.566859 2278 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 4 02:13:25.567584 kubelet[2278]: I0304 02:13:25.567506 2278 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 4 02:13:25.586939 kubelet[2278]: E0304 02:13:25.586881 2278 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.63.210:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.63.210:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 4 02:13:25.588509 kubelet[2278]: I0304 02:13:25.587814 2278 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 4 02:13:25.595380 kubelet[2278]: E0304 02:13:25.594909 2278 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 4 02:13:25.595380 kubelet[2278]: I0304 02:13:25.594987 2278 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 4 02:13:25.606233 kubelet[2278]: I0304 02:13:25.606206 2278 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 4 02:13:25.609590 kubelet[2278]: I0304 02:13:25.609552 2278 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 4 02:13:25.611466 kubelet[2278]: I0304 02:13:25.609701 2278 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-323j1.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 4 02:13:25.612456 kubelet[2278]: I0304 02:13:25.611848 2278 topology_manager.go:143] "Creating topology manager with none policy"
Mar 4 02:13:25.612456 kubelet[2278]: I0304 02:13:25.611875 2278 container_manager_linux.go:308] "Creating device plugin manager"
Mar 4 02:13:25.612456 kubelet[2278]: I0304 02:13:25.612117 2278 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 4 02:13:25.613946 kubelet[2278]: I0304 02:13:25.613919 2278 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 4 02:13:25.614561 kubelet[2278]: I0304 02:13:25.614539 2278 kubelet.go:482] "Attempting to sync node with API server"
Mar 4 02:13:25.614703 kubelet[2278]: I0304 02:13:25.614682 2278 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 4 02:13:25.614974 kubelet[2278]: I0304 02:13:25.614952 2278 kubelet.go:394] "Adding apiserver pod source"
Mar 4 02:13:25.615148 kubelet[2278]: I0304 02:13:25.615115 2278 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 4 02:13:25.618470 kubelet[2278]: I0304 02:13:25.618443 2278 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 4 02:13:25.621322 kubelet[2278]: I0304 02:13:25.621296 2278 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 4 02:13:25.621610 kubelet[2278]: I0304 02:13:25.621448 2278 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 4 02:13:25.623892 kubelet[2278]: W0304 02:13:25.622941 2278 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 4 02:13:25.630029 kubelet[2278]: I0304 02:13:25.630001 2278 server.go:1257] "Started kubelet"
Mar 4 02:13:25.634655 kubelet[2278]: I0304 02:13:25.634626 2278 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 4 02:13:25.643722 kubelet[2278]: I0304 02:13:25.643666 2278 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 4 02:13:25.647904 kubelet[2278]: I0304 02:13:25.647873 2278 server.go:317] "Adding debug handlers to kubelet server"
Mar 4 02:13:25.658851 kubelet[2278]: I0304 02:13:25.658748 2278 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 4 02:13:25.659040 kubelet[2278]: I0304 02:13:25.659013 2278 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 4 02:13:25.659467 kubelet[2278]: I0304 02:13:25.659444 2278 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 4 02:13:25.666058 kubelet[2278]: I0304 02:13:25.666030 2278 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 4 02:13:25.671127 kubelet[2278]: I0304 02:13:25.671104 2278 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 4 02:13:25.671597 kubelet[2278]: E0304 02:13:25.671559 2278 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"srv-323j1.gb1.brightbox.com\" not found"
Mar 4 02:13:25.673063 kubelet[2278]: I0304 02:13:25.673038 2278 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 4 02:13:25.673274 kubelet[2278]: I0304 02:13:25.673252 2278 reconciler.go:29] "Reconciler: start to sync state"
Mar 4 02:13:25.674159 kubelet[2278]: E0304 02:13:25.674118 2278 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.63.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-323j1.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.63.210:6443: connect: connection refused" interval="200ms"
Mar 4 02:13:25.677783 kubelet[2278]: E0304 02:13:25.674943 2278 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.63.210:6443/api/v1/namespaces/default/events\": dial tcp 10.230.63.210:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-323j1.gb1.brightbox.com.1899819513d6cb83 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-323j1.gb1.brightbox.com,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-323j1.gb1.brightbox.com,},FirstTimestamp:2026-03-04 02:13:25.629954947 +0000 UTC m=+0.413453774,LastTimestamp:2026-03-04 02:13:25.629954947 +0000 UTC m=+0.413453774,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-323j1.gb1.brightbox.com,}"
Mar 4 02:13:25.678651 kubelet[2278]: I0304 02:13:25.678602 2278 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 4 02:13:25.678784 kubelet[2278]: I0304 02:13:25.678635 2278 factory.go:223] Registration of the systemd container factory successfully
Mar 4 02:13:25.679131 kubelet[2278]: I0304 02:13:25.679101 2278 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 4 02:13:25.680809 kubelet[2278]: I0304 02:13:25.680780 2278 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 4 02:13:25.680926 kubelet[2278]: I0304 02:13:25.680828 2278 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 4 02:13:25.680979 kubelet[2278]: I0304 02:13:25.680937 2278 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 4 02:13:25.681160 kubelet[2278]: E0304 02:13:25.681059 2278 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 4 02:13:25.683831 kubelet[2278]: E0304 02:13:25.683796 2278 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 4 02:13:25.684471 kubelet[2278]: I0304 02:13:25.684444 2278 factory.go:223] Registration of the containerd container factory successfully
Mar 4 02:13:25.764499 kubelet[2278]: I0304 02:13:25.764454 2278 cpu_manager.go:225] "Starting" policy="none"
Mar 4 02:13:25.765413 kubelet[2278]: I0304 02:13:25.764919 2278 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 4 02:13:25.765413 kubelet[2278]: I0304 02:13:25.764975 2278 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 4 02:13:25.770505 kubelet[2278]: I0304 02:13:25.770016 2278 policy_none.go:50] "Start"
Mar 4 02:13:25.770505 kubelet[2278]: I0304 02:13:25.770087 2278 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 4 02:13:25.770505 kubelet[2278]: I0304 02:13:25.770143 2278 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 4 02:13:25.771882 kubelet[2278]: I0304 02:13:25.771856 2278 policy_none.go:44] "Start"
Mar 4 02:13:25.773285 kubelet[2278]: E0304 02:13:25.773224 2278 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"srv-323j1.gb1.brightbox.com\" not found"
Mar 4 02:13:25.781562 kubelet[2278]: E0304 02:13:25.781508 2278 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 4 02:13:25.783472 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 4 02:13:25.809170 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 4 02:13:25.816568 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 4 02:13:25.828129 kubelet[2278]: E0304 02:13:25.827957 2278 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 4 02:13:25.829861 kubelet[2278]: I0304 02:13:25.828347 2278 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 4 02:13:25.829861 kubelet[2278]: I0304 02:13:25.828394 2278 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 4 02:13:25.829861 kubelet[2278]: I0304 02:13:25.829803 2278 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 4 02:13:25.833171 kubelet[2278]: E0304 02:13:25.833098 2278 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring."
err="no imagefs label for configured runtime" Mar 4 02:13:25.833323 kubelet[2278]: E0304 02:13:25.833282 2278 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-323j1.gb1.brightbox.com\" not found" Mar 4 02:13:25.876423 kubelet[2278]: E0304 02:13:25.876324 2278 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.63.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-323j1.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.63.210:6443: connect: connection refused" interval="400ms" Mar 4 02:13:25.932905 kubelet[2278]: I0304 02:13:25.932647 2278 kubelet_node_status.go:74] "Attempting to register node" node="srv-323j1.gb1.brightbox.com" Mar 4 02:13:25.933314 kubelet[2278]: E0304 02:13:25.933209 2278 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.230.63.210:6443/api/v1/nodes\": dial tcp 10.230.63.210:6443: connect: connection refused" node="srv-323j1.gb1.brightbox.com" Mar 4 02:13:26.000209 systemd[1]: Created slice kubepods-burstable-pod3632744946b84438e8707e26e4410908.slice - libcontainer container kubepods-burstable-pod3632744946b84438e8707e26e4410908.slice. Mar 4 02:13:26.013115 kubelet[2278]: E0304 02:13:26.013076 2278 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-323j1.gb1.brightbox.com\" not found" node="srv-323j1.gb1.brightbox.com" Mar 4 02:13:26.021310 systemd[1]: Created slice kubepods-burstable-podf7cc0df97477f74b8c27ba6487f8b250.slice - libcontainer container kubepods-burstable-podf7cc0df97477f74b8c27ba6487f8b250.slice. 
Mar 4 02:13:26.034920 kubelet[2278]: E0304 02:13:26.034204 2278 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-323j1.gb1.brightbox.com\" not found" node="srv-323j1.gb1.brightbox.com" Mar 4 02:13:26.038236 systemd[1]: Created slice kubepods-burstable-podc178748dcdc0a7b6d1b3b0a2f396fe48.slice - libcontainer container kubepods-burstable-podc178748dcdc0a7b6d1b3b0a2f396fe48.slice. Mar 4 02:13:26.041612 kubelet[2278]: E0304 02:13:26.041317 2278 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-323j1.gb1.brightbox.com\" not found" node="srv-323j1.gb1.brightbox.com" Mar 4 02:13:26.076014 kubelet[2278]: I0304 02:13:26.075827 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3632744946b84438e8707e26e4410908-k8s-certs\") pod \"kube-apiserver-srv-323j1.gb1.brightbox.com\" (UID: \"3632744946b84438e8707e26e4410908\") " pod="kube-system/kube-apiserver-srv-323j1.gb1.brightbox.com" Mar 4 02:13:26.076014 kubelet[2278]: I0304 02:13:26.075933 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f7cc0df97477f74b8c27ba6487f8b250-ca-certs\") pod \"kube-controller-manager-srv-323j1.gb1.brightbox.com\" (UID: \"f7cc0df97477f74b8c27ba6487f8b250\") " pod="kube-system/kube-controller-manager-srv-323j1.gb1.brightbox.com" Mar 4 02:13:26.076361 kubelet[2278]: I0304 02:13:26.076047 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f7cc0df97477f74b8c27ba6487f8b250-flexvolume-dir\") pod \"kube-controller-manager-srv-323j1.gb1.brightbox.com\" (UID: \"f7cc0df97477f74b8c27ba6487f8b250\") " pod="kube-system/kube-controller-manager-srv-323j1.gb1.brightbox.com" Mar 4 
02:13:26.076361 kubelet[2278]: I0304 02:13:26.076100 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f7cc0df97477f74b8c27ba6487f8b250-k8s-certs\") pod \"kube-controller-manager-srv-323j1.gb1.brightbox.com\" (UID: \"f7cc0df97477f74b8c27ba6487f8b250\") " pod="kube-system/kube-controller-manager-srv-323j1.gb1.brightbox.com" Mar 4 02:13:26.076361 kubelet[2278]: I0304 02:13:26.076156 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7cc0df97477f74b8c27ba6487f8b250-kubeconfig\") pod \"kube-controller-manager-srv-323j1.gb1.brightbox.com\" (UID: \"f7cc0df97477f74b8c27ba6487f8b250\") " pod="kube-system/kube-controller-manager-srv-323j1.gb1.brightbox.com" Mar 4 02:13:26.076361 kubelet[2278]: I0304 02:13:26.076187 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c178748dcdc0a7b6d1b3b0a2f396fe48-kubeconfig\") pod \"kube-scheduler-srv-323j1.gb1.brightbox.com\" (UID: \"c178748dcdc0a7b6d1b3b0a2f396fe48\") " pod="kube-system/kube-scheduler-srv-323j1.gb1.brightbox.com" Mar 4 02:13:26.076361 kubelet[2278]: I0304 02:13:26.076243 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3632744946b84438e8707e26e4410908-ca-certs\") pod \"kube-apiserver-srv-323j1.gb1.brightbox.com\" (UID: \"3632744946b84438e8707e26e4410908\") " pod="kube-system/kube-apiserver-srv-323j1.gb1.brightbox.com" Mar 4 02:13:26.076668 kubelet[2278]: I0304 02:13:26.076308 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3632744946b84438e8707e26e4410908-usr-share-ca-certificates\") pod 
\"kube-apiserver-srv-323j1.gb1.brightbox.com\" (UID: \"3632744946b84438e8707e26e4410908\") " pod="kube-system/kube-apiserver-srv-323j1.gb1.brightbox.com" Mar 4 02:13:26.076668 kubelet[2278]: I0304 02:13:26.076370 2278 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f7cc0df97477f74b8c27ba6487f8b250-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-323j1.gb1.brightbox.com\" (UID: \"f7cc0df97477f74b8c27ba6487f8b250\") " pod="kube-system/kube-controller-manager-srv-323j1.gb1.brightbox.com" Mar 4 02:13:26.137530 kubelet[2278]: I0304 02:13:26.136780 2278 kubelet_node_status.go:74] "Attempting to register node" node="srv-323j1.gb1.brightbox.com" Mar 4 02:13:26.137530 kubelet[2278]: E0304 02:13:26.137261 2278 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.230.63.210:6443/api/v1/nodes\": dial tcp 10.230.63.210:6443: connect: connection refused" node="srv-323j1.gb1.brightbox.com" Mar 4 02:13:26.277330 kubelet[2278]: E0304 02:13:26.277252 2278 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.63.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-323j1.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.63.210:6443: connect: connection refused" interval="800ms" Mar 4 02:13:26.319094 containerd[1512]: time="2026-03-04T02:13:26.318964962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-323j1.gb1.brightbox.com,Uid:3632744946b84438e8707e26e4410908,Namespace:kube-system,Attempt:0,}" Mar 4 02:13:26.337618 containerd[1512]: time="2026-03-04T02:13:26.337246204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-323j1.gb1.brightbox.com,Uid:f7cc0df97477f74b8c27ba6487f8b250,Namespace:kube-system,Attempt:0,}" Mar 4 02:13:26.344854 containerd[1512]: time="2026-03-04T02:13:26.344402822Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-323j1.gb1.brightbox.com,Uid:c178748dcdc0a7b6d1b3b0a2f396fe48,Namespace:kube-system,Attempt:0,}" Mar 4 02:13:26.540273 kubelet[2278]: I0304 02:13:26.540216 2278 kubelet_node_status.go:74] "Attempting to register node" node="srv-323j1.gb1.brightbox.com" Mar 4 02:13:26.541288 kubelet[2278]: E0304 02:13:26.540803 2278 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.230.63.210:6443/api/v1/nodes\": dial tcp 10.230.63.210:6443: connect: connection refused" node="srv-323j1.gb1.brightbox.com" Mar 4 02:13:26.996003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount641356841.mount: Deactivated successfully. Mar 4 02:13:27.030696 containerd[1512]: time="2026-03-04T02:13:27.030574953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 02:13:27.034803 containerd[1512]: time="2026-03-04T02:13:27.034737300Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 02:13:27.037309 containerd[1512]: time="2026-03-04T02:13:27.037245077Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Mar 4 02:13:27.039787 containerd[1512]: time="2026-03-04T02:13:27.039717369Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 4 02:13:27.043502 containerd[1512]: time="2026-03-04T02:13:27.042219519Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 02:13:27.043826 containerd[1512]: time="2026-03-04T02:13:27.043757397Z" level=info msg="ImageCreate event 
name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 02:13:27.044918 containerd[1512]: time="2026-03-04T02:13:27.044783720Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 4 02:13:27.052867 containerd[1512]: time="2026-03-04T02:13:27.052154516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 4 02:13:27.055875 containerd[1512]: time="2026-03-04T02:13:27.055583881Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 736.304674ms" Mar 4 02:13:27.060741 containerd[1512]: time="2026-03-04T02:13:27.060054926Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 722.733919ms" Mar 4 02:13:27.061385 containerd[1512]: time="2026-03-04T02:13:27.061189865Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 716.674323ms" Mar 4 02:13:27.079234 kubelet[2278]: E0304 
02:13:27.078821 2278 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.63.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-323j1.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.63.210:6443: connect: connection refused" interval="1.6s" Mar 4 02:13:27.347934 kubelet[2278]: I0304 02:13:27.347147 2278 kubelet_node_status.go:74] "Attempting to register node" node="srv-323j1.gb1.brightbox.com" Mar 4 02:13:27.347934 kubelet[2278]: E0304 02:13:27.347707 2278 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.230.63.210:6443/api/v1/nodes\": dial tcp 10.230.63.210:6443: connect: connection refused" node="srv-323j1.gb1.brightbox.com" Mar 4 02:13:27.366054 containerd[1512]: time="2026-03-04T02:13:27.364740112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 02:13:27.366054 containerd[1512]: time="2026-03-04T02:13:27.364874679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 02:13:27.366054 containerd[1512]: time="2026-03-04T02:13:27.364903906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:13:27.367737 containerd[1512]: time="2026-03-04T02:13:27.367637922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:13:27.397975 containerd[1512]: time="2026-03-04T02:13:27.397659442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 02:13:27.397975 containerd[1512]: time="2026-03-04T02:13:27.397767732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 02:13:27.397975 containerd[1512]: time="2026-03-04T02:13:27.397810567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:13:27.398533 containerd[1512]: time="2026-03-04T02:13:27.398238529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:13:27.405158 containerd[1512]: time="2026-03-04T02:13:27.405044454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 02:13:27.405856 containerd[1512]: time="2026-03-04T02:13:27.405295393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 02:13:27.405856 containerd[1512]: time="2026-03-04T02:13:27.405493497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:13:27.413477 containerd[1512]: time="2026-03-04T02:13:27.411028415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:13:27.426127 systemd[1]: Started cri-containerd-af9f2bd550d67af24048a8769ec0507a291ef480f4f6e669b16477662b714882.scope - libcontainer container af9f2bd550d67af24048a8769ec0507a291ef480f4f6e669b16477662b714882. Mar 4 02:13:27.485134 systemd[1]: Started cri-containerd-93be8e31d67805e0e075f5fcee19e61fe79e4c52cf81c8ee437a94f4900ecf75.scope - libcontainer container 93be8e31d67805e0e075f5fcee19e61fe79e4c52cf81c8ee437a94f4900ecf75. Mar 4 02:13:27.491754 systemd[1]: Started cri-containerd-d5afb202a942708e56b8c07f2aded504befece612add43e3cdd5c5d3bd8cb0ed.scope - libcontainer container d5afb202a942708e56b8c07f2aded504befece612add43e3cdd5c5d3bd8cb0ed. 
Mar 4 02:13:27.587930 containerd[1512]: time="2026-03-04T02:13:27.587512797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-323j1.gb1.brightbox.com,Uid:3632744946b84438e8707e26e4410908,Namespace:kube-system,Attempt:0,} returns sandbox id \"af9f2bd550d67af24048a8769ec0507a291ef480f4f6e669b16477662b714882\"" Mar 4 02:13:27.616405 containerd[1512]: time="2026-03-04T02:13:27.616130811Z" level=info msg="CreateContainer within sandbox \"af9f2bd550d67af24048a8769ec0507a291ef480f4f6e669b16477662b714882\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 4 02:13:27.637707 containerd[1512]: time="2026-03-04T02:13:27.637631785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-323j1.gb1.brightbox.com,Uid:c178748dcdc0a7b6d1b3b0a2f396fe48,Namespace:kube-system,Attempt:0,} returns sandbox id \"93be8e31d67805e0e075f5fcee19e61fe79e4c52cf81c8ee437a94f4900ecf75\"" Mar 4 02:13:27.645309 containerd[1512]: time="2026-03-04T02:13:27.645261005Z" level=info msg="CreateContainer within sandbox \"93be8e31d67805e0e075f5fcee19e61fe79e4c52cf81c8ee437a94f4900ecf75\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 4 02:13:27.647409 containerd[1512]: time="2026-03-04T02:13:27.647362849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-323j1.gb1.brightbox.com,Uid:f7cc0df97477f74b8c27ba6487f8b250,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5afb202a942708e56b8c07f2aded504befece612add43e3cdd5c5d3bd8cb0ed\"" Mar 4 02:13:27.652746 containerd[1512]: time="2026-03-04T02:13:27.652631783Z" level=info msg="CreateContainer within sandbox \"af9f2bd550d67af24048a8769ec0507a291ef480f4f6e669b16477662b714882\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9fff3f170af7de0ee879de8931894b2f40fe8ba8aa56f75f581f3f43ccf55424\"" Mar 4 02:13:27.653360 containerd[1512]: time="2026-03-04T02:13:27.653315965Z" level=info msg="StartContainer 
for \"9fff3f170af7de0ee879de8931894b2f40fe8ba8aa56f75f581f3f43ccf55424\"" Mar 4 02:13:27.661505 containerd[1512]: time="2026-03-04T02:13:27.661462633Z" level=info msg="CreateContainer within sandbox \"d5afb202a942708e56b8c07f2aded504befece612add43e3cdd5c5d3bd8cb0ed\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 4 02:13:27.684815 kubelet[2278]: E0304 02:13:27.684760 2278 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.63.210:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.63.210:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 4 02:13:27.693254 containerd[1512]: time="2026-03-04T02:13:27.693054588Z" level=info msg="CreateContainer within sandbox \"d5afb202a942708e56b8c07f2aded504befece612add43e3cdd5c5d3bd8cb0ed\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8700ff2cc947188e80a9a3e4bc9194e74f818832f2d49c96065baf47c3d29f5a\"" Mar 4 02:13:27.694331 containerd[1512]: time="2026-03-04T02:13:27.694301870Z" level=info msg="StartContainer for \"8700ff2cc947188e80a9a3e4bc9194e74f818832f2d49c96065baf47c3d29f5a\"" Mar 4 02:13:27.695872 containerd[1512]: time="2026-03-04T02:13:27.695585642Z" level=info msg="CreateContainer within sandbox \"93be8e31d67805e0e075f5fcee19e61fe79e4c52cf81c8ee437a94f4900ecf75\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"35734e2f6347933f77fe174d50d950c1bb96f95d5b17336c88b0dcc265abaa7b\"" Mar 4 02:13:27.696652 containerd[1512]: time="2026-03-04T02:13:27.696615959Z" level=info msg="StartContainer for \"35734e2f6347933f77fe174d50d950c1bb96f95d5b17336c88b0dcc265abaa7b\"" Mar 4 02:13:27.707495 systemd[1]: Started cri-containerd-9fff3f170af7de0ee879de8931894b2f40fe8ba8aa56f75f581f3f43ccf55424.scope - libcontainer container 
9fff3f170af7de0ee879de8931894b2f40fe8ba8aa56f75f581f3f43ccf55424. Mar 4 02:13:27.769063 systemd[1]: Started cri-containerd-8700ff2cc947188e80a9a3e4bc9194e74f818832f2d49c96065baf47c3d29f5a.scope - libcontainer container 8700ff2cc947188e80a9a3e4bc9194e74f818832f2d49c96065baf47c3d29f5a. Mar 4 02:13:27.783028 systemd[1]: Started cri-containerd-35734e2f6347933f77fe174d50d950c1bb96f95d5b17336c88b0dcc265abaa7b.scope - libcontainer container 35734e2f6347933f77fe174d50d950c1bb96f95d5b17336c88b0dcc265abaa7b. Mar 4 02:13:27.823615 containerd[1512]: time="2026-03-04T02:13:27.823212953Z" level=info msg="StartContainer for \"9fff3f170af7de0ee879de8931894b2f40fe8ba8aa56f75f581f3f43ccf55424\" returns successfully" Mar 4 02:13:27.912114 containerd[1512]: time="2026-03-04T02:13:27.910727956Z" level=info msg="StartContainer for \"35734e2f6347933f77fe174d50d950c1bb96f95d5b17336c88b0dcc265abaa7b\" returns successfully" Mar 4 02:13:27.916202 containerd[1512]: time="2026-03-04T02:13:27.916075861Z" level=info msg="StartContainer for \"8700ff2cc947188e80a9a3e4bc9194e74f818832f2d49c96065baf47c3d29f5a\" returns successfully" Mar 4 02:13:28.735235 kubelet[2278]: E0304 02:13:28.733375 2278 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-323j1.gb1.brightbox.com\" not found" node="srv-323j1.gb1.brightbox.com" Mar 4 02:13:28.736982 kubelet[2278]: E0304 02:13:28.736155 2278 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-323j1.gb1.brightbox.com\" not found" node="srv-323j1.gb1.brightbox.com" Mar 4 02:13:28.739577 kubelet[2278]: E0304 02:13:28.739339 2278 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-323j1.gb1.brightbox.com\" not found" node="srv-323j1.gb1.brightbox.com" Mar 4 02:13:28.952197 kubelet[2278]: I0304 02:13:28.952161 2278 kubelet_node_status.go:74] "Attempting to register node" 
node="srv-323j1.gb1.brightbox.com" Mar 4 02:13:29.744239 kubelet[2278]: E0304 02:13:29.742789 2278 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-323j1.gb1.brightbox.com\" not found" node="srv-323j1.gb1.brightbox.com" Mar 4 02:13:29.746069 kubelet[2278]: E0304 02:13:29.745329 2278 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-323j1.gb1.brightbox.com\" not found" node="srv-323j1.gb1.brightbox.com" Mar 4 02:13:29.746552 kubelet[2278]: E0304 02:13:29.746380 2278 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-323j1.gb1.brightbox.com\" not found" node="srv-323j1.gb1.brightbox.com" Mar 4 02:13:30.100105 kubelet[2278]: E0304 02:13:30.100043 2278 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-323j1.gb1.brightbox.com\" not found" node="srv-323j1.gb1.brightbox.com" Mar 4 02:13:30.193872 kubelet[2278]: I0304 02:13:30.193585 2278 kubelet_node_status.go:77] "Successfully registered node" node="srv-323j1.gb1.brightbox.com" Mar 4 02:13:30.272538 kubelet[2278]: I0304 02:13:30.272475 2278 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-323j1.gb1.brightbox.com" Mar 4 02:13:30.280897 kubelet[2278]: E0304 02:13:30.280665 2278 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-323j1.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-323j1.gb1.brightbox.com" Mar 4 02:13:30.280897 kubelet[2278]: I0304 02:13:30.280707 2278 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-323j1.gb1.brightbox.com" Mar 4 02:13:30.283461 kubelet[2278]: E0304 02:13:30.283175 2278 kubelet.go:3342] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-srv-323j1.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-323j1.gb1.brightbox.com" Mar 4 02:13:30.283461 kubelet[2278]: I0304 02:13:30.283210 2278 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-323j1.gb1.brightbox.com" Mar 4 02:13:30.284984 kubelet[2278]: E0304 02:13:30.284930 2278 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-323j1.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-323j1.gb1.brightbox.com" Mar 4 02:13:30.620797 kubelet[2278]: I0304 02:13:30.620729 2278 apiserver.go:52] "Watching apiserver" Mar 4 02:13:30.673664 kubelet[2278]: I0304 02:13:30.673550 2278 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 4 02:13:30.741705 kubelet[2278]: I0304 02:13:30.740326 2278 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-323j1.gb1.brightbox.com" Mar 4 02:13:30.741705 kubelet[2278]: I0304 02:13:30.740917 2278 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-323j1.gb1.brightbox.com" Mar 4 02:13:30.741705 kubelet[2278]: I0304 02:13:30.741027 2278 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-323j1.gb1.brightbox.com" Mar 4 02:13:30.745274 kubelet[2278]: E0304 02:13:30.745238 2278 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-323j1.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-323j1.gb1.brightbox.com" Mar 4 02:13:30.746163 kubelet[2278]: E0304 02:13:30.746118 2278 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-323j1.gb1.brightbox.com\" is forbidden: no 
PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-323j1.gb1.brightbox.com" Mar 4 02:13:30.746472 kubelet[2278]: E0304 02:13:30.746430 2278 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-323j1.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-323j1.gb1.brightbox.com" Mar 4 02:13:31.745885 kubelet[2278]: I0304 02:13:31.744675 2278 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-323j1.gb1.brightbox.com" Mar 4 02:13:31.757868 kubelet[2278]: I0304 02:13:31.757346 2278 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 4 02:13:32.234146 kubelet[2278]: I0304 02:13:32.233229 2278 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-323j1.gb1.brightbox.com" Mar 4 02:13:32.243003 kubelet[2278]: I0304 02:13:32.242419 2278 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 4 02:13:32.264077 systemd[1]: Reloading requested from client PID 2566 ('systemctl') (unit session-9.scope)... Mar 4 02:13:32.264121 systemd[1]: Reloading... Mar 4 02:13:32.396030 zram_generator::config[2605]: No configuration found. Mar 4 02:13:32.593479 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 4 02:13:32.735771 systemd[1]: Reloading finished in 470 ms. Mar 4 02:13:32.811662 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 02:13:32.823477 systemd[1]: kubelet.service: Deactivated successfully. 
Mar 4 02:13:32.824119 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 02:13:32.824220 systemd[1]: kubelet.service: Consumed 1.068s CPU time, 120.5M memory peak, 0B memory swap peak. Mar 4 02:13:32.832788 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 02:13:33.078078 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 02:13:33.088502 (kubelet)[2669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 4 02:13:33.221928 kubelet[2669]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 4 02:13:33.243787 kubelet[2669]: I0304 02:13:33.242145 2669 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 4 02:13:33.243787 kubelet[2669]: I0304 02:13:33.242213 2669 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 4 02:13:33.243787 kubelet[2669]: I0304 02:13:33.242248 2669 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 4 02:13:33.243787 kubelet[2669]: I0304 02:13:33.242259 2669 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 4 02:13:33.246432 kubelet[2669]: I0304 02:13:33.244126 2669 server.go:951] "Client rotation is on, will bootstrap in background" Mar 4 02:13:33.247356 kubelet[2669]: I0304 02:13:33.246919 2669 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 4 02:13:33.252891 kubelet[2669]: I0304 02:13:33.250176 2669 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 4 02:13:33.264205 kubelet[2669]: E0304 02:13:33.263609 2669 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 4 02:13:33.264205 kubelet[2669]: I0304 02:13:33.263693 2669 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 4 02:13:33.275909 kubelet[2669]: I0304 02:13:33.273877 2669 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 4 02:13:33.276238 kubelet[2669]: I0304 02:13:33.276134 2669 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 4 02:13:33.276760 kubelet[2669]: I0304 02:13:33.276268 2669 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-323j1.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 4 02:13:33.276977 kubelet[2669]: I0304 02:13:33.276762 2669 topology_manager.go:143] "Creating topology manager with none policy" Mar 4 
02:13:33.276977 kubelet[2669]: I0304 02:13:33.276806 2669 container_manager_linux.go:308] "Creating device plugin manager" Mar 4 02:13:33.276977 kubelet[2669]: I0304 02:13:33.276904 2669 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 4 02:13:33.278176 kubelet[2669]: I0304 02:13:33.277384 2669 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 4 02:13:33.278176 kubelet[2669]: I0304 02:13:33.277812 2669 kubelet.go:482] "Attempting to sync node with API server" Mar 4 02:13:33.279147 sudo[2683]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 4 02:13:33.281390 kubelet[2669]: I0304 02:13:33.279872 2669 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 4 02:13:33.281390 kubelet[2669]: I0304 02:13:33.279945 2669 kubelet.go:394] "Adding apiserver pod source" Mar 4 02:13:33.280591 sudo[2683]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 4 02:13:33.283422 kubelet[2669]: I0304 02:13:33.281864 2669 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 4 02:13:33.300193 kubelet[2669]: I0304 02:13:33.297540 2669 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 4 02:13:33.306675 kubelet[2669]: I0304 02:13:33.303042 2669 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 4 02:13:33.306675 kubelet[2669]: I0304 02:13:33.303089 2669 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 4 02:13:33.322220 kubelet[2669]: I0304 02:13:33.322182 2669 server.go:1257] "Started kubelet" Mar 4 02:13:33.323014 kubelet[2669]: I0304 02:13:33.322955 2669 ratelimit.go:56] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Mar 4 02:13:33.323199 kubelet[2669]: I0304 02:13:33.323172 2669 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 4 02:13:33.323783 kubelet[2669]: I0304 02:13:33.323760 2669 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 4 02:13:33.325918 kubelet[2669]: I0304 02:13:33.323962 2669 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 4 02:13:33.326525 kubelet[2669]: I0304 02:13:33.326422 2669 server.go:317] "Adding debug handlers to kubelet server" Mar 4 02:13:33.338084 kubelet[2669]: I0304 02:13:33.337923 2669 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 4 02:13:33.347522 kubelet[2669]: I0304 02:13:33.347297 2669 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 4 02:13:33.351865 kubelet[2669]: I0304 02:13:33.350551 2669 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 4 02:13:33.351865 kubelet[2669]: I0304 02:13:33.351563 2669 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 4 02:13:33.351865 kubelet[2669]: I0304 02:13:33.351770 2669 reconciler.go:29] "Reconciler: start to sync state" Mar 4 02:13:33.368942 kubelet[2669]: I0304 02:13:33.368340 2669 factory.go:223] Registration of the systemd container factory successfully Mar 4 02:13:33.368942 kubelet[2669]: I0304 02:13:33.368520 2669 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 4 02:13:33.378349 kubelet[2669]: I0304 02:13:33.377464 2669 factory.go:223] Registration of the containerd container factory successfully Mar 4 02:13:33.380179 kubelet[2669]: I0304 02:13:33.380143 2669 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 4 02:13:33.417100 kubelet[2669]: I0304 02:13:33.417030 2669 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 4 02:13:33.417100 kubelet[2669]: I0304 02:13:33.417080 2669 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 4 02:13:33.417100 kubelet[2669]: I0304 02:13:33.417111 2669 kubelet.go:2501] "Starting kubelet main sync loop" Mar 4 02:13:33.423317 kubelet[2669]: E0304 02:13:33.422801 2669 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 4 02:13:33.446084 kubelet[2669]: E0304 02:13:33.446045 2669 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 4 02:13:33.523071 kubelet[2669]: E0304 02:13:33.523032 2669 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 4 02:13:33.550793 kubelet[2669]: I0304 02:13:33.550754 2669 cpu_manager.go:225] "Starting" policy="none" Mar 4 02:13:33.551081 kubelet[2669]: I0304 02:13:33.551045 2669 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 4 02:13:33.551218 kubelet[2669]: I0304 02:13:33.551197 2669 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 4 02:13:33.551652 kubelet[2669]: I0304 02:13:33.551611 2669 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 4 02:13:33.551827 kubelet[2669]: I0304 02:13:33.551777 2669 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 4 02:13:33.552495 kubelet[2669]: I0304 02:13:33.551950 2669 policy_none.go:50] "Start" Mar 4 02:13:33.552495 kubelet[2669]: I0304 02:13:33.551975 2669 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 4 
02:13:33.552495 kubelet[2669]: I0304 02:13:33.551996 2669 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 4 02:13:33.552495 kubelet[2669]: I0304 02:13:33.552216 2669 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 4 02:13:33.552495 kubelet[2669]: I0304 02:13:33.552246 2669 policy_none.go:44] "Start" Mar 4 02:13:33.562327 kubelet[2669]: E0304 02:13:33.562298 2669 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 4 02:13:33.563966 kubelet[2669]: I0304 02:13:33.563944 2669 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 4 02:13:33.565122 kubelet[2669]: I0304 02:13:33.564070 2669 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 4 02:13:33.565122 kubelet[2669]: I0304 02:13:33.564909 2669 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 4 02:13:33.570228 kubelet[2669]: E0304 02:13:33.570200 2669 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 4 02:13:33.697752 kubelet[2669]: I0304 02:13:33.697600 2669 kubelet_node_status.go:74] "Attempting to register node" node="srv-323j1.gb1.brightbox.com" Mar 4 02:13:33.713450 kubelet[2669]: I0304 02:13:33.713420 2669 kubelet_node_status.go:123] "Node was previously registered" node="srv-323j1.gb1.brightbox.com" Mar 4 02:13:33.713791 kubelet[2669]: I0304 02:13:33.713755 2669 kubelet_node_status.go:77] "Successfully registered node" node="srv-323j1.gb1.brightbox.com" Mar 4 02:13:33.726593 kubelet[2669]: I0304 02:13:33.726484 2669 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-323j1.gb1.brightbox.com" Mar 4 02:13:33.727203 kubelet[2669]: I0304 02:13:33.727088 2669 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-323j1.gb1.brightbox.com" Mar 4 02:13:33.740410 kubelet[2669]: I0304 02:13:33.738984 2669 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-323j1.gb1.brightbox.com" Mar 4 02:13:33.740410 kubelet[2669]: I0304 02:13:33.739509 2669 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 4 02:13:33.750195 kubelet[2669]: I0304 02:13:33.750164 2669 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 4 02:13:33.750431 kubelet[2669]: E0304 02:13:33.750404 2669 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-323j1.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-323j1.gb1.brightbox.com" Mar 4 02:13:33.752166 kubelet[2669]: I0304 02:13:33.752124 2669 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: 
[must not contain dots]" Mar 4 02:13:33.752307 kubelet[2669]: E0304 02:13:33.752283 2669 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-323j1.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-323j1.gb1.brightbox.com" Mar 4 02:13:33.756250 kubelet[2669]: I0304 02:13:33.756213 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f7cc0df97477f74b8c27ba6487f8b250-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-323j1.gb1.brightbox.com\" (UID: \"f7cc0df97477f74b8c27ba6487f8b250\") " pod="kube-system/kube-controller-manager-srv-323j1.gb1.brightbox.com" Mar 4 02:13:33.756433 kubelet[2669]: I0304 02:13:33.756407 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3632744946b84438e8707e26e4410908-k8s-certs\") pod \"kube-apiserver-srv-323j1.gb1.brightbox.com\" (UID: \"3632744946b84438e8707e26e4410908\") " pod="kube-system/kube-apiserver-srv-323j1.gb1.brightbox.com" Mar 4 02:13:33.756614 kubelet[2669]: I0304 02:13:33.756584 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3632744946b84438e8707e26e4410908-usr-share-ca-certificates\") pod \"kube-apiserver-srv-323j1.gb1.brightbox.com\" (UID: \"3632744946b84438e8707e26e4410908\") " pod="kube-system/kube-apiserver-srv-323j1.gb1.brightbox.com" Mar 4 02:13:33.757051 kubelet[2669]: I0304 02:13:33.756787 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f7cc0df97477f74b8c27ba6487f8b250-k8s-certs\") pod \"kube-controller-manager-srv-323j1.gb1.brightbox.com\" (UID: \"f7cc0df97477f74b8c27ba6487f8b250\") " 
pod="kube-system/kube-controller-manager-srv-323j1.gb1.brightbox.com" Mar 4 02:13:33.757051 kubelet[2669]: I0304 02:13:33.756850 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7cc0df97477f74b8c27ba6487f8b250-kubeconfig\") pod \"kube-controller-manager-srv-323j1.gb1.brightbox.com\" (UID: \"f7cc0df97477f74b8c27ba6487f8b250\") " pod="kube-system/kube-controller-manager-srv-323j1.gb1.brightbox.com" Mar 4 02:13:33.757051 kubelet[2669]: I0304 02:13:33.756883 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c178748dcdc0a7b6d1b3b0a2f396fe48-kubeconfig\") pod \"kube-scheduler-srv-323j1.gb1.brightbox.com\" (UID: \"c178748dcdc0a7b6d1b3b0a2f396fe48\") " pod="kube-system/kube-scheduler-srv-323j1.gb1.brightbox.com" Mar 4 02:13:33.757051 kubelet[2669]: I0304 02:13:33.756913 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3632744946b84438e8707e26e4410908-ca-certs\") pod \"kube-apiserver-srv-323j1.gb1.brightbox.com\" (UID: \"3632744946b84438e8707e26e4410908\") " pod="kube-system/kube-apiserver-srv-323j1.gb1.brightbox.com" Mar 4 02:13:33.757051 kubelet[2669]: I0304 02:13:33.756940 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f7cc0df97477f74b8c27ba6487f8b250-ca-certs\") pod \"kube-controller-manager-srv-323j1.gb1.brightbox.com\" (UID: \"f7cc0df97477f74b8c27ba6487f8b250\") " pod="kube-system/kube-controller-manager-srv-323j1.gb1.brightbox.com" Mar 4 02:13:33.757314 kubelet[2669]: I0304 02:13:33.757013 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/f7cc0df97477f74b8c27ba6487f8b250-flexvolume-dir\") pod \"kube-controller-manager-srv-323j1.gb1.brightbox.com\" (UID: \"f7cc0df97477f74b8c27ba6487f8b250\") " pod="kube-system/kube-controller-manager-srv-323j1.gb1.brightbox.com" Mar 4 02:13:34.150691 sudo[2683]: pam_unix(sudo:session): session closed for user root Mar 4 02:13:34.297068 kubelet[2669]: I0304 02:13:34.296588 2669 apiserver.go:52] "Watching apiserver" Mar 4 02:13:34.352281 kubelet[2669]: I0304 02:13:34.352236 2669 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 4 02:13:34.484286 kubelet[2669]: I0304 02:13:34.483751 2669 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-323j1.gb1.brightbox.com" Mar 4 02:13:34.499881 kubelet[2669]: I0304 02:13:34.499710 2669 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 4 02:13:34.501438 kubelet[2669]: E0304 02:13:34.501409 2669 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-323j1.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-323j1.gb1.brightbox.com" Mar 4 02:13:34.549939 kubelet[2669]: I0304 02:13:34.546904 2669 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-323j1.gb1.brightbox.com" podStartSLOduration=3.546869115 podStartE2EDuration="3.546869115s" podCreationTimestamp="2026-03-04 02:13:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 02:13:34.532762992 +0000 UTC m=+1.437555008" watchObservedRunningTime="2026-03-04 02:13:34.546869115 +0000 UTC m=+1.451661122" Mar 4 02:13:34.550494 kubelet[2669]: I0304 02:13:34.549996 2669 pod_startup_latency_tracker.go:108] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-srv-323j1.gb1.brightbox.com" podStartSLOduration=1.549986356 podStartE2EDuration="1.549986356s" podCreationTimestamp="2026-03-04 02:13:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 02:13:34.546820039 +0000 UTC m=+1.451612058" watchObservedRunningTime="2026-03-04 02:13:34.549986356 +0000 UTC m=+1.454778370" Mar 4 02:13:34.605364 kubelet[2669]: I0304 02:13:34.604134 2669 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-323j1.gb1.brightbox.com" podStartSLOduration=2.604115067 podStartE2EDuration="2.604115067s" podCreationTimestamp="2026-03-04 02:13:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 02:13:34.577600448 +0000 UTC m=+1.482392466" watchObservedRunningTime="2026-03-04 02:13:34.604115067 +0000 UTC m=+1.508907075" Mar 4 02:13:36.058379 sudo[1750]: pam_unix(sudo:session): session closed for user root Mar 4 02:13:36.152076 sshd[1747]: pam_unix(sshd:session): session closed for user core Mar 4 02:13:36.157535 systemd[1]: sshd@6-10.230.63.210:22-20.161.92.111:42508.service: Deactivated successfully. Mar 4 02:13:36.161527 systemd[1]: session-9.scope: Deactivated successfully. Mar 4 02:13:36.162023 systemd[1]: session-9.scope: Consumed 5.080s CPU time, 154.8M memory peak, 0B memory swap peak. Mar 4 02:13:36.164506 systemd-logind[1490]: Session 9 logged out. Waiting for processes to exit. Mar 4 02:13:36.166604 systemd-logind[1490]: Removed session 9. 
Mar 4 02:13:38.077397 kubelet[2669]: I0304 02:13:38.077199 2669 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 4 02:13:38.078057 containerd[1512]: time="2026-03-04T02:13:38.077919412Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 4 02:13:38.078460 kubelet[2669]: I0304 02:13:38.078187 2669 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 4 02:13:39.016432 systemd[1]: Created slice kubepods-besteffort-podffbda6ac_8aca_4187_a92f_631782795497.slice - libcontainer container kubepods-besteffort-podffbda6ac_8aca_4187_a92f_631782795497.slice. Mar 4 02:13:39.055566 systemd[1]: Created slice kubepods-burstable-pod0b89d62d_ed62_44db_87a6_a787e04c7162.slice - libcontainer container kubepods-burstable-pod0b89d62d_ed62_44db_87a6_a787e04c7162.slice. Mar 4 02:13:39.077958 kubelet[2669]: E0304 02:13:39.077897 2669 reflector.go:204] "Failed to watch" err="failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:srv-323j1.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-323j1.gb1.brightbox.com' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"hubble-server-certs\"" type="*v1.Secret" Mar 4 02:13:39.078589 kubelet[2669]: E0304 02:13:39.078001 2669 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:srv-323j1.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-323j1.gb1.brightbox.com' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-config\"" type="*v1.ConfigMap" Mar 4 02:13:39.078589 kubelet[2669]: E0304 02:13:39.078136 2669 reflector.go:204] "Failed to 
watch" err="failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:srv-323j1.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-323j1.gb1.brightbox.com' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"cilium-clustermesh\"" type="*v1.Secret" Mar 4 02:13:39.095132 kubelet[2669]: I0304 02:13:39.094612 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-bpf-maps\") pod \"cilium-wjjsv\" (UID: \"0b89d62d-ed62-44db-87a6-a787e04c7162\") " pod="kube-system/cilium-wjjsv" Mar 4 02:13:39.095132 kubelet[2669]: I0304 02:13:39.094691 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-etc-cni-netd\") pod \"cilium-wjjsv\" (UID: \"0b89d62d-ed62-44db-87a6-a787e04c7162\") " pod="kube-system/cilium-wjjsv" Mar 4 02:13:39.095132 kubelet[2669]: I0304 02:13:39.094726 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-xtables-lock\") pod \"cilium-wjjsv\" (UID: \"0b89d62d-ed62-44db-87a6-a787e04c7162\") " pod="kube-system/cilium-wjjsv" Mar 4 02:13:39.095132 kubelet[2669]: I0304 02:13:39.094799 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b89d62d-ed62-44db-87a6-a787e04c7162-clustermesh-secrets\") pod \"cilium-wjjsv\" (UID: \"0b89d62d-ed62-44db-87a6-a787e04c7162\") " pod="kube-system/cilium-wjjsv" Mar 4 02:13:39.095132 kubelet[2669]: I0304 02:13:39.094872 2669 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ffbda6ac-8aca-4187-a92f-631782795497-xtables-lock\") pod \"kube-proxy-gw9j5\" (UID: \"ffbda6ac-8aca-4187-a92f-631782795497\") " pod="kube-system/kube-proxy-gw9j5" Mar 4 02:13:39.095132 kubelet[2669]: I0304 02:13:39.094915 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-hostproc\") pod \"cilium-wjjsv\" (UID: \"0b89d62d-ed62-44db-87a6-a787e04c7162\") " pod="kube-system/cilium-wjjsv" Mar 4 02:13:39.095624 kubelet[2669]: I0304 02:13:39.094977 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-cilium-cgroup\") pod \"cilium-wjjsv\" (UID: \"0b89d62d-ed62-44db-87a6-a787e04c7162\") " pod="kube-system/cilium-wjjsv" Mar 4 02:13:39.095624 kubelet[2669]: I0304 02:13:39.095022 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-lib-modules\") pod \"cilium-wjjsv\" (UID: \"0b89d62d-ed62-44db-87a6-a787e04c7162\") " pod="kube-system/cilium-wjjsv" Mar 4 02:13:39.095624 kubelet[2669]: I0304 02:13:39.095052 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-host-proc-sys-net\") pod \"cilium-wjjsv\" (UID: \"0b89d62d-ed62-44db-87a6-a787e04c7162\") " pod="kube-system/cilium-wjjsv" Mar 4 02:13:39.095624 kubelet[2669]: I0304 02:13:39.095110 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/ffbda6ac-8aca-4187-a92f-631782795497-lib-modules\") pod \"kube-proxy-gw9j5\" (UID: \"ffbda6ac-8aca-4187-a92f-631782795497\") " pod="kube-system/kube-proxy-gw9j5" Mar 4 02:13:39.095624 kubelet[2669]: I0304 02:13:39.095159 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-cni-path\") pod \"cilium-wjjsv\" (UID: \"0b89d62d-ed62-44db-87a6-a787e04c7162\") " pod="kube-system/cilium-wjjsv" Mar 4 02:13:39.095624 kubelet[2669]: I0304 02:13:39.095190 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b89d62d-ed62-44db-87a6-a787e04c7162-cilium-config-path\") pod \"cilium-wjjsv\" (UID: \"0b89d62d-ed62-44db-87a6-a787e04c7162\") " pod="kube-system/cilium-wjjsv" Mar 4 02:13:39.095924 kubelet[2669]: I0304 02:13:39.095217 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-host-proc-sys-kernel\") pod \"cilium-wjjsv\" (UID: \"0b89d62d-ed62-44db-87a6-a787e04c7162\") " pod="kube-system/cilium-wjjsv" Mar 4 02:13:39.095924 kubelet[2669]: I0304 02:13:39.095241 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b89d62d-ed62-44db-87a6-a787e04c7162-hubble-tls\") pod \"cilium-wjjsv\" (UID: \"0b89d62d-ed62-44db-87a6-a787e04c7162\") " pod="kube-system/cilium-wjjsv" Mar 4 02:13:39.095924 kubelet[2669]: I0304 02:13:39.095267 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph2xm\" (UniqueName: \"kubernetes.io/projected/0b89d62d-ed62-44db-87a6-a787e04c7162-kube-api-access-ph2xm\") pod \"cilium-wjjsv\" (UID: 
\"0b89d62d-ed62-44db-87a6-a787e04c7162\") " pod="kube-system/cilium-wjjsv" Mar 4 02:13:39.095924 kubelet[2669]: I0304 02:13:39.095296 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ffbda6ac-8aca-4187-a92f-631782795497-kube-proxy\") pod \"kube-proxy-gw9j5\" (UID: \"ffbda6ac-8aca-4187-a92f-631782795497\") " pod="kube-system/kube-proxy-gw9j5" Mar 4 02:13:39.095924 kubelet[2669]: I0304 02:13:39.095326 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-867c9\" (UniqueName: \"kubernetes.io/projected/ffbda6ac-8aca-4187-a92f-631782795497-kube-api-access-867c9\") pod \"kube-proxy-gw9j5\" (UID: \"ffbda6ac-8aca-4187-a92f-631782795497\") " pod="kube-system/kube-proxy-gw9j5" Mar 4 02:13:39.096147 kubelet[2669]: I0304 02:13:39.095394 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-cilium-run\") pod \"cilium-wjjsv\" (UID: \"0b89d62d-ed62-44db-87a6-a787e04c7162\") " pod="kube-system/cilium-wjjsv" Mar 4 02:13:39.339923 containerd[1512]: time="2026-03-04T02:13:39.338164166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gw9j5,Uid:ffbda6ac-8aca-4187-a92f-631782795497,Namespace:kube-system,Attempt:0,}" Mar 4 02:13:39.353365 systemd[1]: Created slice kubepods-besteffort-pod9e3f2bf0_b0dd_4fa0_88aa_46b101afe4b8.slice - libcontainer container kubepods-besteffort-pod9e3f2bf0_b0dd_4fa0_88aa_46b101afe4b8.slice. 
Mar 4 02:13:39.400919 kubelet[2669]: I0304 02:13:39.400231 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e3f2bf0-b0dd-4fa0-88aa-46b101afe4b8-cilium-config-path\") pod \"cilium-operator-78cf5644cb-7m7fv\" (UID: \"9e3f2bf0-b0dd-4fa0-88aa-46b101afe4b8\") " pod="kube-system/cilium-operator-78cf5644cb-7m7fv" Mar 4 02:13:39.400919 kubelet[2669]: I0304 02:13:39.400455 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqxl4\" (UniqueName: \"kubernetes.io/projected/9e3f2bf0-b0dd-4fa0-88aa-46b101afe4b8-kube-api-access-pqxl4\") pod \"cilium-operator-78cf5644cb-7m7fv\" (UID: \"9e3f2bf0-b0dd-4fa0-88aa-46b101afe4b8\") " pod="kube-system/cilium-operator-78cf5644cb-7m7fv" Mar 4 02:13:39.418051 containerd[1512]: time="2026-03-04T02:13:39.417578671Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 02:13:39.418051 containerd[1512]: time="2026-03-04T02:13:39.417915855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 02:13:39.418051 containerd[1512]: time="2026-03-04T02:13:39.417987537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:13:39.420858 containerd[1512]: time="2026-03-04T02:13:39.418434667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:13:39.475133 systemd[1]: Started cri-containerd-7b8d6e34ee285c370d508a86cd565ada3b4d5ceb518c29d03cd69c559a8b3419.scope - libcontainer container 7b8d6e34ee285c370d508a86cd565ada3b4d5ceb518c29d03cd69c559a8b3419. 
Mar 4 02:13:39.524665 containerd[1512]: time="2026-03-04T02:13:39.524052105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gw9j5,Uid:ffbda6ac-8aca-4187-a92f-631782795497,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b8d6e34ee285c370d508a86cd565ada3b4d5ceb518c29d03cd69c559a8b3419\""
Mar 4 02:13:39.534862 containerd[1512]: time="2026-03-04T02:13:39.534791141Z" level=info msg="CreateContainer within sandbox \"7b8d6e34ee285c370d508a86cd565ada3b4d5ceb518c29d03cd69c559a8b3419\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 4 02:13:39.558076 containerd[1512]: time="2026-03-04T02:13:39.558027371Z" level=info msg="CreateContainer within sandbox \"7b8d6e34ee285c370d508a86cd565ada3b4d5ceb518c29d03cd69c559a8b3419\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2a95533415ecf564a00a22f4d5db7636f5e2439a5d3478252397aa9ca03611d3\""
Mar 4 02:13:39.561051 containerd[1512]: time="2026-03-04T02:13:39.559243219Z" level=info msg="StartContainer for \"2a95533415ecf564a00a22f4d5db7636f5e2439a5d3478252397aa9ca03611d3\""
Mar 4 02:13:39.602118 systemd[1]: Started cri-containerd-2a95533415ecf564a00a22f4d5db7636f5e2439a5d3478252397aa9ca03611d3.scope - libcontainer container 2a95533415ecf564a00a22f4d5db7636f5e2439a5d3478252397aa9ca03611d3.
Mar 4 02:13:39.659725 containerd[1512]: time="2026-03-04T02:13:39.659631271Z" level=info msg="StartContainer for \"2a95533415ecf564a00a22f4d5db7636f5e2439a5d3478252397aa9ca03611d3\" returns successfully"
Mar 4 02:13:40.199579 kubelet[2669]: E0304 02:13:40.198291 2669 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Mar 4 02:13:40.199579 kubelet[2669]: E0304 02:13:40.198463 2669 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0b89d62d-ed62-44db-87a6-a787e04c7162-cilium-config-path podName:0b89d62d-ed62-44db-87a6-a787e04c7162 nodeName:}" failed. No retries permitted until 2026-03-04 02:13:40.698380712 +0000 UTC m=+7.603172712 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/0b89d62d-ed62-44db-87a6-a787e04c7162-cilium-config-path") pod "cilium-wjjsv" (UID: "0b89d62d-ed62-44db-87a6-a787e04c7162") : failed to sync configmap cache: timed out waiting for the condition
Mar 4 02:13:40.199579 kubelet[2669]: E0304 02:13:40.198866 2669 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Mar 4 02:13:40.199579 kubelet[2669]: E0304 02:13:40.198947 2669 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b89d62d-ed62-44db-87a6-a787e04c7162-clustermesh-secrets podName:0b89d62d-ed62-44db-87a6-a787e04c7162 nodeName:}" failed. No retries permitted until 2026-03-04 02:13:40.698920275 +0000 UTC m=+7.603712280 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/0b89d62d-ed62-44db-87a6-a787e04c7162-clustermesh-secrets") pod "cilium-wjjsv" (UID: "0b89d62d-ed62-44db-87a6-a787e04c7162") : failed to sync secret cache: timed out waiting for the condition
Mar 4 02:13:40.501510 kubelet[2669]: E0304 02:13:40.501388 2669 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Mar 4 02:13:40.501759 kubelet[2669]: E0304 02:13:40.501524 2669 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e3f2bf0-b0dd-4fa0-88aa-46b101afe4b8-cilium-config-path podName:9e3f2bf0-b0dd-4fa0-88aa-46b101afe4b8 nodeName:}" failed. No retries permitted until 2026-03-04 02:13:41.001495293 +0000 UTC m=+7.906287296 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/9e3f2bf0-b0dd-4fa0-88aa-46b101afe4b8-cilium-config-path") pod "cilium-operator-78cf5644cb-7m7fv" (UID: "9e3f2bf0-b0dd-4fa0-88aa-46b101afe4b8") : failed to sync configmap cache: timed out waiting for the condition
Mar 4 02:13:40.864953 containerd[1512]: time="2026-03-04T02:13:40.864790492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wjjsv,Uid:0b89d62d-ed62-44db-87a6-a787e04c7162,Namespace:kube-system,Attempt:0,}"
Mar 4 02:13:40.904750 containerd[1512]: time="2026-03-04T02:13:40.904576252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 02:13:40.905935 containerd[1512]: time="2026-03-04T02:13:40.905454409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 02:13:40.906947 containerd[1512]: time="2026-03-04T02:13:40.905618950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 02:13:40.906947 containerd[1512]: time="2026-03-04T02:13:40.905868159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 02:13:40.946186 systemd[1]: Started cri-containerd-2f344f335e06895e5c2463c76b43b66f097ec6f7c0cb76b09ec91445fdee8a86.scope - libcontainer container 2f344f335e06895e5c2463c76b43b66f097ec6f7c0cb76b09ec91445fdee8a86.
Mar 4 02:13:40.985356 containerd[1512]: time="2026-03-04T02:13:40.985288054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wjjsv,Uid:0b89d62d-ed62-44db-87a6-a787e04c7162,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f344f335e06895e5c2463c76b43b66f097ec6f7c0cb76b09ec91445fdee8a86\""
Mar 4 02:13:40.988787 containerd[1512]: time="2026-03-04T02:13:40.988752063Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 4 02:13:41.161762 containerd[1512]: time="2026-03-04T02:13:41.161585136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-7m7fv,Uid:9e3f2bf0-b0dd-4fa0-88aa-46b101afe4b8,Namespace:kube-system,Attempt:0,}"
Mar 4 02:13:41.201532 containerd[1512]: time="2026-03-04T02:13:41.201348444Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 02:13:41.201532 containerd[1512]: time="2026-03-04T02:13:41.201483123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 02:13:41.201876 containerd[1512]: time="2026-03-04T02:13:41.201503496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 02:13:41.201876 containerd[1512]: time="2026-03-04T02:13:41.201663951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 02:13:41.234190 systemd[1]: Started cri-containerd-7a001719d33cb8268efb40ff537f458a295882e1a437038c6933e353c596b75f.scope - libcontainer container 7a001719d33cb8268efb40ff537f458a295882e1a437038c6933e353c596b75f.
Mar 4 02:13:41.301690 containerd[1512]: time="2026-03-04T02:13:41.301357335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-7m7fv,Uid:9e3f2bf0-b0dd-4fa0-88aa-46b101afe4b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a001719d33cb8268efb40ff537f458a295882e1a437038c6933e353c596b75f\""
Mar 4 02:13:46.980639 kubelet[2669]: I0304 02:13:46.979814 2669 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-gw9j5" podStartSLOduration=8.979794111 podStartE2EDuration="8.979794111s" podCreationTimestamp="2026-03-04 02:13:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 02:13:40.525433558 +0000 UTC m=+7.430225603" watchObservedRunningTime="2026-03-04 02:13:46.979794111 +0000 UTC m=+13.884586111"
Mar 4 02:13:50.877390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1031349498.mount: Deactivated successfully.
Mar 4 02:13:54.594398 containerd[1512]: time="2026-03-04T02:13:54.587452375Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Mar 4 02:13:54.594398 containerd[1512]: time="2026-03-04T02:13:54.594126453Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:13:54.606261 containerd[1512]: time="2026-03-04T02:13:54.606223585Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 13.617418832s"
Mar 4 02:13:54.606369 containerd[1512]: time="2026-03-04T02:13:54.606269673Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Mar 4 02:13:54.616790 containerd[1512]: time="2026-03-04T02:13:54.616622605Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:13:54.619248 containerd[1512]: time="2026-03-04T02:13:54.619070849Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Mar 4 02:13:54.624384 containerd[1512]: time="2026-03-04T02:13:54.624311155Z" level=info msg="CreateContainer within sandbox \"2f344f335e06895e5c2463c76b43b66f097ec6f7c0cb76b09ec91445fdee8a86\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 4 02:13:54.693969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount439847724.mount: Deactivated successfully.
Mar 4 02:13:54.715162 containerd[1512]: time="2026-03-04T02:13:54.715113179Z" level=info msg="CreateContainer within sandbox \"2f344f335e06895e5c2463c76b43b66f097ec6f7c0cb76b09ec91445fdee8a86\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"10da9befcbfd55c67c5720d2135fb565d278252da0d9bcd5e85da05d566d141f\""
Mar 4 02:13:54.718107 containerd[1512]: time="2026-03-04T02:13:54.718071140Z" level=info msg="StartContainer for \"10da9befcbfd55c67c5720d2135fb565d278252da0d9bcd5e85da05d566d141f\""
Mar 4 02:13:54.842072 systemd[1]: Started cri-containerd-10da9befcbfd55c67c5720d2135fb565d278252da0d9bcd5e85da05d566d141f.scope - libcontainer container 10da9befcbfd55c67c5720d2135fb565d278252da0d9bcd5e85da05d566d141f.
Mar 4 02:13:54.903255 containerd[1512]: time="2026-03-04T02:13:54.903111076Z" level=info msg="StartContainer for \"10da9befcbfd55c67c5720d2135fb565d278252da0d9bcd5e85da05d566d141f\" returns successfully"
Mar 4 02:13:54.916146 systemd[1]: cri-containerd-10da9befcbfd55c67c5720d2135fb565d278252da0d9bcd5e85da05d566d141f.scope: Deactivated successfully.
Mar 4 02:13:55.119510 containerd[1512]: time="2026-03-04T02:13:55.109535892Z" level=info msg="shim disconnected" id=10da9befcbfd55c67c5720d2135fb565d278252da0d9bcd5e85da05d566d141f namespace=k8s.io
Mar 4 02:13:55.119510 containerd[1512]: time="2026-03-04T02:13:55.119217633Z" level=warning msg="cleaning up after shim disconnected" id=10da9befcbfd55c67c5720d2135fb565d278252da0d9bcd5e85da05d566d141f namespace=k8s.io
Mar 4 02:13:55.119510 containerd[1512]: time="2026-03-04T02:13:55.119252269Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 02:13:55.632065 containerd[1512]: time="2026-03-04T02:13:55.631137548Z" level=info msg="CreateContainer within sandbox \"2f344f335e06895e5c2463c76b43b66f097ec6f7c0cb76b09ec91445fdee8a86\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 4 02:13:55.649625 containerd[1512]: time="2026-03-04T02:13:55.649544462Z" level=info msg="CreateContainer within sandbox \"2f344f335e06895e5c2463c76b43b66f097ec6f7c0cb76b09ec91445fdee8a86\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fbc7ac2ae96c654afe53e6a6cee0ac53ca52ea4963f93125a6371340bd05dad7\""
Mar 4 02:13:55.651299 containerd[1512]: time="2026-03-04T02:13:55.651264356Z" level=info msg="StartContainer for \"fbc7ac2ae96c654afe53e6a6cee0ac53ca52ea4963f93125a6371340bd05dad7\""
Mar 4 02:13:55.695132 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10da9befcbfd55c67c5720d2135fb565d278252da0d9bcd5e85da05d566d141f-rootfs.mount: Deactivated successfully.
Mar 4 02:13:55.709088 systemd[1]: Started cri-containerd-fbc7ac2ae96c654afe53e6a6cee0ac53ca52ea4963f93125a6371340bd05dad7.scope - libcontainer container fbc7ac2ae96c654afe53e6a6cee0ac53ca52ea4963f93125a6371340bd05dad7.
Mar 4 02:13:55.766276 containerd[1512]: time="2026-03-04T02:13:55.766075530Z" level=info msg="StartContainer for \"fbc7ac2ae96c654afe53e6a6cee0ac53ca52ea4963f93125a6371340bd05dad7\" returns successfully"
Mar 4 02:13:55.786159 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 4 02:13:55.786528 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 4 02:13:55.786723 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 4 02:13:55.794271 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 4 02:13:55.795071 systemd[1]: cri-containerd-fbc7ac2ae96c654afe53e6a6cee0ac53ca52ea4963f93125a6371340bd05dad7.scope: Deactivated successfully.
Mar 4 02:13:55.831908 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbc7ac2ae96c654afe53e6a6cee0ac53ca52ea4963f93125a6371340bd05dad7-rootfs.mount: Deactivated successfully.
Mar 4 02:13:55.835875 containerd[1512]: time="2026-03-04T02:13:55.835640551Z" level=info msg="shim disconnected" id=fbc7ac2ae96c654afe53e6a6cee0ac53ca52ea4963f93125a6371340bd05dad7 namespace=k8s.io
Mar 4 02:13:55.835875 containerd[1512]: time="2026-03-04T02:13:55.835716385Z" level=warning msg="cleaning up after shim disconnected" id=fbc7ac2ae96c654afe53e6a6cee0ac53ca52ea4963f93125a6371340bd05dad7 namespace=k8s.io
Mar 4 02:13:55.835875 containerd[1512]: time="2026-03-04T02:13:55.835732842Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 02:13:55.874230 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 4 02:13:56.357701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3478806886.mount: Deactivated successfully.
Mar 4 02:13:56.623715 containerd[1512]: time="2026-03-04T02:13:56.623353229Z" level=info msg="CreateContainer within sandbox \"2f344f335e06895e5c2463c76b43b66f097ec6f7c0cb76b09ec91445fdee8a86\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 4 02:13:56.707953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3682677301.mount: Deactivated successfully.
Mar 4 02:13:56.712156 containerd[1512]: time="2026-03-04T02:13:56.711243247Z" level=info msg="CreateContainer within sandbox \"2f344f335e06895e5c2463c76b43b66f097ec6f7c0cb76b09ec91445fdee8a86\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"554af391072f2fcc0ff193b8b6160404bf985e9c0ea8f94191c77addefd050c2\""
Mar 4 02:13:56.714768 containerd[1512]: time="2026-03-04T02:13:56.713520883Z" level=info msg="StartContainer for \"554af391072f2fcc0ff193b8b6160404bf985e9c0ea8f94191c77addefd050c2\""
Mar 4 02:13:56.792131 systemd[1]: Started cri-containerd-554af391072f2fcc0ff193b8b6160404bf985e9c0ea8f94191c77addefd050c2.scope - libcontainer container 554af391072f2fcc0ff193b8b6160404bf985e9c0ea8f94191c77addefd050c2.
Mar 4 02:13:56.868545 containerd[1512]: time="2026-03-04T02:13:56.868490412Z" level=info msg="StartContainer for \"554af391072f2fcc0ff193b8b6160404bf985e9c0ea8f94191c77addefd050c2\" returns successfully"
Mar 4 02:13:56.882291 systemd[1]: cri-containerd-554af391072f2fcc0ff193b8b6160404bf985e9c0ea8f94191c77addefd050c2.scope: Deactivated successfully.
Mar 4 02:13:56.929678 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-554af391072f2fcc0ff193b8b6160404bf985e9c0ea8f94191c77addefd050c2-rootfs.mount: Deactivated successfully.
Mar 4 02:13:57.003364 containerd[1512]: time="2026-03-04T02:13:57.002985033Z" level=info msg="shim disconnected" id=554af391072f2fcc0ff193b8b6160404bf985e9c0ea8f94191c77addefd050c2 namespace=k8s.io
Mar 4 02:13:57.003364 containerd[1512]: time="2026-03-04T02:13:57.003111228Z" level=warning msg="cleaning up after shim disconnected" id=554af391072f2fcc0ff193b8b6160404bf985e9c0ea8f94191c77addefd050c2 namespace=k8s.io
Mar 4 02:13:57.003364 containerd[1512]: time="2026-03-04T02:13:57.003131285Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 02:13:57.430702 containerd[1512]: time="2026-03-04T02:13:57.430648703Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:13:57.434024 containerd[1512]: time="2026-03-04T02:13:57.433965468Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Mar 4 02:13:57.435198 containerd[1512]: time="2026-03-04T02:13:57.435132099Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:13:57.437768 containerd[1512]: time="2026-03-04T02:13:57.437712083Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.818582725s"
Mar 4 02:13:57.438381 containerd[1512]: time="2026-03-04T02:13:57.438251806Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 4 02:13:57.446223 containerd[1512]: time="2026-03-04T02:13:57.446173483Z" level=info msg="CreateContainer within sandbox \"7a001719d33cb8268efb40ff537f458a295882e1a437038c6933e353c596b75f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 4 02:13:57.471131 containerd[1512]: time="2026-03-04T02:13:57.471060186Z" level=info msg="CreateContainer within sandbox \"7a001719d33cb8268efb40ff537f458a295882e1a437038c6933e353c596b75f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3772056441a3d5dce360adf4c8f5efa134fb95c5f114b1b744b63955dff0acf0\""
Mar 4 02:13:57.472898 containerd[1512]: time="2026-03-04T02:13:57.472354407Z" level=info msg="StartContainer for \"3772056441a3d5dce360adf4c8f5efa134fb95c5f114b1b744b63955dff0acf0\""
Mar 4 02:13:57.520083 systemd[1]: Started cri-containerd-3772056441a3d5dce360adf4c8f5efa134fb95c5f114b1b744b63955dff0acf0.scope - libcontainer container 3772056441a3d5dce360adf4c8f5efa134fb95c5f114b1b744b63955dff0acf0.
Mar 4 02:13:57.570529 containerd[1512]: time="2026-03-04T02:13:57.570186393Z" level=info msg="StartContainer for \"3772056441a3d5dce360adf4c8f5efa134fb95c5f114b1b744b63955dff0acf0\" returns successfully"
Mar 4 02:13:57.644942 containerd[1512]: time="2026-03-04T02:13:57.644619367Z" level=info msg="CreateContainer within sandbox \"2f344f335e06895e5c2463c76b43b66f097ec6f7c0cb76b09ec91445fdee8a86\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 4 02:13:57.656917 kubelet[2669]: I0304 02:13:57.655680 2669 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-operator-78cf5644cb-7m7fv" podStartSLOduration=2.5194650469999997 podStartE2EDuration="18.655616395s" podCreationTimestamp="2026-03-04 02:13:39 +0000 UTC" firstStartedPulling="2026-03-04 02:13:41.303891153 +0000 UTC m=+8.208683158" lastFinishedPulling="2026-03-04 02:13:57.440042506 +0000 UTC m=+24.344834506" observedRunningTime="2026-03-04 02:13:57.654996021 +0000 UTC m=+24.559788041" watchObservedRunningTime="2026-03-04 02:13:57.655616395 +0000 UTC m=+24.560408413"
Mar 4 02:13:57.679291 containerd[1512]: time="2026-03-04T02:13:57.679119325Z" level=info msg="CreateContainer within sandbox \"2f344f335e06895e5c2463c76b43b66f097ec6f7c0cb76b09ec91445fdee8a86\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fb20c1d06b60194f1dbbe028647c4db67e71c6de0898a0b80dbdece4f5927384\""
Mar 4 02:13:57.681104 containerd[1512]: time="2026-03-04T02:13:57.679911028Z" level=info msg="StartContainer for \"fb20c1d06b60194f1dbbe028647c4db67e71c6de0898a0b80dbdece4f5927384\""
Mar 4 02:13:57.751526 systemd[1]: Started cri-containerd-fb20c1d06b60194f1dbbe028647c4db67e71c6de0898a0b80dbdece4f5927384.scope - libcontainer container fb20c1d06b60194f1dbbe028647c4db67e71c6de0898a0b80dbdece4f5927384.
Mar 4 02:13:57.819375 systemd[1]: cri-containerd-fb20c1d06b60194f1dbbe028647c4db67e71c6de0898a0b80dbdece4f5927384.scope: Deactivated successfully.
Mar 4 02:13:57.823978 containerd[1512]: time="2026-03-04T02:13:57.823884378Z" level=info msg="StartContainer for \"fb20c1d06b60194f1dbbe028647c4db67e71c6de0898a0b80dbdece4f5927384\" returns successfully"
Mar 4 02:13:57.861109 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb20c1d06b60194f1dbbe028647c4db67e71c6de0898a0b80dbdece4f5927384-rootfs.mount: Deactivated successfully.
Mar 4 02:13:57.938223 containerd[1512]: time="2026-03-04T02:13:57.938040139Z" level=info msg="shim disconnected" id=fb20c1d06b60194f1dbbe028647c4db67e71c6de0898a0b80dbdece4f5927384 namespace=k8s.io
Mar 4 02:13:57.938223 containerd[1512]: time="2026-03-04T02:13:57.938125973Z" level=warning msg="cleaning up after shim disconnected" id=fb20c1d06b60194f1dbbe028647c4db67e71c6de0898a0b80dbdece4f5927384 namespace=k8s.io
Mar 4 02:13:57.938223 containerd[1512]: time="2026-03-04T02:13:57.938146422Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 02:13:58.652504 containerd[1512]: time="2026-03-04T02:13:58.652317158Z" level=info msg="CreateContainer within sandbox \"2f344f335e06895e5c2463c76b43b66f097ec6f7c0cb76b09ec91445fdee8a86\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 4 02:13:58.682257 containerd[1512]: time="2026-03-04T02:13:58.682196151Z" level=info msg="CreateContainer within sandbox \"2f344f335e06895e5c2463c76b43b66f097ec6f7c0cb76b09ec91445fdee8a86\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b0a7fe526fc7760ef0e9bd660053b9519d0b581d0808ad73f595e81e91d140f6\""
Mar 4 02:13:58.684908 containerd[1512]: time="2026-03-04T02:13:58.683923369Z" level=info msg="StartContainer for \"b0a7fe526fc7760ef0e9bd660053b9519d0b581d0808ad73f595e81e91d140f6\""
Mar 4 02:13:58.765009 systemd[1]: run-containerd-runc-k8s.io-b0a7fe526fc7760ef0e9bd660053b9519d0b581d0808ad73f595e81e91d140f6-runc.gDdgXH.mount: Deactivated successfully.
Mar 4 02:13:58.780055 systemd[1]: Started cri-containerd-b0a7fe526fc7760ef0e9bd660053b9519d0b581d0808ad73f595e81e91d140f6.scope - libcontainer container b0a7fe526fc7760ef0e9bd660053b9519d0b581d0808ad73f595e81e91d140f6.
Mar 4 02:13:58.881648 containerd[1512]: time="2026-03-04T02:13:58.881587877Z" level=info msg="StartContainer for \"b0a7fe526fc7760ef0e9bd660053b9519d0b581d0808ad73f595e81e91d140f6\" returns successfully"
Mar 4 02:13:59.182405 kubelet[2669]: I0304 02:13:59.182364 2669 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
Mar 4 02:13:59.251936 systemd[1]: Created slice kubepods-burstable-pode814a6ae_9042_408e_b36b_8571b8d87f98.slice - libcontainer container kubepods-burstable-pode814a6ae_9042_408e_b36b_8571b8d87f98.slice.
Mar 4 02:13:59.288697 systemd[1]: Created slice kubepods-burstable-pode92141af_0175_44e0_b127_86d9845e3410.slice - libcontainer container kubepods-burstable-pode92141af_0175_44e0_b127_86d9845e3410.slice.
Mar 4 02:13:59.346310 kubelet[2669]: I0304 02:13:59.346245 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pk8f\" (UniqueName: \"kubernetes.io/projected/e814a6ae-9042-408e-b36b-8571b8d87f98-kube-api-access-9pk8f\") pod \"coredns-7d764666f9-p4xdv\" (UID: \"e814a6ae-9042-408e-b36b-8571b8d87f98\") " pod="kube-system/coredns-7d764666f9-p4xdv"
Mar 4 02:13:59.346869 kubelet[2669]: I0304 02:13:59.346327 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e814a6ae-9042-408e-b36b-8571b8d87f98-config-volume\") pod \"coredns-7d764666f9-p4xdv\" (UID: \"e814a6ae-9042-408e-b36b-8571b8d87f98\") " pod="kube-system/coredns-7d764666f9-p4xdv"
Mar 4 02:13:59.346869 kubelet[2669]: I0304 02:13:59.346401 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e92141af-0175-44e0-b127-86d9845e3410-config-volume\") pod \"coredns-7d764666f9-vxk9h\" (UID: \"e92141af-0175-44e0-b127-86d9845e3410\") " pod="kube-system/coredns-7d764666f9-vxk9h"
Mar 4 02:13:59.346869 kubelet[2669]: I0304 02:13:59.346434 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffq9g\" (UniqueName: \"kubernetes.io/projected/e92141af-0175-44e0-b127-86d9845e3410-kube-api-access-ffq9g\") pod \"coredns-7d764666f9-vxk9h\" (UID: \"e92141af-0175-44e0-b127-86d9845e3410\") " pod="kube-system/coredns-7d764666f9-vxk9h"
Mar 4 02:13:59.583714 containerd[1512]: time="2026-03-04T02:13:59.582797202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-p4xdv,Uid:e814a6ae-9042-408e-b36b-8571b8d87f98,Namespace:kube-system,Attempt:0,}"
Mar 4 02:13:59.601963 containerd[1512]: time="2026-03-04T02:13:59.601453107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-vxk9h,Uid:e92141af-0175-44e0-b127-86d9845e3410,Namespace:kube-system,Attempt:0,}"
Mar 4 02:13:59.714290 systemd[1]: run-containerd-runc-k8s.io-b0a7fe526fc7760ef0e9bd660053b9519d0b581d0808ad73f595e81e91d140f6-runc.OcJPg1.mount: Deactivated successfully.
Mar 4 02:14:01.894470 systemd-networkd[1438]: cilium_host: Link UP
Mar 4 02:14:01.894799 systemd-networkd[1438]: cilium_net: Link UP
Mar 4 02:14:01.897817 systemd-networkd[1438]: cilium_net: Gained carrier
Mar 4 02:14:01.899198 systemd-networkd[1438]: cilium_host: Gained carrier
Mar 4 02:14:01.899599 systemd-networkd[1438]: cilium_net: Gained IPv6LL
Mar 4 02:14:01.899952 systemd-networkd[1438]: cilium_host: Gained IPv6LL
Mar 4 02:14:02.101771 systemd-networkd[1438]: cilium_vxlan: Link UP
Mar 4 02:14:02.101792 systemd-networkd[1438]: cilium_vxlan: Gained carrier
Mar 4 02:14:02.721908 kernel: NET: Registered PF_ALG protocol family
Mar 4 02:14:03.849171 systemd-networkd[1438]: cilium_vxlan: Gained IPv6LL
Mar 4 02:14:03.871225 systemd-networkd[1438]: lxc_health: Link UP
Mar 4 02:14:03.884024 systemd-networkd[1438]: lxc_health: Gained carrier
Mar 4 02:14:04.258163 systemd-networkd[1438]: lxc3750268a4ae6: Link UP
Mar 4 02:14:04.269026 systemd-networkd[1438]: lxc20c6b2a0ea0f: Link UP
Mar 4 02:14:04.294889 kernel: eth0: renamed from tmp78885
Mar 4 02:14:04.313012 systemd-networkd[1438]: lxc3750268a4ae6: Gained carrier
Mar 4 02:14:04.318876 kernel: eth0: renamed from tmpff44f
Mar 4 02:14:04.326929 systemd-networkd[1438]: lxc20c6b2a0ea0f: Gained carrier
Mar 4 02:14:04.900628 kubelet[2669]: I0304 02:14:04.900060 2669 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-wjjsv" podStartSLOduration=9.242435381 podStartE2EDuration="26.900016769s" podCreationTimestamp="2026-03-04 02:13:38 +0000 UTC" firstStartedPulling="2026-03-04 02:13:40.988297405 +0000 UTC m=+7.893089416" lastFinishedPulling="2026-03-04 02:13:58.645878793 +0000 UTC m=+25.550670804" observedRunningTime="2026-03-04 02:13:59.743425941 +0000 UTC m=+26.648217998" watchObservedRunningTime="2026-03-04 02:14:04.900016769 +0000 UTC m=+31.804808783"
Mar 4 02:14:05.191149 systemd-networkd[1438]: lxc_health: Gained IPv6LL
Mar 4 02:14:05.511183 systemd-networkd[1438]: lxc20c6b2a0ea0f: Gained IPv6LL
Mar 4 02:14:06.215134 systemd-networkd[1438]: lxc3750268a4ae6: Gained IPv6LL
Mar 4 02:14:10.350947 containerd[1512]: time="2026-03-04T02:14:10.349967430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 02:14:10.350947 containerd[1512]: time="2026-03-04T02:14:10.350151210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 02:14:10.350947 containerd[1512]: time="2026-03-04T02:14:10.350186634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 02:14:10.354921 containerd[1512]: time="2026-03-04T02:14:10.351988896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 02:14:10.421259 systemd[1]: run-containerd-runc-k8s.io-ff44fc4e7c28e150fcff727296e6a778f705cd9c84e20b8a3f49ea3ddae766fb-runc.ynVSjn.mount: Deactivated successfully.
Mar 4 02:14:10.434118 systemd[1]: Started cri-containerd-ff44fc4e7c28e150fcff727296e6a778f705cd9c84e20b8a3f49ea3ddae766fb.scope - libcontainer container ff44fc4e7c28e150fcff727296e6a778f705cd9c84e20b8a3f49ea3ddae766fb.
Mar 4 02:14:10.452507 containerd[1512]: time="2026-03-04T02:14:10.452309985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 02:14:10.452507 containerd[1512]: time="2026-03-04T02:14:10.452408369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 02:14:10.452507 containerd[1512]: time="2026-03-04T02:14:10.452429559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 02:14:10.453498 containerd[1512]: time="2026-03-04T02:14:10.452567662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 02:14:10.517120 systemd[1]: Started cri-containerd-788859fb10e590a4e2a1c43dbc10b6c2fe1c6064f6e501914d52ca569b539c7d.scope - libcontainer container 788859fb10e590a4e2a1c43dbc10b6c2fe1c6064f6e501914d52ca569b539c7d.
Mar 4 02:14:10.571610 containerd[1512]: time="2026-03-04T02:14:10.571325081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-p4xdv,Uid:e814a6ae-9042-408e-b36b-8571b8d87f98,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff44fc4e7c28e150fcff727296e6a778f705cd9c84e20b8a3f49ea3ddae766fb\""
Mar 4 02:14:10.588727 containerd[1512]: time="2026-03-04T02:14:10.588298885Z" level=info msg="CreateContainer within sandbox \"ff44fc4e7c28e150fcff727296e6a778f705cd9c84e20b8a3f49ea3ddae766fb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 4 02:14:10.654966 containerd[1512]: time="2026-03-04T02:14:10.654216045Z" level=info msg="CreateContainer within sandbox \"ff44fc4e7c28e150fcff727296e6a778f705cd9c84e20b8a3f49ea3ddae766fb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1657193c666d8cd67c38206c990d87ff34e15e551d19c11f53d7c8b4533c7c13\""
Mar 4 02:14:10.659196 containerd[1512]: time="2026-03-04T02:14:10.659121500Z" level=info msg="StartContainer for \"1657193c666d8cd67c38206c990d87ff34e15e551d19c11f53d7c8b4533c7c13\""
Mar 4 02:14:10.683013 containerd[1512]: time="2026-03-04T02:14:10.682219678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-vxk9h,Uid:e92141af-0175-44e0-b127-86d9845e3410,Namespace:kube-system,Attempt:0,} returns sandbox id \"788859fb10e590a4e2a1c43dbc10b6c2fe1c6064f6e501914d52ca569b539c7d\""
Mar 4 02:14:10.708803 containerd[1512]: time="2026-03-04T02:14:10.708274228Z" level=info msg="CreateContainer within sandbox \"788859fb10e590a4e2a1c43dbc10b6c2fe1c6064f6e501914d52ca569b539c7d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 4 02:14:10.780347 containerd[1512]: time="2026-03-04T02:14:10.779667474Z" level=info msg="CreateContainer within sandbox \"788859fb10e590a4e2a1c43dbc10b6c2fe1c6064f6e501914d52ca569b539c7d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4a4a2f1cfd4e2b757198ada41669ab998b90957c86bc1c8512684accf8c2a6f3\""
Mar 4 02:14:10.780087 systemd[1]: Started cri-containerd-1657193c666d8cd67c38206c990d87ff34e15e551d19c11f53d7c8b4533c7c13.scope - libcontainer container 1657193c666d8cd67c38206c990d87ff34e15e551d19c11f53d7c8b4533c7c13.
Mar 4 02:14:10.782989 containerd[1512]: time="2026-03-04T02:14:10.781353976Z" level=info msg="StartContainer for \"4a4a2f1cfd4e2b757198ada41669ab998b90957c86bc1c8512684accf8c2a6f3\""
Mar 4 02:14:10.833102 systemd[1]: Started cri-containerd-4a4a2f1cfd4e2b757198ada41669ab998b90957c86bc1c8512684accf8c2a6f3.scope - libcontainer container 4a4a2f1cfd4e2b757198ada41669ab998b90957c86bc1c8512684accf8c2a6f3.
Mar 4 02:14:10.854633 containerd[1512]: time="2026-03-04T02:14:10.854351959Z" level=info msg="StartContainer for \"1657193c666d8cd67c38206c990d87ff34e15e551d19c11f53d7c8b4533c7c13\" returns successfully"
Mar 4 02:14:10.887386 containerd[1512]: time="2026-03-04T02:14:10.887306396Z" level=info msg="StartContainer for \"4a4a2f1cfd4e2b757198ada41669ab998b90957c86bc1c8512684accf8c2a6f3\" returns successfully"
Mar 4 02:14:11.784738 kubelet[2669]: I0304 02:14:11.784638 2669 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-vxk9h" podStartSLOduration=32.784619686 podStartE2EDuration="32.784619686s" podCreationTimestamp="2026-03-04 02:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 02:14:11.7824946 +0000 UTC m=+38.687286648" watchObservedRunningTime="2026-03-04 02:14:11.784619686 +0000 UTC m=+38.689411698"
Mar 4 02:14:14.361398 systemd[1]: Started sshd@7-10.230.63.210:22-181.115.147.5:41412.service - OpenSSH per-connection server daemon (181.115.147.5:41412).
Mar 4 02:14:15.568364 sshd[4054]: Received disconnect from 181.115.147.5 port 41412:11: Bye Bye [preauth]
Mar 4 02:14:15.568364 sshd[4054]: Disconnected from authenticating user root 181.115.147.5 port 41412 [preauth]
Mar 4 02:14:15.571740 systemd[1]: sshd@7-10.230.63.210:22-181.115.147.5:41412.service: Deactivated successfully.
Mar 4 02:14:28.030268 systemd[1]: Started sshd@8-10.230.63.210:22-45.78.206.111:35728.service - OpenSSH per-connection server daemon (45.78.206.111:35728).
Mar 4 02:14:32.493930 sshd[4060]: Received disconnect from 45.78.206.111 port 35728:11: Bye Bye [preauth]
Mar 4 02:14:32.493930 sshd[4060]: Disconnected from authenticating user root 45.78.206.111 port 35728 [preauth]
Mar 4 02:14:32.496743 systemd[1]: sshd@8-10.230.63.210:22-45.78.206.111:35728.service: Deactivated successfully.
Mar 4 02:14:39.901205 systemd[1]: Started sshd@9-10.230.63.210:22-20.161.92.111:37734.service - OpenSSH per-connection server daemon (20.161.92.111:37734).
Mar 4 02:14:40.545189 sshd[4067]: Accepted publickey for core from 20.161.92.111 port 37734 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:14:40.548325 sshd[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:14:40.557892 systemd-logind[1490]: New session 10 of user core.
Mar 4 02:14:40.569093 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 4 02:14:41.571518 sshd[4067]: pam_unix(sshd:session): session closed for user core
Mar 4 02:14:41.579644 systemd[1]: sshd@9-10.230.63.210:22-20.161.92.111:37734.service: Deactivated successfully.
Mar 4 02:14:41.583295 systemd[1]: session-10.scope: Deactivated successfully.
Mar 4 02:14:41.585225 systemd-logind[1490]: Session 10 logged out. Waiting for processes to exit.
Mar 4 02:14:41.589368 systemd-logind[1490]: Removed session 10.
Mar 4 02:14:46.685211 systemd[1]: Started sshd@10-10.230.63.210:22-20.161.92.111:47014.service - OpenSSH per-connection server daemon (20.161.92.111:47014).
Mar 4 02:14:47.318641 sshd[4083]: Accepted publickey for core from 20.161.92.111 port 47014 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:14:47.321950 sshd[4083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:14:47.331017 systemd-logind[1490]: New session 11 of user core.
Mar 4 02:14:47.343145 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 4 02:14:47.833593 sshd[4083]: pam_unix(sshd:session): session closed for user core
Mar 4 02:14:47.840599 systemd[1]: sshd@10-10.230.63.210:22-20.161.92.111:47014.service: Deactivated successfully.
Mar 4 02:14:47.844767 systemd[1]: session-11.scope: Deactivated successfully.
Mar 4 02:14:47.846184 systemd-logind[1490]: Session 11 logged out. Waiting for processes to exit.
Mar 4 02:14:47.848100 systemd-logind[1490]: Removed session 11.
Mar 4 02:14:52.938231 systemd[1]: Started sshd@11-10.230.63.210:22-20.161.92.111:42690.service - OpenSSH per-connection server daemon (20.161.92.111:42690).
Mar 4 02:14:53.514887 sshd[4097]: Accepted publickey for core from 20.161.92.111 port 42690 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:14:53.517695 sshd[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:14:53.526361 systemd-logind[1490]: New session 12 of user core.
Mar 4 02:14:53.540116 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 4 02:14:54.020759 sshd[4097]: pam_unix(sshd:session): session closed for user core
Mar 4 02:14:54.026342 systemd[1]: sshd@11-10.230.63.210:22-20.161.92.111:42690.service: Deactivated successfully.
Mar 4 02:14:54.029133 systemd[1]: session-12.scope: Deactivated successfully.
Mar 4 02:14:54.031755 systemd-logind[1490]: Session 12 logged out. Waiting for processes to exit.
Mar 4 02:14:54.033805 systemd-logind[1490]: Removed session 12.
Mar 4 02:14:59.123758 systemd[1]: Started sshd@12-10.230.63.210:22-20.161.92.111:42696.service - OpenSSH per-connection server daemon (20.161.92.111:42696).
Mar 4 02:14:59.711791 sshd[4111]: Accepted publickey for core from 20.161.92.111 port 42696 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:14:59.714450 sshd[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:14:59.723301 systemd-logind[1490]: New session 13 of user core.
Mar 4 02:14:59.734060 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 4 02:15:00.248425 sshd[4111]: pam_unix(sshd:session): session closed for user core
Mar 4 02:15:00.256585 systemd[1]: sshd@12-10.230.63.210:22-20.161.92.111:42696.service: Deactivated successfully.
Mar 4 02:15:00.259260 systemd[1]: session-13.scope: Deactivated successfully.
Mar 4 02:15:00.260343 systemd-logind[1490]: Session 13 logged out. Waiting for processes to exit.
Mar 4 02:15:00.262175 systemd-logind[1490]: Removed session 13.
Mar 4 02:15:00.363247 systemd[1]: Started sshd@13-10.230.63.210:22-20.161.92.111:59676.service - OpenSSH per-connection server daemon (20.161.92.111:59676).
Mar 4 02:15:00.955415 sshd[4124]: Accepted publickey for core from 20.161.92.111 port 59676 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:15:00.957976 sshd[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:15:00.966299 systemd-logind[1490]: New session 14 of user core.
Mar 4 02:15:00.973161 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 4 02:15:01.588091 sshd[4124]: pam_unix(sshd:session): session closed for user core
Mar 4 02:15:01.598276 systemd-logind[1490]: Session 14 logged out. Waiting for processes to exit.
Mar 4 02:15:01.599500 systemd[1]: sshd@13-10.230.63.210:22-20.161.92.111:59676.service: Deactivated successfully.
Mar 4 02:15:01.604571 systemd[1]: session-14.scope: Deactivated successfully.
Mar 4 02:15:01.607351 systemd-logind[1490]: Removed session 14.
Mar 4 02:15:01.685604 systemd[1]: Started sshd@14-10.230.63.210:22-20.161.92.111:59682.service - OpenSSH per-connection server daemon (20.161.92.111:59682).
Mar 4 02:15:02.258267 sshd[4135]: Accepted publickey for core from 20.161.92.111 port 59682 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:15:02.260529 sshd[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:15:02.268663 systemd-logind[1490]: New session 15 of user core.
Mar 4 02:15:02.277165 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 4 02:15:02.769387 sshd[4135]: pam_unix(sshd:session): session closed for user core
Mar 4 02:15:02.774709 systemd-logind[1490]: Session 15 logged out. Waiting for processes to exit.
Mar 4 02:15:02.775671 systemd[1]: sshd@14-10.230.63.210:22-20.161.92.111:59682.service: Deactivated successfully.
Mar 4 02:15:02.779110 systemd[1]: session-15.scope: Deactivated successfully.
Mar 4 02:15:02.781008 systemd-logind[1490]: Removed session 15.
Mar 4 02:15:07.891283 systemd[1]: Started sshd@15-10.230.63.210:22-20.161.92.111:59696.service - OpenSSH per-connection server daemon (20.161.92.111:59696).
Mar 4 02:15:08.567415 sshd[4149]: Accepted publickey for core from 20.161.92.111 port 59696 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:15:08.569772 sshd[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:15:08.577449 systemd-logind[1490]: New session 16 of user core.
Mar 4 02:15:08.585048 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 4 02:15:09.083272 sshd[4149]: pam_unix(sshd:session): session closed for user core
Mar 4 02:15:09.087814 systemd[1]: sshd@15-10.230.63.210:22-20.161.92.111:59696.service: Deactivated successfully.
Mar 4 02:15:09.090608 systemd[1]: session-16.scope: Deactivated successfully.
Mar 4 02:15:09.093390 systemd-logind[1490]: Session 16 logged out. Waiting for processes to exit.
Mar 4 02:15:09.095354 systemd-logind[1490]: Removed session 16.
Mar 4 02:15:14.190179 systemd[1]: Started sshd@16-10.230.63.210:22-20.161.92.111:40642.service - OpenSSH per-connection server daemon (20.161.92.111:40642).
Mar 4 02:15:14.787329 sshd[4164]: Accepted publickey for core from 20.161.92.111 port 40642 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:15:14.789809 sshd[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:15:14.797370 systemd-logind[1490]: New session 17 of user core.
Mar 4 02:15:14.808167 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 4 02:15:15.300505 sshd[4164]: pam_unix(sshd:session): session closed for user core
Mar 4 02:15:15.305918 systemd-logind[1490]: Session 17 logged out. Waiting for processes to exit.
Mar 4 02:15:15.306134 systemd[1]: sshd@16-10.230.63.210:22-20.161.92.111:40642.service: Deactivated successfully.
Mar 4 02:15:15.308967 systemd[1]: session-17.scope: Deactivated successfully.
Mar 4 02:15:15.311756 systemd-logind[1490]: Removed session 17.
Mar 4 02:15:15.409816 systemd[1]: Started sshd@17-10.230.63.210:22-20.161.92.111:40648.service - OpenSSH per-connection server daemon (20.161.92.111:40648).
Mar 4 02:15:15.994541 sshd[4177]: Accepted publickey for core from 20.161.92.111 port 40648 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:15:15.995552 sshd[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:15:16.003681 systemd-logind[1490]: New session 18 of user core.
Mar 4 02:15:16.015108 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 4 02:15:16.890506 sshd[4177]: pam_unix(sshd:session): session closed for user core
Mar 4 02:15:16.898428 systemd[1]: sshd@17-10.230.63.210:22-20.161.92.111:40648.service: Deactivated successfully.
Mar 4 02:15:16.901640 systemd[1]: session-18.scope: Deactivated successfully.
Mar 4 02:15:16.904330 systemd-logind[1490]: Session 18 logged out. Waiting for processes to exit.
Mar 4 02:15:16.906920 systemd-logind[1490]: Removed session 18.
Mar 4 02:15:16.997212 systemd[1]: Started sshd@18-10.230.63.210:22-20.161.92.111:40650.service - OpenSSH per-connection server daemon (20.161.92.111:40650).
Mar 4 02:15:17.608759 sshd[4188]: Accepted publickey for core from 20.161.92.111 port 40650 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:15:17.611260 sshd[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:15:17.619716 systemd-logind[1490]: New session 19 of user core.
Mar 4 02:15:17.627063 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 4 02:15:18.879208 sshd[4188]: pam_unix(sshd:session): session closed for user core
Mar 4 02:15:18.885241 systemd[1]: sshd@18-10.230.63.210:22-20.161.92.111:40650.service: Deactivated successfully.
Mar 4 02:15:18.888592 systemd[1]: session-19.scope: Deactivated successfully.
Mar 4 02:15:18.889798 systemd-logind[1490]: Session 19 logged out. Waiting for processes to exit.
Mar 4 02:15:18.891423 systemd-logind[1490]: Removed session 19.
Mar 4 02:15:18.988244 systemd[1]: Started sshd@19-10.230.63.210:22-20.161.92.111:40666.service - OpenSSH per-connection server daemon (20.161.92.111:40666).
Mar 4 02:15:19.562889 sshd[4204]: Accepted publickey for core from 20.161.92.111 port 40666 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:15:19.565297 sshd[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:15:19.573011 systemd-logind[1490]: New session 20 of user core.
Mar 4 02:15:19.579043 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 4 02:15:20.283295 sshd[4204]: pam_unix(sshd:session): session closed for user core
Mar 4 02:15:20.291242 systemd[1]: sshd@19-10.230.63.210:22-20.161.92.111:40666.service: Deactivated successfully.
Mar 4 02:15:20.294627 systemd[1]: session-20.scope: Deactivated successfully.
Mar 4 02:15:20.296426 systemd-logind[1490]: Session 20 logged out. Waiting for processes to exit.
Mar 4 02:15:20.298514 systemd-logind[1490]: Removed session 20.
Mar 4 02:15:20.401775 systemd[1]: Started sshd@20-10.230.63.210:22-20.161.92.111:49326.service - OpenSSH per-connection server daemon (20.161.92.111:49326).
Mar 4 02:15:20.989876 sshd[4217]: Accepted publickey for core from 20.161.92.111 port 49326 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:15:20.992659 sshd[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:15:21.011913 systemd-logind[1490]: New session 21 of user core.
Mar 4 02:15:21.022221 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 4 02:15:21.248346 systemd[1]: Started sshd@21-10.230.63.210:22-101.47.140.127:51724.service - OpenSSH per-connection server daemon (101.47.140.127:51724).
Mar 4 02:15:21.497614 sshd[4217]: pam_unix(sshd:session): session closed for user core
Mar 4 02:15:21.504689 systemd[1]: sshd@20-10.230.63.210:22-20.161.92.111:49326.service: Deactivated successfully.
Mar 4 02:15:21.507699 systemd[1]: session-21.scope: Deactivated successfully.
Mar 4 02:15:21.509327 systemd-logind[1490]: Session 21 logged out. Waiting for processes to exit.
Mar 4 02:15:21.510806 systemd-logind[1490]: Removed session 21.
Mar 4 02:15:23.536935 sshd[4221]: Received disconnect from 101.47.140.127 port 51724:11: Bye Bye [preauth]
Mar 4 02:15:23.536935 sshd[4221]: Disconnected from authenticating user root 101.47.140.127 port 51724 [preauth]
Mar 4 02:15:23.540919 systemd[1]: sshd@21-10.230.63.210:22-101.47.140.127:51724.service: Deactivated successfully.
Mar 4 02:15:26.602267 systemd[1]: Started sshd@22-10.230.63.210:22-20.161.92.111:49340.service - OpenSSH per-connection server daemon (20.161.92.111:49340).
Mar 4 02:15:27.167492 sshd[4237]: Accepted publickey for core from 20.161.92.111 port 49340 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:15:27.168574 sshd[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:15:27.176361 systemd-logind[1490]: New session 22 of user core.
Mar 4 02:15:27.184344 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 4 02:15:27.658766 sshd[4237]: pam_unix(sshd:session): session closed for user core
Mar 4 02:15:27.665476 systemd[1]: sshd@22-10.230.63.210:22-20.161.92.111:49340.service: Deactivated successfully.
Mar 4 02:15:27.668385 systemd[1]: session-22.scope: Deactivated successfully.
Mar 4 02:15:27.670000 systemd-logind[1490]: Session 22 logged out. Waiting for processes to exit.
Mar 4 02:15:27.671994 systemd-logind[1490]: Removed session 22.
Mar 4 02:15:32.771284 systemd[1]: Started sshd@23-10.230.63.210:22-20.161.92.111:47110.service - OpenSSH per-connection server daemon (20.161.92.111:47110).
Mar 4 02:15:33.341608 sshd[4250]: Accepted publickey for core from 20.161.92.111 port 47110 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:15:33.344103 sshd[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:15:33.353679 systemd-logind[1490]: New session 23 of user core.
Mar 4 02:15:33.362088 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 4 02:15:33.837138 sshd[4250]: pam_unix(sshd:session): session closed for user core
Mar 4 02:15:33.843367 systemd[1]: sshd@23-10.230.63.210:22-20.161.92.111:47110.service: Deactivated successfully.
Mar 4 02:15:33.847093 systemd[1]: session-23.scope: Deactivated successfully.
Mar 4 02:15:33.849026 systemd-logind[1490]: Session 23 logged out. Waiting for processes to exit.
Mar 4 02:15:33.850854 systemd-logind[1490]: Removed session 23.
Mar 4 02:15:33.947174 systemd[1]: Started sshd@24-10.230.63.210:22-20.161.92.111:47124.service - OpenSSH per-connection server daemon (20.161.92.111:47124).
Mar 4 02:15:34.548688 sshd[4264]: Accepted publickey for core from 20.161.92.111 port 47124 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:15:34.551157 sshd[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:15:34.559282 systemd-logind[1490]: New session 24 of user core.
Mar 4 02:15:34.565082 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 4 02:15:37.049496 kubelet[2669]: I0304 02:15:37.049290 2669 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-p4xdv" podStartSLOduration=118.049253755 podStartE2EDuration="1m58.049253755s" podCreationTimestamp="2026-03-04 02:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 02:14:11.83431087 +0000 UTC m=+38.739102884" watchObservedRunningTime="2026-03-04 02:15:37.049253755 +0000 UTC m=+123.954045776"
Mar 4 02:15:37.106563 containerd[1512]: time="2026-03-04T02:15:37.106293307Z" level=info msg="StopContainer for \"3772056441a3d5dce360adf4c8f5efa134fb95c5f114b1b744b63955dff0acf0\" with timeout 30 (s)"
Mar 4 02:15:37.110375 containerd[1512]: time="2026-03-04T02:15:37.107143411Z" level=info msg="Stop container \"3772056441a3d5dce360adf4c8f5efa134fb95c5f114b1b744b63955dff0acf0\" with signal terminated"
Mar 4 02:15:37.164469 systemd[1]: cri-containerd-3772056441a3d5dce360adf4c8f5efa134fb95c5f114b1b744b63955dff0acf0.scope: Deactivated successfully.
Mar 4 02:15:37.200298 containerd[1512]: time="2026-03-04T02:15:37.200165759Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 4 02:15:37.219526 containerd[1512]: time="2026-03-04T02:15:37.218736020Z" level=info msg="StopContainer for \"b0a7fe526fc7760ef0e9bd660053b9519d0b581d0808ad73f595e81e91d140f6\" with timeout 2 (s)"
Mar 4 02:15:37.220360 containerd[1512]: time="2026-03-04T02:15:37.220155845Z" level=info msg="Stop container \"b0a7fe526fc7760ef0e9bd660053b9519d0b581d0808ad73f595e81e91d140f6\" with signal terminated"
Mar 4 02:15:37.230229 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3772056441a3d5dce360adf4c8f5efa134fb95c5f114b1b744b63955dff0acf0-rootfs.mount: Deactivated successfully.
Mar 4 02:15:37.237878 containerd[1512]: time="2026-03-04T02:15:37.237367374Z" level=info msg="shim disconnected" id=3772056441a3d5dce360adf4c8f5efa134fb95c5f114b1b744b63955dff0acf0 namespace=k8s.io
Mar 4 02:15:37.238061 containerd[1512]: time="2026-03-04T02:15:37.237892358Z" level=warning msg="cleaning up after shim disconnected" id=3772056441a3d5dce360adf4c8f5efa134fb95c5f114b1b744b63955dff0acf0 namespace=k8s.io
Mar 4 02:15:37.238061 containerd[1512]: time="2026-03-04T02:15:37.238031248Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 02:15:37.241540 systemd-networkd[1438]: lxc_health: Link DOWN
Mar 4 02:15:37.241551 systemd-networkd[1438]: lxc_health: Lost carrier
Mar 4 02:15:37.270404 systemd[1]: cri-containerd-b0a7fe526fc7760ef0e9bd660053b9519d0b581d0808ad73f595e81e91d140f6.scope: Deactivated successfully.
Mar 4 02:15:37.271173 systemd[1]: cri-containerd-b0a7fe526fc7760ef0e9bd660053b9519d0b581d0808ad73f595e81e91d140f6.scope: Consumed 10.608s CPU time.
Mar 4 02:15:37.281592 containerd[1512]: time="2026-03-04T02:15:37.281534851Z" level=info msg="StopContainer for \"3772056441a3d5dce360adf4c8f5efa134fb95c5f114b1b744b63955dff0acf0\" returns successfully"
Mar 4 02:15:37.283009 containerd[1512]: time="2026-03-04T02:15:37.282787072Z" level=info msg="StopPodSandbox for \"7a001719d33cb8268efb40ff537f458a295882e1a437038c6933e353c596b75f\""
Mar 4 02:15:37.283009 containerd[1512]: time="2026-03-04T02:15:37.282853539Z" level=info msg="Container to stop \"3772056441a3d5dce360adf4c8f5efa134fb95c5f114b1b744b63955dff0acf0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 4 02:15:37.286798 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a001719d33cb8268efb40ff537f458a295882e1a437038c6933e353c596b75f-shm.mount: Deactivated successfully.
Mar 4 02:15:37.301953 systemd[1]: cri-containerd-7a001719d33cb8268efb40ff537f458a295882e1a437038c6933e353c596b75f.scope: Deactivated successfully.
Mar 4 02:15:37.322044 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0a7fe526fc7760ef0e9bd660053b9519d0b581d0808ad73f595e81e91d140f6-rootfs.mount: Deactivated successfully.
Mar 4 02:15:37.334863 containerd[1512]: time="2026-03-04T02:15:37.333655630Z" level=info msg="shim disconnected" id=b0a7fe526fc7760ef0e9bd660053b9519d0b581d0808ad73f595e81e91d140f6 namespace=k8s.io
Mar 4 02:15:37.334863 containerd[1512]: time="2026-03-04T02:15:37.333750921Z" level=warning msg="cleaning up after shim disconnected" id=b0a7fe526fc7760ef0e9bd660053b9519d0b581d0808ad73f595e81e91d140f6 namespace=k8s.io
Mar 4 02:15:37.334863 containerd[1512]: time="2026-03-04T02:15:37.333771425Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 02:15:37.347509 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a001719d33cb8268efb40ff537f458a295882e1a437038c6933e353c596b75f-rootfs.mount: Deactivated successfully.
Mar 4 02:15:37.353309 containerd[1512]: time="2026-03-04T02:15:37.353222218Z" level=info msg="shim disconnected" id=7a001719d33cb8268efb40ff537f458a295882e1a437038c6933e353c596b75f namespace=k8s.io
Mar 4 02:15:37.353611 containerd[1512]: time="2026-03-04T02:15:37.353582431Z" level=warning msg="cleaning up after shim disconnected" id=7a001719d33cb8268efb40ff537f458a295882e1a437038c6933e353c596b75f namespace=k8s.io
Mar 4 02:15:37.353809 containerd[1512]: time="2026-03-04T02:15:37.353772159Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 02:15:37.369189 containerd[1512]: time="2026-03-04T02:15:37.369115553Z" level=info msg="StopContainer for \"b0a7fe526fc7760ef0e9bd660053b9519d0b581d0808ad73f595e81e91d140f6\" returns successfully"
Mar 4 02:15:37.370036 containerd[1512]: time="2026-03-04T02:15:37.370003638Z" level=info msg="StopPodSandbox for \"2f344f335e06895e5c2463c76b43b66f097ec6f7c0cb76b09ec91445fdee8a86\""
Mar 4 02:15:37.370235 containerd[1512]: time="2026-03-04T02:15:37.370204026Z" level=info msg="Container to stop \"fbc7ac2ae96c654afe53e6a6cee0ac53ca52ea4963f93125a6371340bd05dad7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 4 02:15:37.370380 containerd[1512]: time="2026-03-04T02:15:37.370351096Z" level=info msg="Container to stop \"554af391072f2fcc0ff193b8b6160404bf985e9c0ea8f94191c77addefd050c2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 4 02:15:37.370485 containerd[1512]: time="2026-03-04T02:15:37.370459782Z" level=info msg="Container to stop \"fb20c1d06b60194f1dbbe028647c4db67e71c6de0898a0b80dbdece4f5927384\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 4 02:15:37.370587 containerd[1512]: time="2026-03-04T02:15:37.370562588Z" level=info msg="Container to stop \"b0a7fe526fc7760ef0e9bd660053b9519d0b581d0808ad73f595e81e91d140f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 4 02:15:37.370702 containerd[1512]: time="2026-03-04T02:15:37.370663216Z" level=info msg="Container to stop \"10da9befcbfd55c67c5720d2135fb565d278252da0d9bcd5e85da05d566d141f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 4 02:15:37.384278 systemd[1]: cri-containerd-2f344f335e06895e5c2463c76b43b66f097ec6f7c0cb76b09ec91445fdee8a86.scope: Deactivated successfully.
Mar 4 02:15:37.396908 containerd[1512]: time="2026-03-04T02:15:37.395916813Z" level=info msg="TearDown network for sandbox \"7a001719d33cb8268efb40ff537f458a295882e1a437038c6933e353c596b75f\" successfully"
Mar 4 02:15:37.396908 containerd[1512]: time="2026-03-04T02:15:37.395980471Z" level=info msg="StopPodSandbox for \"7a001719d33cb8268efb40ff537f458a295882e1a437038c6933e353c596b75f\" returns successfully"
Mar 4 02:15:37.437612 containerd[1512]: time="2026-03-04T02:15:37.437284832Z" level=info msg="shim disconnected" id=2f344f335e06895e5c2463c76b43b66f097ec6f7c0cb76b09ec91445fdee8a86 namespace=k8s.io
Mar 4 02:15:37.437612 containerd[1512]: time="2026-03-04T02:15:37.437364144Z" level=warning msg="cleaning up after shim disconnected" id=2f344f335e06895e5c2463c76b43b66f097ec6f7c0cb76b09ec91445fdee8a86 namespace=k8s.io
Mar 4 02:15:37.437612 containerd[1512]: time="2026-03-04T02:15:37.437379679Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 02:15:37.462611 containerd[1512]: time="2026-03-04T02:15:37.462387096Z" level=info msg="TearDown network for sandbox \"2f344f335e06895e5c2463c76b43b66f097ec6f7c0cb76b09ec91445fdee8a86\" successfully"
Mar 4 02:15:37.462611 containerd[1512]: time="2026-03-04T02:15:37.462455098Z" level=info msg="StopPodSandbox for \"2f344f335e06895e5c2463c76b43b66f097ec6f7c0cb76b09ec91445fdee8a86\" returns successfully"
Mar 4 02:15:37.528383 kubelet[2669]: I0304 02:15:37.527914 2669 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/9e3f2bf0-b0dd-4fa0-88aa-46b101afe4b8-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e3f2bf0-b0dd-4fa0-88aa-46b101afe4b8-cilium-config-path\") pod \"9e3f2bf0-b0dd-4fa0-88aa-46b101afe4b8\" (UID: \"9e3f2bf0-b0dd-4fa0-88aa-46b101afe4b8\") "
Mar 4 02:15:37.528383 kubelet[2669]: I0304 02:15:37.528012 2669 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-bpf-maps\") pod \"0b89d62d-ed62-44db-87a6-a787e04c7162\" (UID: \"0b89d62d-ed62-44db-87a6-a787e04c7162\") "
Mar 4 02:15:37.528383 kubelet[2669]: I0304 02:15:37.528048 2669 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-lib-modules\") pod \"0b89d62d-ed62-44db-87a6-a787e04c7162\" (UID: \"0b89d62d-ed62-44db-87a6-a787e04c7162\") "
Mar 4 02:15:37.528383 kubelet[2669]: I0304 02:15:37.528076 2669 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-cilium-run\") pod \"0b89d62d-ed62-44db-87a6-a787e04c7162\" (UID: \"0b89d62d-ed62-44db-87a6-a787e04c7162\") "
Mar 4 02:15:37.528383 kubelet[2669]: I0304 02:15:37.528128 2669 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/0b89d62d-ed62-44db-87a6-a787e04c7162-kube-api-access-ph2xm\" (UniqueName: \"kubernetes.io/projected/0b89d62d-ed62-44db-87a6-a787e04c7162-kube-api-access-ph2xm\") pod \"0b89d62d-ed62-44db-87a6-a787e04c7162\" (UID: \"0b89d62d-ed62-44db-87a6-a787e04c7162\") "
Mar 4 02:15:37.528867 kubelet[2669]: I0304 02:15:37.528165 2669 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-etc-cni-netd\") pod \"0b89d62d-ed62-44db-87a6-a787e04c7162\" (UID: \"0b89d62d-ed62-44db-87a6-a787e04c7162\") "
Mar 4 02:15:37.528867 kubelet[2669]: I0304 02:15:37.528193 2669 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-cni-path\" (UniqueName: \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-cni-path\") pod \"0b89d62d-ed62-44db-87a6-a787e04c7162\" (UID: \"0b89d62d-ed62-44db-87a6-a787e04c7162\") "
Mar 4 02:15:37.528867 kubelet[2669]: I0304 02:15:37.528228 2669 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/9e3f2bf0-b0dd-4fa0-88aa-46b101afe4b8-kube-api-access-pqxl4\" (UniqueName: \"kubernetes.io/projected/9e3f2bf0-b0dd-4fa0-88aa-46b101afe4b8-kube-api-access-pqxl4\") pod \"9e3f2bf0-b0dd-4fa0-88aa-46b101afe4b8\" (UID: \"9e3f2bf0-b0dd-4fa0-88aa-46b101afe4b8\") "
Mar 4 02:15:37.529521 kubelet[2669]: I0304 02:15:37.528258 2669 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-hostproc\" (UniqueName: \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-hostproc\") pod \"0b89d62d-ed62-44db-87a6-a787e04c7162\" (UID: \"0b89d62d-ed62-44db-87a6-a787e04c7162\") "
Mar 4 02:15:37.529521 kubelet[2669]: I0304 02:15:37.529115 2669 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-host-proc-sys-net\") pod \"0b89d62d-ed62-44db-87a6-a787e04c7162\" (UID: \"0b89d62d-ed62-44db-87a6-a787e04c7162\") "
Mar 4 02:15:37.529521 kubelet[2669]: I0304 02:15:37.529162 2669 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/0b89d62d-ed62-44db-87a6-a787e04c7162-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b89d62d-ed62-44db-87a6-a787e04c7162-cilium-config-path\") pod \"0b89d62d-ed62-44db-87a6-a787e04c7162\" (UID: \"0b89d62d-ed62-44db-87a6-a787e04c7162\") "
Mar 4 02:15:37.529521 kubelet[2669]: I0304 02:15:37.529198 2669 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/0b89d62d-ed62-44db-87a6-a787e04c7162-hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b89d62d-ed62-44db-87a6-a787e04c7162-hubble-tls\") pod \"0b89d62d-ed62-44db-87a6-a787e04c7162\" (UID: \"0b89d62d-ed62-44db-87a6-a787e04c7162\") "
Mar 4 02:15:37.529521 kubelet[2669]: I0304 02:15:37.529235 2669 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/0b89d62d-ed62-44db-87a6-a787e04c7162-clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b89d62d-ed62-44db-87a6-a787e04c7162-clustermesh-secrets\") pod \"0b89d62d-ed62-44db-87a6-a787e04c7162\" (UID: \"0b89d62d-ed62-44db-87a6-a787e04c7162\") "
Mar 4 02:15:37.529791 kubelet[2669]: I0304 02:15:37.529268 2669 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-xtables-lock\") pod \"0b89d62d-ed62-44db-87a6-a787e04c7162\" (UID: \"0b89d62d-ed62-44db-87a6-a787e04c7162\") "
Mar 4 02:15:37.529791 kubelet[2669]: I0304 02:15:37.529317 2669 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-host-proc-sys-kernel\") pod \"0b89d62d-ed62-44db-87a6-a787e04c7162\" (UID: \"0b89d62d-ed62-44db-87a6-a787e04c7162\") "
Mar 4 02:15:37.529791 kubelet[2669]: I0304 02:15:37.529355 2669 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-cilium-cgroup\") pod \"0b89d62d-ed62-44db-87a6-a787e04c7162\" (UID: \"0b89d62d-ed62-44db-87a6-a787e04c7162\") "
Mar 4 02:15:37.538779 kubelet[2669]: I0304 02:15:37.536920 2669 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-cilium-cgroup" pod "0b89d62d-ed62-44db-87a6-a787e04c7162" (UID: "0b89d62d-ed62-44db-87a6-a787e04c7162"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 4 02:15:37.539625 kubelet[2669]: I0304 02:15:37.538019 2669 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e3f2bf0-b0dd-4fa0-88aa-46b101afe4b8-cilium-config-path" pod "9e3f2bf0-b0dd-4fa0-88aa-46b101afe4b8" (UID: "9e3f2bf0-b0dd-4fa0-88aa-46b101afe4b8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 4 02:15:37.539625 kubelet[2669]: I0304 02:15:37.538984 2669 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-hostproc" pod "0b89d62d-ed62-44db-87a6-a787e04c7162" (UID: "0b89d62d-ed62-44db-87a6-a787e04c7162"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 4 02:15:37.539625 kubelet[2669]: I0304 02:15:37.539025 2669 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-host-proc-sys-net" pod "0b89d62d-ed62-44db-87a6-a787e04c7162" (UID: "0b89d62d-ed62-44db-87a6-a787e04c7162"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 4 02:15:37.543163 kubelet[2669]: I0304 02:15:37.543020 2669 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b89d62d-ed62-44db-87a6-a787e04c7162-cilium-config-path" pod "0b89d62d-ed62-44db-87a6-a787e04c7162" (UID: "0b89d62d-ed62-44db-87a6-a787e04c7162"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 4 02:15:37.548511 kubelet[2669]: I0304 02:15:37.547979 2669 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e3f2bf0-b0dd-4fa0-88aa-46b101afe4b8-kube-api-access-pqxl4" pod "9e3f2bf0-b0dd-4fa0-88aa-46b101afe4b8" (UID: "9e3f2bf0-b0dd-4fa0-88aa-46b101afe4b8"). InnerVolumeSpecName "kube-api-access-pqxl4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 4 02:15:37.548511 kubelet[2669]: I0304 02:15:37.548037 2669 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b89d62d-ed62-44db-87a6-a787e04c7162-hubble-tls" pod "0b89d62d-ed62-44db-87a6-a787e04c7162" (UID: "0b89d62d-ed62-44db-87a6-a787e04c7162"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 4 02:15:37.548511 kubelet[2669]: I0304 02:15:37.548080 2669 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-bpf-maps" pod "0b89d62d-ed62-44db-87a6-a787e04c7162" (UID: "0b89d62d-ed62-44db-87a6-a787e04c7162"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 4 02:15:37.548511 kubelet[2669]: I0304 02:15:37.548114 2669 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-lib-modules" pod "0b89d62d-ed62-44db-87a6-a787e04c7162" (UID: "0b89d62d-ed62-44db-87a6-a787e04c7162"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 4 02:15:37.548511 kubelet[2669]: I0304 02:15:37.548151 2669 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-cilium-run" pod "0b89d62d-ed62-44db-87a6-a787e04c7162" (UID: "0b89d62d-ed62-44db-87a6-a787e04c7162"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 4 02:15:37.550296 kubelet[2669]: I0304 02:15:37.550230 2669 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b89d62d-ed62-44db-87a6-a787e04c7162-clustermesh-secrets" pod "0b89d62d-ed62-44db-87a6-a787e04c7162" (UID: "0b89d62d-ed62-44db-87a6-a787e04c7162"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 4 02:15:37.550396 kubelet[2669]: I0304 02:15:37.550351 2669 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-xtables-lock" pod "0b89d62d-ed62-44db-87a6-a787e04c7162" (UID: "0b89d62d-ed62-44db-87a6-a787e04c7162"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 4 02:15:37.550796 kubelet[2669]: I0304 02:15:37.550395 2669 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-host-proc-sys-kernel" pod "0b89d62d-ed62-44db-87a6-a787e04c7162" (UID: "0b89d62d-ed62-44db-87a6-a787e04c7162"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 4 02:15:37.550796 kubelet[2669]: I0304 02:15:37.550464 2669 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-etc-cni-netd" pod "0b89d62d-ed62-44db-87a6-a787e04c7162" (UID: "0b89d62d-ed62-44db-87a6-a787e04c7162"). InnerVolumeSpecName "etc-cni-netd".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 4 02:15:37.550796 kubelet[2669]: I0304 02:15:37.550529 2669 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-cni-path" pod "0b89d62d-ed62-44db-87a6-a787e04c7162" (UID: "0b89d62d-ed62-44db-87a6-a787e04c7162"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 4 02:15:37.552224 kubelet[2669]: I0304 02:15:37.552109 2669 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b89d62d-ed62-44db-87a6-a787e04c7162-kube-api-access-ph2xm" pod "0b89d62d-ed62-44db-87a6-a787e04c7162" (UID: "0b89d62d-ed62-44db-87a6-a787e04c7162"). InnerVolumeSpecName "kube-api-access-ph2xm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 4 02:15:37.630825 kubelet[2669]: I0304 02:15:37.630395 2669 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-cilium-cgroup\") on node \"srv-323j1.gb1.brightbox.com\" DevicePath \"\"" Mar 4 02:15:37.630825 kubelet[2669]: I0304 02:15:37.630460 2669 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e3f2bf0-b0dd-4fa0-88aa-46b101afe4b8-cilium-config-path\") on node \"srv-323j1.gb1.brightbox.com\" DevicePath \"\"" Mar 4 02:15:37.630825 kubelet[2669]: I0304 02:15:37.630480 2669 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-bpf-maps\") on node \"srv-323j1.gb1.brightbox.com\" DevicePath \"\"" Mar 4 02:15:37.630825 kubelet[2669]: I0304 02:15:37.630500 2669 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-lib-modules\") on node \"srv-323j1.gb1.brightbox.com\" DevicePath \"\"" Mar 4 
02:15:37.630825 kubelet[2669]: I0304 02:15:37.630516 2669 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-cilium-run\") on node \"srv-323j1.gb1.brightbox.com\" DevicePath \"\"" Mar 4 02:15:37.630825 kubelet[2669]: I0304 02:15:37.630531 2669 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ph2xm\" (UniqueName: \"kubernetes.io/projected/0b89d62d-ed62-44db-87a6-a787e04c7162-kube-api-access-ph2xm\") on node \"srv-323j1.gb1.brightbox.com\" DevicePath \"\"" Mar 4 02:15:37.630825 kubelet[2669]: I0304 02:15:37.630546 2669 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-etc-cni-netd\") on node \"srv-323j1.gb1.brightbox.com\" DevicePath \"\"" Mar 4 02:15:37.630825 kubelet[2669]: I0304 02:15:37.630561 2669 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-cni-path\") on node \"srv-323j1.gb1.brightbox.com\" DevicePath \"\"" Mar 4 02:15:37.631497 kubelet[2669]: I0304 02:15:37.630576 2669 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pqxl4\" (UniqueName: \"kubernetes.io/projected/9e3f2bf0-b0dd-4fa0-88aa-46b101afe4b8-kube-api-access-pqxl4\") on node \"srv-323j1.gb1.brightbox.com\" DevicePath \"\"" Mar 4 02:15:37.631497 kubelet[2669]: I0304 02:15:37.630590 2669 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-hostproc\") on node \"srv-323j1.gb1.brightbox.com\" DevicePath \"\"" Mar 4 02:15:37.631497 kubelet[2669]: I0304 02:15:37.630612 2669 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-host-proc-sys-net\") on node \"srv-323j1.gb1.brightbox.com\" 
DevicePath \"\"" Mar 4 02:15:37.631497 kubelet[2669]: I0304 02:15:37.630630 2669 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b89d62d-ed62-44db-87a6-a787e04c7162-cilium-config-path\") on node \"srv-323j1.gb1.brightbox.com\" DevicePath \"\"" Mar 4 02:15:37.631497 kubelet[2669]: I0304 02:15:37.630728 2669 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b89d62d-ed62-44db-87a6-a787e04c7162-hubble-tls\") on node \"srv-323j1.gb1.brightbox.com\" DevicePath \"\"" Mar 4 02:15:37.631497 kubelet[2669]: I0304 02:15:37.630748 2669 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b89d62d-ed62-44db-87a6-a787e04c7162-clustermesh-secrets\") on node \"srv-323j1.gb1.brightbox.com\" DevicePath \"\"" Mar 4 02:15:37.631497 kubelet[2669]: I0304 02:15:37.630763 2669 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-xtables-lock\") on node \"srv-323j1.gb1.brightbox.com\" DevicePath \"\"" Mar 4 02:15:37.631497 kubelet[2669]: I0304 02:15:37.630781 2669 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b89d62d-ed62-44db-87a6-a787e04c7162-host-proc-sys-kernel\") on node \"srv-323j1.gb1.brightbox.com\" DevicePath \"\"" Mar 4 02:15:38.028477 kubelet[2669]: I0304 02:15:38.028427 2669 scope.go:122] "RemoveContainer" containerID="3772056441a3d5dce360adf4c8f5efa134fb95c5f114b1b744b63955dff0acf0" Mar 4 02:15:38.041516 systemd[1]: Removed slice kubepods-besteffort-pod9e3f2bf0_b0dd_4fa0_88aa_46b101afe4b8.slice - libcontainer container kubepods-besteffort-pod9e3f2bf0_b0dd_4fa0_88aa_46b101afe4b8.slice. 
Mar 4 02:15:38.045287 containerd[1512]: time="2026-03-04T02:15:38.043022414Z" level=info msg="RemoveContainer for \"3772056441a3d5dce360adf4c8f5efa134fb95c5f114b1b744b63955dff0acf0\"" Mar 4 02:15:38.063759 containerd[1512]: time="2026-03-04T02:15:38.063512226Z" level=info msg="RemoveContainer for \"3772056441a3d5dce360adf4c8f5efa134fb95c5f114b1b744b63955dff0acf0\" returns successfully" Mar 4 02:15:38.080919 kubelet[2669]: I0304 02:15:38.079662 2669 scope.go:122] "RemoveContainer" containerID="3772056441a3d5dce360adf4c8f5efa134fb95c5f114b1b744b63955dff0acf0" Mar 4 02:15:38.086087 systemd[1]: Removed slice kubepods-burstable-pod0b89d62d_ed62_44db_87a6_a787e04c7162.slice - libcontainer container kubepods-burstable-pod0b89d62d_ed62_44db_87a6_a787e04c7162.slice. Mar 4 02:15:38.086777 systemd[1]: kubepods-burstable-pod0b89d62d_ed62_44db_87a6_a787e04c7162.slice: Consumed 10.748s CPU time. Mar 4 02:15:38.094345 containerd[1512]: time="2026-03-04T02:15:38.084641316Z" level=error msg="ContainerStatus for \"3772056441a3d5dce360adf4c8f5efa134fb95c5f114b1b744b63955dff0acf0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3772056441a3d5dce360adf4c8f5efa134fb95c5f114b1b744b63955dff0acf0\": not found" Mar 4 02:15:38.097329 kubelet[2669]: E0304 02:15:38.097273 2669 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3772056441a3d5dce360adf4c8f5efa134fb95c5f114b1b744b63955dff0acf0\": not found" containerID="3772056441a3d5dce360adf4c8f5efa134fb95c5f114b1b744b63955dff0acf0" Mar 4 02:15:38.100627 kubelet[2669]: I0304 02:15:38.100264 2669 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3772056441a3d5dce360adf4c8f5efa134fb95c5f114b1b744b63955dff0acf0"} err="failed to get container status \"3772056441a3d5dce360adf4c8f5efa134fb95c5f114b1b744b63955dff0acf0\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"3772056441a3d5dce360adf4c8f5efa134fb95c5f114b1b744b63955dff0acf0\": not found" Mar 4 02:15:38.100627 kubelet[2669]: I0304 02:15:38.100554 2669 scope.go:122] "RemoveContainer" containerID="b0a7fe526fc7760ef0e9bd660053b9519d0b581d0808ad73f595e81e91d140f6" Mar 4 02:15:38.103777 containerd[1512]: time="2026-03-04T02:15:38.103422018Z" level=info msg="RemoveContainer for \"b0a7fe526fc7760ef0e9bd660053b9519d0b581d0808ad73f595e81e91d140f6\"" Mar 4 02:15:38.112251 containerd[1512]: time="2026-03-04T02:15:38.112202409Z" level=info msg="RemoveContainer for \"b0a7fe526fc7760ef0e9bd660053b9519d0b581d0808ad73f595e81e91d140f6\" returns successfully" Mar 4 02:15:38.113216 kubelet[2669]: I0304 02:15:38.113178 2669 scope.go:122] "RemoveContainer" containerID="fb20c1d06b60194f1dbbe028647c4db67e71c6de0898a0b80dbdece4f5927384" Mar 4 02:15:38.117092 containerd[1512]: time="2026-03-04T02:15:38.117046536Z" level=info msg="RemoveContainer for \"fb20c1d06b60194f1dbbe028647c4db67e71c6de0898a0b80dbdece4f5927384\"" Mar 4 02:15:38.126608 containerd[1512]: time="2026-03-04T02:15:38.126434045Z" level=info msg="RemoveContainer for \"fb20c1d06b60194f1dbbe028647c4db67e71c6de0898a0b80dbdece4f5927384\" returns successfully" Mar 4 02:15:38.127246 kubelet[2669]: I0304 02:15:38.127201 2669 scope.go:122] "RemoveContainer" containerID="554af391072f2fcc0ff193b8b6160404bf985e9c0ea8f94191c77addefd050c2" Mar 4 02:15:38.132442 containerd[1512]: time="2026-03-04T02:15:38.132387607Z" level=info msg="RemoveContainer for \"554af391072f2fcc0ff193b8b6160404bf985e9c0ea8f94191c77addefd050c2\"" Mar 4 02:15:38.139452 containerd[1512]: time="2026-03-04T02:15:38.139385449Z" level=info msg="RemoveContainer for \"554af391072f2fcc0ff193b8b6160404bf985e9c0ea8f94191c77addefd050c2\" returns successfully" Mar 4 02:15:38.140294 kubelet[2669]: I0304 02:15:38.139800 2669 scope.go:122] "RemoveContainer" containerID="fbc7ac2ae96c654afe53e6a6cee0ac53ca52ea4963f93125a6371340bd05dad7" Mar 4 
02:15:38.141498 containerd[1512]: time="2026-03-04T02:15:38.141450826Z" level=info msg="RemoveContainer for \"fbc7ac2ae96c654afe53e6a6cee0ac53ca52ea4963f93125a6371340bd05dad7\"" Mar 4 02:15:38.145410 containerd[1512]: time="2026-03-04T02:15:38.145366493Z" level=info msg="RemoveContainer for \"fbc7ac2ae96c654afe53e6a6cee0ac53ca52ea4963f93125a6371340bd05dad7\" returns successfully" Mar 4 02:15:38.145850 kubelet[2669]: I0304 02:15:38.145693 2669 scope.go:122] "RemoveContainer" containerID="10da9befcbfd55c67c5720d2135fb565d278252da0d9bcd5e85da05d566d141f" Mar 4 02:15:38.148377 containerd[1512]: time="2026-03-04T02:15:38.147992933Z" level=info msg="RemoveContainer for \"10da9befcbfd55c67c5720d2135fb565d278252da0d9bcd5e85da05d566d141f\"" Mar 4 02:15:38.153541 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f344f335e06895e5c2463c76b43b66f097ec6f7c0cb76b09ec91445fdee8a86-rootfs.mount: Deactivated successfully. Mar 4 02:15:38.153911 containerd[1512]: time="2026-03-04T02:15:38.153858606Z" level=info msg="RemoveContainer for \"10da9befcbfd55c67c5720d2135fb565d278252da0d9bcd5e85da05d566d141f\" returns successfully" Mar 4 02:15:38.154016 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f344f335e06895e5c2463c76b43b66f097ec6f7c0cb76b09ec91445fdee8a86-shm.mount: Deactivated successfully. Mar 4 02:15:38.154162 systemd[1]: var-lib-kubelet-pods-0b89d62d\x2ded62\x2d44db\x2d87a6\x2da787e04c7162-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 4 02:15:38.154282 systemd[1]: var-lib-kubelet-pods-0b89d62d\x2ded62\x2d44db\x2d87a6\x2da787e04c7162-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 4 02:15:38.154400 systemd[1]: var-lib-kubelet-pods-9e3f2bf0\x2db0dd\x2d4fa0\x2d88aa\x2d46b101afe4b8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpqxl4.mount: Deactivated successfully. 
Mar 4 02:15:38.154524 systemd[1]: var-lib-kubelet-pods-0b89d62d\x2ded62\x2d44db\x2d87a6\x2da787e04c7162-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dph2xm.mount: Deactivated successfully. Mar 4 02:15:38.156762 kubelet[2669]: I0304 02:15:38.156164 2669 scope.go:122] "RemoveContainer" containerID="b0a7fe526fc7760ef0e9bd660053b9519d0b581d0808ad73f595e81e91d140f6" Mar 4 02:15:38.157091 containerd[1512]: time="2026-03-04T02:15:38.157043916Z" level=error msg="ContainerStatus for \"b0a7fe526fc7760ef0e9bd660053b9519d0b581d0808ad73f595e81e91d140f6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b0a7fe526fc7760ef0e9bd660053b9519d0b581d0808ad73f595e81e91d140f6\": not found" Mar 4 02:15:38.157890 kubelet[2669]: E0304 02:15:38.157618 2669 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b0a7fe526fc7760ef0e9bd660053b9519d0b581d0808ad73f595e81e91d140f6\": not found" containerID="b0a7fe526fc7760ef0e9bd660053b9519d0b581d0808ad73f595e81e91d140f6" Mar 4 02:15:38.158317 kubelet[2669]: I0304 02:15:38.157925 2669 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b0a7fe526fc7760ef0e9bd660053b9519d0b581d0808ad73f595e81e91d140f6"} err="failed to get container status \"b0a7fe526fc7760ef0e9bd660053b9519d0b581d0808ad73f595e81e91d140f6\": rpc error: code = NotFound desc = an error occurred when try to find container \"b0a7fe526fc7760ef0e9bd660053b9519d0b581d0808ad73f595e81e91d140f6\": not found" Mar 4 02:15:38.158317 kubelet[2669]: I0304 02:15:38.158093 2669 scope.go:122] "RemoveContainer" containerID="fb20c1d06b60194f1dbbe028647c4db67e71c6de0898a0b80dbdece4f5927384" Mar 4 02:15:38.160185 containerd[1512]: time="2026-03-04T02:15:38.159610626Z" level=error msg="ContainerStatus for \"fb20c1d06b60194f1dbbe028647c4db67e71c6de0898a0b80dbdece4f5927384\" failed" error="rpc error: code = 
NotFound desc = an error occurred when try to find container \"fb20c1d06b60194f1dbbe028647c4db67e71c6de0898a0b80dbdece4f5927384\": not found" Mar 4 02:15:38.160713 kubelet[2669]: E0304 02:15:38.160427 2669 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fb20c1d06b60194f1dbbe028647c4db67e71c6de0898a0b80dbdece4f5927384\": not found" containerID="fb20c1d06b60194f1dbbe028647c4db67e71c6de0898a0b80dbdece4f5927384" Mar 4 02:15:38.160965 kubelet[2669]: I0304 02:15:38.160595 2669 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fb20c1d06b60194f1dbbe028647c4db67e71c6de0898a0b80dbdece4f5927384"} err="failed to get container status \"fb20c1d06b60194f1dbbe028647c4db67e71c6de0898a0b80dbdece4f5927384\": rpc error: code = NotFound desc = an error occurred when try to find container \"fb20c1d06b60194f1dbbe028647c4db67e71c6de0898a0b80dbdece4f5927384\": not found" Mar 4 02:15:38.161279 kubelet[2669]: I0304 02:15:38.160879 2669 scope.go:122] "RemoveContainer" containerID="554af391072f2fcc0ff193b8b6160404bf985e9c0ea8f94191c77addefd050c2" Mar 4 02:15:38.161568 containerd[1512]: time="2026-03-04T02:15:38.161523063Z" level=error msg="ContainerStatus for \"554af391072f2fcc0ff193b8b6160404bf985e9c0ea8f94191c77addefd050c2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"554af391072f2fcc0ff193b8b6160404bf985e9c0ea8f94191c77addefd050c2\": not found" Mar 4 02:15:38.161786 kubelet[2669]: E0304 02:15:38.161751 2669 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"554af391072f2fcc0ff193b8b6160404bf985e9c0ea8f94191c77addefd050c2\": not found" containerID="554af391072f2fcc0ff193b8b6160404bf985e9c0ea8f94191c77addefd050c2" Mar 4 02:15:38.161896 kubelet[2669]: I0304 02:15:38.161793 2669 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"554af391072f2fcc0ff193b8b6160404bf985e9c0ea8f94191c77addefd050c2"} err="failed to get container status \"554af391072f2fcc0ff193b8b6160404bf985e9c0ea8f94191c77addefd050c2\": rpc error: code = NotFound desc = an error occurred when try to find container \"554af391072f2fcc0ff193b8b6160404bf985e9c0ea8f94191c77addefd050c2\": not found" Mar 4 02:15:38.161896 kubelet[2669]: I0304 02:15:38.161822 2669 scope.go:122] "RemoveContainer" containerID="fbc7ac2ae96c654afe53e6a6cee0ac53ca52ea4963f93125a6371340bd05dad7" Mar 4 02:15:38.162165 containerd[1512]: time="2026-03-04T02:15:38.162122184Z" level=error msg="ContainerStatus for \"fbc7ac2ae96c654afe53e6a6cee0ac53ca52ea4963f93125a6371340bd05dad7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fbc7ac2ae96c654afe53e6a6cee0ac53ca52ea4963f93125a6371340bd05dad7\": not found" Mar 4 02:15:38.162513 kubelet[2669]: E0304 02:15:38.162400 2669 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fbc7ac2ae96c654afe53e6a6cee0ac53ca52ea4963f93125a6371340bd05dad7\": not found" containerID="fbc7ac2ae96c654afe53e6a6cee0ac53ca52ea4963f93125a6371340bd05dad7" Mar 4 02:15:38.162719 kubelet[2669]: I0304 02:15:38.162596 2669 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fbc7ac2ae96c654afe53e6a6cee0ac53ca52ea4963f93125a6371340bd05dad7"} err="failed to get container status \"fbc7ac2ae96c654afe53e6a6cee0ac53ca52ea4963f93125a6371340bd05dad7\": rpc error: code = NotFound desc = an error occurred when try to find container \"fbc7ac2ae96c654afe53e6a6cee0ac53ca52ea4963f93125a6371340bd05dad7\": not found" Mar 4 02:15:38.162719 kubelet[2669]: I0304 02:15:38.162623 2669 scope.go:122] "RemoveContainer" containerID="10da9befcbfd55c67c5720d2135fb565d278252da0d9bcd5e85da05d566d141f" Mar 4 02:15:38.163179 
containerd[1512]: time="2026-03-04T02:15:38.163135150Z" level=error msg="ContainerStatus for \"10da9befcbfd55c67c5720d2135fb565d278252da0d9bcd5e85da05d566d141f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"10da9befcbfd55c67c5720d2135fb565d278252da0d9bcd5e85da05d566d141f\": not found" Mar 4 02:15:38.163440 kubelet[2669]: E0304 02:15:38.163305 2669 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"10da9befcbfd55c67c5720d2135fb565d278252da0d9bcd5e85da05d566d141f\": not found" containerID="10da9befcbfd55c67c5720d2135fb565d278252da0d9bcd5e85da05d566d141f" Mar 4 02:15:38.163440 kubelet[2669]: I0304 02:15:38.163336 2669 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"10da9befcbfd55c67c5720d2135fb565d278252da0d9bcd5e85da05d566d141f"} err="failed to get container status \"10da9befcbfd55c67c5720d2135fb565d278252da0d9bcd5e85da05d566d141f\": rpc error: code = NotFound desc = an error occurred when try to find container \"10da9befcbfd55c67c5720d2135fb565d278252da0d9bcd5e85da05d566d141f\": not found" Mar 4 02:15:38.631055 kubelet[2669]: E0304 02:15:38.630987 2669 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 4 02:15:39.080810 sshd[4264]: pam_unix(sshd:session): session closed for user core Mar 4 02:15:39.085445 systemd[1]: sshd@24-10.230.63.210:22-20.161.92.111:47124.service: Deactivated successfully. Mar 4 02:15:39.088373 systemd[1]: session-24.scope: Deactivated successfully. Mar 4 02:15:39.088718 systemd[1]: session-24.scope: Consumed 1.518s CPU time. Mar 4 02:15:39.090363 systemd-logind[1490]: Session 24 logged out. Waiting for processes to exit. Mar 4 02:15:39.092718 systemd-logind[1490]: Removed session 24. 
Mar 4 02:15:39.233269 systemd[1]: Started sshd@25-10.230.63.210:22-20.161.92.111:47138.service - OpenSSH per-connection server daemon (20.161.92.111:47138). Mar 4 02:15:39.423004 kubelet[2669]: I0304 02:15:39.422814 2669 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0b89d62d-ed62-44db-87a6-a787e04c7162" path="/var/lib/kubelet/pods/0b89d62d-ed62-44db-87a6-a787e04c7162/volumes" Mar 4 02:15:39.424391 kubelet[2669]: I0304 02:15:39.424354 2669 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9e3f2bf0-b0dd-4fa0-88aa-46b101afe4b8" path="/var/lib/kubelet/pods/9e3f2bf0-b0dd-4fa0-88aa-46b101afe4b8/volumes" Mar 4 02:15:39.828026 sshd[4427]: Accepted publickey for core from 20.161.92.111 port 47138 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI Mar 4 02:15:39.830445 sshd[4427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 02:15:39.837755 systemd-logind[1490]: New session 25 of user core. Mar 4 02:15:39.845099 systemd[1]: Started session-25.scope - Session 25 of User core. 
Mar 4 02:15:41.664026 kubelet[2669]: I0304 02:15:41.662038 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c52c609-1919-46fb-ba45-c8ae83373209-hostproc\") pod \"cilium-v5649\" (UID: \"3c52c609-1919-46fb-ba45-c8ae83373209\") " pod="kube-system/cilium-v5649" Mar 4 02:15:41.664026 kubelet[2669]: I0304 02:15:41.662103 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c52c609-1919-46fb-ba45-c8ae83373209-cilium-config-path\") pod \"cilium-v5649\" (UID: \"3c52c609-1919-46fb-ba45-c8ae83373209\") " pod="kube-system/cilium-v5649" Mar 4 02:15:41.664026 kubelet[2669]: I0304 02:15:41.662141 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3c52c609-1919-46fb-ba45-c8ae83373209-bpf-maps\") pod \"cilium-v5649\" (UID: \"3c52c609-1919-46fb-ba45-c8ae83373209\") " pod="kube-system/cilium-v5649" Mar 4 02:15:41.664026 kubelet[2669]: I0304 02:15:41.662171 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c52c609-1919-46fb-ba45-c8ae83373209-etc-cni-netd\") pod \"cilium-v5649\" (UID: \"3c52c609-1919-46fb-ba45-c8ae83373209\") " pod="kube-system/cilium-v5649" Mar 4 02:15:41.664026 kubelet[2669]: I0304 02:15:41.662207 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3c52c609-1919-46fb-ba45-c8ae83373209-cilium-run\") pod \"cilium-v5649\" (UID: \"3c52c609-1919-46fb-ba45-c8ae83373209\") " pod="kube-system/cilium-v5649" Mar 4 02:15:41.664026 kubelet[2669]: I0304 02:15:41.662249 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c52c609-1919-46fb-ba45-c8ae83373209-xtables-lock\") pod \"cilium-v5649\" (UID: \"3c52c609-1919-46fb-ba45-c8ae83373209\") " pod="kube-system/cilium-v5649" Mar 4 02:15:41.663199 systemd[1]: Created slice kubepods-burstable-pod3c52c609_1919_46fb_ba45_c8ae83373209.slice - libcontainer container kubepods-burstable-pod3c52c609_1919_46fb_ba45_c8ae83373209.slice. Mar 4 02:15:41.665285 kubelet[2669]: I0304 02:15:41.662276 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3c52c609-1919-46fb-ba45-c8ae83373209-clustermesh-secrets\") pod \"cilium-v5649\" (UID: \"3c52c609-1919-46fb-ba45-c8ae83373209\") " pod="kube-system/cilium-v5649" Mar 4 02:15:41.665285 kubelet[2669]: I0304 02:15:41.662306 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3c52c609-1919-46fb-ba45-c8ae83373209-cni-path\") pod \"cilium-v5649\" (UID: \"3c52c609-1919-46fb-ba45-c8ae83373209\") " pod="kube-system/cilium-v5649" Mar 4 02:15:41.665285 kubelet[2669]: I0304 02:15:41.662335 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c52c609-1919-46fb-ba45-c8ae83373209-host-proc-sys-net\") pod \"cilium-v5649\" (UID: \"3c52c609-1919-46fb-ba45-c8ae83373209\") " pod="kube-system/cilium-v5649" Mar 4 02:15:41.665285 kubelet[2669]: I0304 02:15:41.662363 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3c52c609-1919-46fb-ba45-c8ae83373209-cilium-cgroup\") pod \"cilium-v5649\" (UID: \"3c52c609-1919-46fb-ba45-c8ae83373209\") " pod="kube-system/cilium-v5649" Mar 4 02:15:41.665285 kubelet[2669]: I0304 02:15:41.662400 2669 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c52c609-1919-46fb-ba45-c8ae83373209-lib-modules\") pod \"cilium-v5649\" (UID: \"3c52c609-1919-46fb-ba45-c8ae83373209\") " pod="kube-system/cilium-v5649" Mar 4 02:15:41.665285 kubelet[2669]: I0304 02:15:41.662433 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3c52c609-1919-46fb-ba45-c8ae83373209-cilium-ipsec-secrets\") pod \"cilium-v5649\" (UID: \"3c52c609-1919-46fb-ba45-c8ae83373209\") " pod="kube-system/cilium-v5649" Mar 4 02:15:41.665520 kubelet[2669]: I0304 02:15:41.662460 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3c52c609-1919-46fb-ba45-c8ae83373209-host-proc-sys-kernel\") pod \"cilium-v5649\" (UID: \"3c52c609-1919-46fb-ba45-c8ae83373209\") " pod="kube-system/cilium-v5649" Mar 4 02:15:41.665520 kubelet[2669]: I0304 02:15:41.662486 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkwxm\" (UniqueName: \"kubernetes.io/projected/3c52c609-1919-46fb-ba45-c8ae83373209-kube-api-access-vkwxm\") pod \"cilium-v5649\" (UID: \"3c52c609-1919-46fb-ba45-c8ae83373209\") " pod="kube-system/cilium-v5649" Mar 4 02:15:41.665520 kubelet[2669]: I0304 02:15:41.662521 2669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c52c609-1919-46fb-ba45-c8ae83373209-hubble-tls\") pod \"cilium-v5649\" (UID: \"3c52c609-1919-46fb-ba45-c8ae83373209\") " pod="kube-system/cilium-v5649" Mar 4 02:15:41.675528 sshd[4427]: pam_unix(sshd:session): session closed for user core Mar 4 02:15:41.682584 systemd[1]: 
sshd@25-10.230.63.210:22-20.161.92.111:47138.service: Deactivated successfully. Mar 4 02:15:41.688232 systemd[1]: session-25.scope: Deactivated successfully. Mar 4 02:15:41.688473 systemd[1]: session-25.scope: Consumed 1.343s CPU time. Mar 4 02:15:41.691309 systemd-logind[1490]: Session 25 logged out. Waiting for processes to exit. Mar 4 02:15:41.694862 systemd-logind[1490]: Removed session 25. Mar 4 02:15:41.819963 systemd[1]: Started sshd@26-10.230.63.210:22-20.161.92.111:57956.service - OpenSSH per-connection server daemon (20.161.92.111:57956). Mar 4 02:15:41.978630 containerd[1512]: time="2026-03-04T02:15:41.978375767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v5649,Uid:3c52c609-1919-46fb-ba45-c8ae83373209,Namespace:kube-system,Attempt:0,}" Mar 4 02:15:42.077793 containerd[1512]: time="2026-03-04T02:15:42.077552953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 02:15:42.077793 containerd[1512]: time="2026-03-04T02:15:42.077697093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 02:15:42.077793 containerd[1512]: time="2026-03-04T02:15:42.077740082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:15:42.078436 containerd[1512]: time="2026-03-04T02:15:42.077936463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:15:42.106168 systemd[1]: Started cri-containerd-7a8300582150bee1008f19a4a39672ed5e3607504e992f887cb38466881c58a4.scope - libcontainer container 7a8300582150bee1008f19a4a39672ed5e3607504e992f887cb38466881c58a4. 
Mar 4 02:15:42.146405 containerd[1512]: time="2026-03-04T02:15:42.146010152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v5649,Uid:3c52c609-1919-46fb-ba45-c8ae83373209,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a8300582150bee1008f19a4a39672ed5e3607504e992f887cb38466881c58a4\""
Mar 4 02:15:42.156597 containerd[1512]: time="2026-03-04T02:15:42.156383364Z" level=info msg="CreateContainer within sandbox \"7a8300582150bee1008f19a4a39672ed5e3607504e992f887cb38466881c58a4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 4 02:15:42.170213 containerd[1512]: time="2026-03-04T02:15:42.170144044Z" level=info msg="CreateContainer within sandbox \"7a8300582150bee1008f19a4a39672ed5e3607504e992f887cb38466881c58a4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"697e571d6d7e60e2013fef07081cd4723b39d01415780efcddfc9753c421ff4f\""
Mar 4 02:15:42.171443 containerd[1512]: time="2026-03-04T02:15:42.171395419Z" level=info msg="StartContainer for \"697e571d6d7e60e2013fef07081cd4723b39d01415780efcddfc9753c421ff4f\""
Mar 4 02:15:42.209078 systemd[1]: Started cri-containerd-697e571d6d7e60e2013fef07081cd4723b39d01415780efcddfc9753c421ff4f.scope - libcontainer container 697e571d6d7e60e2013fef07081cd4723b39d01415780efcddfc9753c421ff4f.
Mar 4 02:15:42.253566 containerd[1512]: time="2026-03-04T02:15:42.253412933Z" level=info msg="StartContainer for \"697e571d6d7e60e2013fef07081cd4723b39d01415780efcddfc9753c421ff4f\" returns successfully"
Mar 4 02:15:42.275732 systemd[1]: cri-containerd-697e571d6d7e60e2013fef07081cd4723b39d01415780efcddfc9753c421ff4f.scope: Deactivated successfully.
Mar 4 02:15:42.318985 containerd[1512]: time="2026-03-04T02:15:42.318892785Z" level=info msg="shim disconnected" id=697e571d6d7e60e2013fef07081cd4723b39d01415780efcddfc9753c421ff4f namespace=k8s.io
Mar 4 02:15:42.319277 containerd[1512]: time="2026-03-04T02:15:42.319248900Z" level=warning msg="cleaning up after shim disconnected" id=697e571d6d7e60e2013fef07081cd4723b39d01415780efcddfc9753c421ff4f namespace=k8s.io
Mar 4 02:15:42.319378 containerd[1512]: time="2026-03-04T02:15:42.319354713Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 02:15:42.419981 sshd[4444]: Accepted publickey for core from 20.161.92.111 port 57956 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:15:42.421816 sshd[4444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:15:42.429332 systemd-logind[1490]: New session 26 of user core.
Mar 4 02:15:42.435084 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 4 02:15:42.832751 sshd[4444]: pam_unix(sshd:session): session closed for user core
Mar 4 02:15:42.836800 systemd[1]: sshd@26-10.230.63.210:22-20.161.92.111:57956.service: Deactivated successfully.
Mar 4 02:15:42.840478 systemd[1]: session-26.scope: Deactivated successfully.
Mar 4 02:15:42.842621 systemd-logind[1490]: Session 26 logged out. Waiting for processes to exit.
Mar 4 02:15:42.844309 systemd-logind[1490]: Removed session 26.
Mar 4 02:15:42.938656 systemd[1]: Started sshd@27-10.230.63.210:22-20.161.92.111:57972.service - OpenSSH per-connection server daemon (20.161.92.111:57972).
Mar 4 02:15:43.082523 containerd[1512]: time="2026-03-04T02:15:43.082434875Z" level=info msg="CreateContainer within sandbox \"7a8300582150bee1008f19a4a39672ed5e3607504e992f887cb38466881c58a4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 4 02:15:43.115727 containerd[1512]: time="2026-03-04T02:15:43.115525254Z" level=info msg="CreateContainer within sandbox \"7a8300582150bee1008f19a4a39672ed5e3607504e992f887cb38466881c58a4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"22943d7e042b751336e85eb175690a6a5b0765a7401b804df055d1a248c74ef2\""
Mar 4 02:15:43.116941 containerd[1512]: time="2026-03-04T02:15:43.116530256Z" level=info msg="StartContainer for \"22943d7e042b751336e85eb175690a6a5b0765a7401b804df055d1a248c74ef2\""
Mar 4 02:15:43.174133 systemd[1]: Started cri-containerd-22943d7e042b751336e85eb175690a6a5b0765a7401b804df055d1a248c74ef2.scope - libcontainer container 22943d7e042b751336e85eb175690a6a5b0765a7401b804df055d1a248c74ef2.
Mar 4 02:15:43.234014 containerd[1512]: time="2026-03-04T02:15:43.233949060Z" level=info msg="StartContainer for \"22943d7e042b751336e85eb175690a6a5b0765a7401b804df055d1a248c74ef2\" returns successfully"
Mar 4 02:15:43.252061 systemd[1]: cri-containerd-22943d7e042b751336e85eb175690a6a5b0765a7401b804df055d1a248c74ef2.scope: Deactivated successfully.
Mar 4 02:15:43.288352 containerd[1512]: time="2026-03-04T02:15:43.288201420Z" level=info msg="shim disconnected" id=22943d7e042b751336e85eb175690a6a5b0765a7401b804df055d1a248c74ef2 namespace=k8s.io
Mar 4 02:15:43.288352 containerd[1512]: time="2026-03-04T02:15:43.288307457Z" level=warning msg="cleaning up after shim disconnected" id=22943d7e042b751336e85eb175690a6a5b0765a7401b804df055d1a248c74ef2 namespace=k8s.io
Mar 4 02:15:43.289251 containerd[1512]: time="2026-03-04T02:15:43.288324387Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 02:15:43.524885 sshd[4556]: Accepted publickey for core from 20.161.92.111 port 57972 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:15:43.527152 sshd[4556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:15:43.535865 systemd-logind[1490]: New session 27 of user core.
Mar 4 02:15:43.546209 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 4 02:15:43.632604 kubelet[2669]: E0304 02:15:43.632289 2669 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 4 02:15:43.784199 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22943d7e042b751336e85eb175690a6a5b0765a7401b804df055d1a248c74ef2-rootfs.mount: Deactivated successfully.
Mar 4 02:15:44.113351 containerd[1512]: time="2026-03-04T02:15:44.112531980Z" level=info msg="CreateContainer within sandbox \"7a8300582150bee1008f19a4a39672ed5e3607504e992f887cb38466881c58a4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 4 02:15:44.146258 containerd[1512]: time="2026-03-04T02:15:44.145989769Z" level=info msg="CreateContainer within sandbox \"7a8300582150bee1008f19a4a39672ed5e3607504e992f887cb38466881c58a4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"28a6735ef335b9499b2d6b797c69859063ddf2ccca5129b026a6fc957d7f4061\""
Mar 4 02:15:44.147403 containerd[1512]: time="2026-03-04T02:15:44.147346217Z" level=info msg="StartContainer for \"28a6735ef335b9499b2d6b797c69859063ddf2ccca5129b026a6fc957d7f4061\""
Mar 4 02:15:44.198195 systemd[1]: Started cri-containerd-28a6735ef335b9499b2d6b797c69859063ddf2ccca5129b026a6fc957d7f4061.scope - libcontainer container 28a6735ef335b9499b2d6b797c69859063ddf2ccca5129b026a6fc957d7f4061.
Mar 4 02:15:44.250390 containerd[1512]: time="2026-03-04T02:15:44.250325829Z" level=info msg="StartContainer for \"28a6735ef335b9499b2d6b797c69859063ddf2ccca5129b026a6fc957d7f4061\" returns successfully"
Mar 4 02:15:44.258371 systemd[1]: cri-containerd-28a6735ef335b9499b2d6b797c69859063ddf2ccca5129b026a6fc957d7f4061.scope: Deactivated successfully.
Mar 4 02:15:44.295222 containerd[1512]: time="2026-03-04T02:15:44.295052855Z" level=info msg="shim disconnected" id=28a6735ef335b9499b2d6b797c69859063ddf2ccca5129b026a6fc957d7f4061 namespace=k8s.io
Mar 4 02:15:44.295222 containerd[1512]: time="2026-03-04T02:15:44.295139130Z" level=warning msg="cleaning up after shim disconnected" id=28a6735ef335b9499b2d6b797c69859063ddf2ccca5129b026a6fc957d7f4061 namespace=k8s.io
Mar 4 02:15:44.295222 containerd[1512]: time="2026-03-04T02:15:44.295157070Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 02:15:44.784552 systemd[1]: run-containerd-runc-k8s.io-28a6735ef335b9499b2d6b797c69859063ddf2ccca5129b026a6fc957d7f4061-runc.mRrtHR.mount: Deactivated successfully.
Mar 4 02:15:44.784726 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28a6735ef335b9499b2d6b797c69859063ddf2ccca5129b026a6fc957d7f4061-rootfs.mount: Deactivated successfully.
Mar 4 02:15:45.090886 containerd[1512]: time="2026-03-04T02:15:45.090277050Z" level=info msg="CreateContainer within sandbox \"7a8300582150bee1008f19a4a39672ed5e3607504e992f887cb38466881c58a4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 4 02:15:45.116722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3851157308.mount: Deactivated successfully.
Mar 4 02:15:45.120767 containerd[1512]: time="2026-03-04T02:15:45.120694823Z" level=info msg="CreateContainer within sandbox \"7a8300582150bee1008f19a4a39672ed5e3607504e992f887cb38466881c58a4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a5de2d66cfee06a9b0fe6c8ee276d048b76627da8098ab6940a76c8f4b6b8283\""
Mar 4 02:15:45.123016 containerd[1512]: time="2026-03-04T02:15:45.122542402Z" level=info msg="StartContainer for \"a5de2d66cfee06a9b0fe6c8ee276d048b76627da8098ab6940a76c8f4b6b8283\""
Mar 4 02:15:45.177900 systemd[1]: Started cri-containerd-a5de2d66cfee06a9b0fe6c8ee276d048b76627da8098ab6940a76c8f4b6b8283.scope - libcontainer container a5de2d66cfee06a9b0fe6c8ee276d048b76627da8098ab6940a76c8f4b6b8283.
Mar 4 02:15:45.215831 systemd[1]: cri-containerd-a5de2d66cfee06a9b0fe6c8ee276d048b76627da8098ab6940a76c8f4b6b8283.scope: Deactivated successfully.
Mar 4 02:15:45.223220 containerd[1512]: time="2026-03-04T02:15:45.222784896Z" level=info msg="StartContainer for \"a5de2d66cfee06a9b0fe6c8ee276d048b76627da8098ab6940a76c8f4b6b8283\" returns successfully"
Mar 4 02:15:45.254071 containerd[1512]: time="2026-03-04T02:15:45.253993951Z" level=info msg="shim disconnected" id=a5de2d66cfee06a9b0fe6c8ee276d048b76627da8098ab6940a76c8f4b6b8283 namespace=k8s.io
Mar 4 02:15:45.254500 containerd[1512]: time="2026-03-04T02:15:45.254373792Z" level=warning msg="cleaning up after shim disconnected" id=a5de2d66cfee06a9b0fe6c8ee276d048b76627da8098ab6940a76c8f4b6b8283 namespace=k8s.io
Mar 4 02:15:45.254500 containerd[1512]: time="2026-03-04T02:15:45.254426199Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 4 02:15:45.784643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5de2d66cfee06a9b0fe6c8ee276d048b76627da8098ab6940a76c8f4b6b8283-rootfs.mount: Deactivated successfully.
Mar 4 02:15:46.096487 containerd[1512]: time="2026-03-04T02:15:46.096195374Z" level=info msg="CreateContainer within sandbox \"7a8300582150bee1008f19a4a39672ed5e3607504e992f887cb38466881c58a4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 4 02:15:46.161828 containerd[1512]: time="2026-03-04T02:15:46.161758920Z" level=info msg="CreateContainer within sandbox \"7a8300582150bee1008f19a4a39672ed5e3607504e992f887cb38466881c58a4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fc9c2c99c9bbf168bd6f85cbbc979e6f0f5b901c9da8bc35402335be5372da11\""
Mar 4 02:15:46.162746 containerd[1512]: time="2026-03-04T02:15:46.162673909Z" level=info msg="StartContainer for \"fc9c2c99c9bbf168bd6f85cbbc979e6f0f5b901c9da8bc35402335be5372da11\""
Mar 4 02:15:46.206070 systemd[1]: Started cri-containerd-fc9c2c99c9bbf168bd6f85cbbc979e6f0f5b901c9da8bc35402335be5372da11.scope - libcontainer container fc9c2c99c9bbf168bd6f85cbbc979e6f0f5b901c9da8bc35402335be5372da11.
Mar 4 02:15:46.256518 containerd[1512]: time="2026-03-04T02:15:46.255984528Z" level=info msg="StartContainer for \"fc9c2c99c9bbf168bd6f85cbbc979e6f0f5b901c9da8bc35402335be5372da11\" returns successfully"
Mar 4 02:15:46.741039 kubelet[2669]: I0304 02:15:46.737557 2669 setters.go:546] "Node became not ready" node="srv-323j1.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-04T02:15:46Z","lastTransitionTime":"2026-03-04T02:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 4 02:15:47.119164 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 4 02:15:50.578947 systemd[1]: run-containerd-runc-k8s.io-fc9c2c99c9bbf168bd6f85cbbc979e6f0f5b901c9da8bc35402335be5372da11-runc.0vFuYn.mount: Deactivated successfully.
Mar 4 02:15:50.956139 systemd-networkd[1438]: lxc_health: Link UP
Mar 4 02:15:50.991253 systemd-networkd[1438]: lxc_health: Gained carrier
Mar 4 02:15:52.004596 kubelet[2669]: I0304 02:15:52.004459 2669 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-v5649" podStartSLOduration=11.004429791 podStartE2EDuration="11.004429791s" podCreationTimestamp="2026-03-04 02:15:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 02:15:47.140915607 +0000 UTC m=+134.045707626" watchObservedRunningTime="2026-03-04 02:15:52.004429791 +0000 UTC m=+138.909221799"
Mar 4 02:15:52.958308 kubelet[2669]: E0304 02:15:52.958149 2669 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41728->127.0.0.1:36361: write tcp 127.0.0.1:41728->127.0.0.1:36361: write: broken pipe
Mar 4 02:15:53.031324 systemd-networkd[1438]: lxc_health: Gained IPv6LL
Mar 4 02:15:55.093084 systemd[1]: run-containerd-runc-k8s.io-fc9c2c99c9bbf168bd6f85cbbc979e6f0f5b901c9da8bc35402335be5372da11-runc.ssciDn.mount: Deactivated successfully.
Mar 4 02:15:57.326121 systemd[1]: run-containerd-runc-k8s.io-fc9c2c99c9bbf168bd6f85cbbc979e6f0f5b901c9da8bc35402335be5372da11-runc.k1Mxrq.mount: Deactivated successfully.
Mar 4 02:15:57.510614 sshd[4556]: pam_unix(sshd:session): session closed for user core
Mar 4 02:15:57.515811 systemd-logind[1490]: Session 27 logged out. Waiting for processes to exit.
Mar 4 02:15:57.520080 systemd[1]: sshd@27-10.230.63.210:22-20.161.92.111:57972.service: Deactivated successfully.
Mar 4 02:15:57.523805 systemd[1]: session-27.scope: Deactivated successfully.
Mar 4 02:15:57.526964 systemd-logind[1490]: Removed session 27.