Mar 12 05:03:00.052160 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Mar 11 23:23:33 -00 2026
Mar 12 05:03:00.052197 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc
Mar 12 05:03:00.052211 kernel: BIOS-provided physical RAM map:
Mar 12 05:03:00.052227 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 12 05:03:00.052238 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 12 05:03:00.052248 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 12 05:03:00.052260 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Mar 12 05:03:00.052271 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Mar 12 05:03:00.052281 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 12 05:03:00.052292 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 12 05:03:00.052302 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 12 05:03:00.052313 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 12 05:03:00.052329 kernel: NX (Execute Disable) protection: active
Mar 12 05:03:00.052339 kernel: APIC: Static calls initialized
Mar 12 05:03:00.052352 kernel: SMBIOS 2.8 present.
Mar 12 05:03:00.052364 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Mar 12 05:03:00.052376 kernel: Hypervisor detected: KVM
Mar 12 05:03:00.052418 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 12 05:03:00.052432 kernel: kvm-clock: using sched offset of 4447774782 cycles
Mar 12 05:03:00.052444 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 12 05:03:00.052456 kernel: tsc: Detected 2499.998 MHz processor
Mar 12 05:03:00.052468 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 12 05:03:00.052480 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 12 05:03:00.052491 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Mar 12 05:03:00.052503 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 12 05:03:00.052528 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 12 05:03:00.052547 kernel: Using GB pages for direct mapping
Mar 12 05:03:00.052559 kernel: ACPI: Early table checksum verification disabled
Mar 12 05:03:00.052571 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Mar 12 05:03:00.052583 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 05:03:00.052595 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 05:03:00.052606 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 05:03:00.052618 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Mar 12 05:03:00.052629 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 05:03:00.052641 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 05:03:00.052657 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 05:03:00.052669 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 12 05:03:00.052681 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Mar 12 05:03:00.052693 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Mar 12 05:03:00.052704 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Mar 12 05:03:00.052723 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Mar 12 05:03:00.052735 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Mar 12 05:03:00.052752 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Mar 12 05:03:00.052765 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Mar 12 05:03:00.052777 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 12 05:03:00.052789 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 12 05:03:00.052809 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Mar 12 05:03:00.052821 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Mar 12 05:03:00.052833 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Mar 12 05:03:00.052845 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Mar 12 05:03:00.052863 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Mar 12 05:03:00.052875 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Mar 12 05:03:00.052887 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Mar 12 05:03:00.052899 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Mar 12 05:03:00.052911 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Mar 12 05:03:00.052923 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Mar 12 05:03:00.052935 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Mar 12 05:03:00.052947 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Mar 12 05:03:00.052960 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Mar 12 05:03:00.052977 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Mar 12 05:03:00.052990 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 12 05:03:00.053002 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Mar 12 05:03:00.053014 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Mar 12 05:03:00.053027 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Mar 12 05:03:00.053039 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Mar 12 05:03:00.053051 kernel: Zone ranges:
Mar 12 05:03:00.053064 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 12 05:03:00.053076 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Mar 12 05:03:00.053094 kernel: Normal empty
Mar 12 05:03:00.053106 kernel: Movable zone start for each node
Mar 12 05:03:00.053118 kernel: Early memory node ranges
Mar 12 05:03:00.053130 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 12 05:03:00.053142 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Mar 12 05:03:00.053155 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Mar 12 05:03:00.053167 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 12 05:03:00.053179 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 12 05:03:00.053191 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Mar 12 05:03:00.053203 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 12 05:03:00.053220 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 12 05:03:00.053232 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 12 05:03:00.053245 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 12 05:03:00.053257 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 12 05:03:00.053269 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 12 05:03:00.053281 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 12 05:03:00.053293 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 12 05:03:00.053305 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 12 05:03:00.053317 kernel: TSC deadline timer available
Mar 12 05:03:00.053349 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Mar 12 05:03:00.053361 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 12 05:03:00.053380 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 12 05:03:00.053392 kernel: Booting paravirtualized kernel on KVM
Mar 12 05:03:00.053438 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 12 05:03:00.053453 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Mar 12 05:03:00.053465 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u262144
Mar 12 05:03:00.053476 kernel: pcpu-alloc: s196328 r8192 d28952 u262144 alloc=1*2097152
Mar 12 05:03:00.053518 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Mar 12 05:03:00.053538 kernel: kvm-guest: PV spinlocks enabled
Mar 12 05:03:00.053551 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 12 05:03:00.053565 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc
Mar 12 05:03:00.053578 kernel: random: crng init done
Mar 12 05:03:00.053590 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 12 05:03:00.053602 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 12 05:03:00.053614 kernel: Fallback order for Node 0: 0
Mar 12 05:03:00.053626 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Mar 12 05:03:00.053644 kernel: Policy zone: DMA32
Mar 12 05:03:00.053657 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 12 05:03:00.053669 kernel: software IO TLB: area num 16.
Mar 12 05:03:00.053681 kernel: Memory: 1901604K/2096616K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 194752K reserved, 0K cma-reserved)
Mar 12 05:03:00.053694 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Mar 12 05:03:00.053706 kernel: Kernel/User page tables isolation: enabled
Mar 12 05:03:00.053718 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 12 05:03:00.053730 kernel: ftrace: allocated 149 pages with 4 groups
Mar 12 05:03:00.053742 kernel: Dynamic Preempt: voluntary
Mar 12 05:03:00.053760 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 12 05:03:00.053773 kernel: rcu: RCU event tracing is enabled.
Mar 12 05:03:00.053786 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Mar 12 05:03:00.053805 kernel: Trampoline variant of Tasks RCU enabled.
Mar 12 05:03:00.053818 kernel: Rude variant of Tasks RCU enabled.
Mar 12 05:03:00.053843 kernel: Tracing variant of Tasks RCU enabled.
Mar 12 05:03:00.053873 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 12 05:03:00.053885 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Mar 12 05:03:00.053902 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Mar 12 05:03:00.053927 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 12 05:03:00.053939 kernel: Console: colour VGA+ 80x25
Mar 12 05:03:00.053951 kernel: printk: console [tty0] enabled
Mar 12 05:03:00.053977 kernel: printk: console [ttyS0] enabled
Mar 12 05:03:00.053990 kernel: ACPI: Core revision 20230628
Mar 12 05:03:00.054002 kernel: APIC: Switch to symmetric I/O mode setup
Mar 12 05:03:00.054014 kernel: x2apic enabled
Mar 12 05:03:00.054040 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 12 05:03:00.054058 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Mar 12 05:03:00.054071 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Mar 12 05:03:00.054084 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 12 05:03:00.054097 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 12 05:03:00.054110 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 12 05:03:00.054122 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 12 05:03:00.054135 kernel: Spectre V2 : Mitigation: Retpolines
Mar 12 05:03:00.054147 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 12 05:03:00.054160 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Mar 12 05:03:00.054173 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 12 05:03:00.054191 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 12 05:03:00.054203 kernel: MDS: Mitigation: Clear CPU buffers
Mar 12 05:03:00.054216 kernel: MMIO Stale Data: Unknown: No mitigations
Mar 12 05:03:00.054228 kernel: SRBDS: Unknown: Dependent on hypervisor status
Mar 12 05:03:00.054241 kernel: active return thunk: its_return_thunk
Mar 12 05:03:00.054253 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 12 05:03:00.054266 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 12 05:03:00.054279 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 12 05:03:00.054291 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 12 05:03:00.054304 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 12 05:03:00.054316 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 12 05:03:00.054335 kernel: Freeing SMP alternatives memory: 32K
Mar 12 05:03:00.054347 kernel: pid_max: default: 32768 minimum: 301
Mar 12 05:03:00.054360 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 12 05:03:00.054373 kernel: landlock: Up and running.
Mar 12 05:03:00.054385 kernel: SELinux: Initializing.
Mar 12 05:03:00.054398 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 12 05:03:00.054411 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 12 05:03:00.054423 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Mar 12 05:03:00.055708 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 12 05:03:00.055723 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 12 05:03:00.055745 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 12 05:03:00.055758 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Mar 12 05:03:00.055771 kernel: signal: max sigframe size: 1776
Mar 12 05:03:00.055784 kernel: rcu: Hierarchical SRCU implementation.
Mar 12 05:03:00.055798 kernel: rcu: Max phase no-delay instances is 400.
Mar 12 05:03:00.055811 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 12 05:03:00.055823 kernel: smp: Bringing up secondary CPUs ...
Mar 12 05:03:00.055836 kernel: smpboot: x86: Booting SMP configuration:
Mar 12 05:03:00.055849 kernel: .... node #0, CPUs: #1
Mar 12 05:03:00.055867 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Mar 12 05:03:00.055880 kernel: smp: Brought up 1 node, 2 CPUs
Mar 12 05:03:00.055893 kernel: smpboot: Max logical packages: 16
Mar 12 05:03:00.055906 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Mar 12 05:03:00.055919 kernel: devtmpfs: initialized
Mar 12 05:03:00.055931 kernel: x86/mm: Memory block size: 128MB
Mar 12 05:03:00.055944 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 12 05:03:00.055957 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Mar 12 05:03:00.055970 kernel: pinctrl core: initialized pinctrl subsystem
Mar 12 05:03:00.055988 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 12 05:03:00.056001 kernel: audit: initializing netlink subsys (disabled)
Mar 12 05:03:00.056014 kernel: audit: type=2000 audit(1773291778.926:1): state=initialized audit_enabled=0 res=1
Mar 12 05:03:00.056027 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 12 05:03:00.056040 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 12 05:03:00.056052 kernel: cpuidle: using governor menu
Mar 12 05:03:00.056066 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 12 05:03:00.056078 kernel: dca service started, version 1.12.1
Mar 12 05:03:00.056091 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 12 05:03:00.056109 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 12 05:03:00.056122 kernel: PCI: Using configuration type 1 for base access
Mar 12 05:03:00.056136 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 12 05:03:00.056149 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 12 05:03:00.056161 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 12 05:03:00.056174 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 12 05:03:00.056187 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 12 05:03:00.056200 kernel: ACPI: Added _OSI(Module Device)
Mar 12 05:03:00.056213 kernel: ACPI: Added _OSI(Processor Device)
Mar 12 05:03:00.056231 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 12 05:03:00.056244 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 12 05:03:00.056257 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 12 05:03:00.056269 kernel: ACPI: Interpreter enabled
Mar 12 05:03:00.056282 kernel: ACPI: PM: (supports S0 S5)
Mar 12 05:03:00.056295 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 12 05:03:00.056308 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 12 05:03:00.056330 kernel: PCI: Using E820 reservations for host bridge windows
Mar 12 05:03:00.056343 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 12 05:03:00.056361 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 12 05:03:00.056696 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 12 05:03:00.056890 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 12 05:03:00.057069 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 12 05:03:00.057088 kernel: PCI host bridge to bus 0000:00
Mar 12 05:03:00.057286 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 12 05:03:00.059538 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 12 05:03:00.059722 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 12 05:03:00.059913 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Mar 12 05:03:00.060088 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 12 05:03:00.060247 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Mar 12 05:03:00.060406 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 12 05:03:00.061690 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 12 05:03:00.061921 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Mar 12 05:03:00.062099 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Mar 12 05:03:00.062772 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Mar 12 05:03:00.062954 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Mar 12 05:03:00.063128 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 12 05:03:00.064833 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 12 05:03:00.065032 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Mar 12 05:03:00.065258 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 12 05:03:00.065459 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Mar 12 05:03:00.065673 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 12 05:03:00.065851 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Mar 12 05:03:00.066716 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 12 05:03:00.066937 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Mar 12 05:03:00.067182 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 12 05:03:00.067376 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Mar 12 05:03:00.069540 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 12 05:03:00.069728 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Mar 12 05:03:00.069918 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 12 05:03:00.070098 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Mar 12 05:03:00.070316 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 12 05:03:00.071626 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Mar 12 05:03:00.071832 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 12 05:03:00.072018 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 12 05:03:00.072194 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Mar 12 05:03:00.072388 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Mar 12 05:03:00.072600 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Mar 12 05:03:00.072797 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Mar 12 05:03:00.072972 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Mar 12 05:03:00.073144 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Mar 12 05:03:00.073317 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Mar 12 05:03:00.075690 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 12 05:03:00.075885 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 12 05:03:00.076086 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 12 05:03:00.076272 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Mar 12 05:03:00.076531 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Mar 12 05:03:00.076719 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 12 05:03:00.076893 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 12 05:03:00.077081 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Mar 12 05:03:00.077261 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Mar 12 05:03:00.078549 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 12 05:03:00.078737 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 12 05:03:00.078913 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 12 05:03:00.079101 kernel: pci_bus 0000:02: extended config space not accessible
Mar 12 05:03:00.079336 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Mar 12 05:03:00.081442 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Mar 12 05:03:00.081655 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 12 05:03:00.081835 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 12 05:03:00.082034 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 12 05:03:00.082221 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Mar 12 05:03:00.082401 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 12 05:03:00.083633 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 12 05:03:00.083851 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 12 05:03:00.084046 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 12 05:03:00.084259 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Mar 12 05:03:00.085481 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 12 05:03:00.085673 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 12 05:03:00.085846 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 12 05:03:00.086019 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 12 05:03:00.086209 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 12 05:03:00.086402 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 12 05:03:00.088321 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 12 05:03:00.088556 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 12 05:03:00.088733 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 12 05:03:00.088914 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 12 05:03:00.089089 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 12 05:03:00.089263 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 12 05:03:00.089495 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 12 05:03:00.089682 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 12 05:03:00.089862 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 12 05:03:00.090036 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 12 05:03:00.090206 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 12 05:03:00.090376 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 12 05:03:00.092425 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 12 05:03:00.092442 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 12 05:03:00.092455 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 12 05:03:00.092477 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 12 05:03:00.092490 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 12 05:03:00.092525 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 12 05:03:00.092539 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 12 05:03:00.092552 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 12 05:03:00.092565 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 12 05:03:00.092578 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 12 05:03:00.092591 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 12 05:03:00.092605 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 12 05:03:00.092618 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 12 05:03:00.092631 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 12 05:03:00.092651 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 12 05:03:00.092664 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 12 05:03:00.092677 kernel: iommu: Default domain type: Translated
Mar 12 05:03:00.092690 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 12 05:03:00.092704 kernel: PCI: Using ACPI for IRQ routing
Mar 12 05:03:00.092717 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 12 05:03:00.092730 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 12 05:03:00.092743 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Mar 12 05:03:00.092959 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 12 05:03:00.093144 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 12 05:03:00.093315 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 12 05:03:00.093335 kernel: vgaarb: loaded
Mar 12 05:03:00.093349 kernel: clocksource: Switched to clocksource kvm-clock
Mar 12 05:03:00.093362 kernel: VFS: Disk quotas dquot_6.6.0
Mar 12 05:03:00.093375 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 12 05:03:00.095411 kernel: pnp: PnP ACPI init
Mar 12 05:03:00.095622 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 12 05:03:00.095653 kernel: pnp: PnP ACPI: found 5 devices
Mar 12 05:03:00.095667 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 12 05:03:00.095681 kernel: NET: Registered PF_INET protocol family
Mar 12 05:03:00.095694 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 12 05:03:00.095707 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 12 05:03:00.095720 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 12 05:03:00.095733 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 12 05:03:00.095746 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 12 05:03:00.095765 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 12 05:03:00.095779 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 12 05:03:00.095792 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 12 05:03:00.095806 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 12 05:03:00.095819 kernel: NET: Registered PF_XDP protocol family
Mar 12 05:03:00.095990 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Mar 12 05:03:00.096162 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Mar 12 05:03:00.096335 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Mar 12 05:03:00.096583 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Mar 12 05:03:00.096757 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Mar 12 05:03:00.096930 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 12 05:03:00.097099 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 12 05:03:00.097279 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 12 05:03:00.097503 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Mar 12 05:03:00.097725 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Mar 12 05:03:00.097919 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Mar 12 05:03:00.098090 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Mar 12 05:03:00.098263 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Mar 12 05:03:00.100474 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Mar 12 05:03:00.100673 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Mar 12 05:03:00.100847 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Mar 12 05:03:00.101054 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 12 05:03:00.101269 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 12 05:03:00.101466 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 12 05:03:00.101661 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Mar 12 05:03:00.101839 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 12 05:03:00.102025 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 12 05:03:00.102205 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 12 05:03:00.102381 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Mar 12 05:03:00.104735 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 12 05:03:00.105541 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 12 05:03:00.105774 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 12 05:03:00.105962 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Mar 12 05:03:00.106169 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 12 05:03:00.106364 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 12 05:03:00.106593 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 12 05:03:00.106789 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Mar 12 05:03:00.106987 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 12 05:03:00.107170 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 12 05:03:00.107352 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 12 05:03:00.109680 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Mar 12 05:03:00.109874 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 12 05:03:00.110070 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 12 05:03:00.110263 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 12 05:03:00.110492 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Mar 12 05:03:00.110697 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 12 05:03:00.110894 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 12 05:03:00.111091 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 12 05:03:00.111264 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Mar 12 05:03:00.111436 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 12 05:03:00.113320 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 12 05:03:00.113539 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 12 05:03:00.113715 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Mar 12 05:03:00.113905 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 12 05:03:00.114102 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 12 05:03:00.114280 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 12 05:03:00.114439 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 12 05:03:00.114704 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 12 05:03:00.114884 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Mar 12 05:03:00.115039 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 12 05:03:00.115186 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Mar 12 05:03:00.115395 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Mar 12 05:03:00.115619 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Mar 12 05:03:00.115785 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 12 05:03:00.115982 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Mar 12 05:03:00.116148 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Mar 12 05:03:00.116323 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Mar 12 05:03:00.116586 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 12 05:03:00.116766 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Mar 12 05:03:00.116964 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Mar 12 05:03:00.117135 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 12 05:03:00.117310 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Mar 12 05:03:00.117544 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Mar 12 05:03:00.117710 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 12 05:03:00.117895 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Mar 12 05:03:00.118060 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Mar 12 05:03:00.118248 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 12 05:03:00.118439 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Mar 12 05:03:00.118621 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Mar 12 05:03:00.118797 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 12 05:03:00.118979 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Mar 12 05:03:00.119144 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Mar 12 05:03:00.119316 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 12 05:03:00.119552 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Mar 12 05:03:00.119721 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Mar 12 05:03:00.119886 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 12 05:03:00.119915 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 12 05:03:00.119930 kernel: PCI: CLS 0 bytes, default 64
Mar 12 05:03:00.119944 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 12 05:03:00.119958 kernel: software IO TLB: mapped [mem 
0x0000000079800000-0x000000007d800000] (64MB) Mar 12 05:03:00.119981 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 12 05:03:00.119995 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Mar 12 05:03:00.120009 kernel: Initialise system trusted keyrings Mar 12 05:03:00.120023 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Mar 12 05:03:00.120043 kernel: Key type asymmetric registered Mar 12 05:03:00.120057 kernel: Asymmetric key parser 'x509' registered Mar 12 05:03:00.120070 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 12 05:03:00.120084 kernel: io scheduler mq-deadline registered Mar 12 05:03:00.120098 kernel: io scheduler kyber registered Mar 12 05:03:00.120111 kernel: io scheduler bfq registered Mar 12 05:03:00.120299 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Mar 12 05:03:00.120521 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Mar 12 05:03:00.120698 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 05:03:00.120883 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Mar 12 05:03:00.121078 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Mar 12 05:03:00.121320 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 05:03:00.121544 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Mar 12 05:03:00.121735 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Mar 12 05:03:00.121944 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 05:03:00.122141 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Mar 12 05:03:00.122339 kernel: pcieport 0000:00:02.3: AER: enabled 
with IRQ 27 Mar 12 05:03:00.122579 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 05:03:00.122754 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Mar 12 05:03:00.122926 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Mar 12 05:03:00.123107 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 05:03:00.123308 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Mar 12 05:03:00.123548 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Mar 12 05:03:00.123732 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 05:03:00.123909 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Mar 12 05:03:00.124080 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Mar 12 05:03:00.124251 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 05:03:00.124446 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Mar 12 05:03:00.124634 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Mar 12 05:03:00.124806 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 12 05:03:00.124828 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 12 05:03:00.124843 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 12 05:03:00.124858 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 12 05:03:00.124880 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 12 05:03:00.124895 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 12 05:03:00.124909 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 
0x60,0x64 irq 1,12 Mar 12 05:03:00.124923 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 12 05:03:00.124936 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 12 05:03:00.125133 kernel: rtc_cmos 00:03: RTC can wake from S4 Mar 12 05:03:00.125154 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 12 05:03:00.125301 kernel: rtc_cmos 00:03: registered as rtc0 Mar 12 05:03:00.125533 kernel: rtc_cmos 00:03: setting system clock to 2026-03-12T05:02:59 UTC (1773291779) Mar 12 05:03:00.125699 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Mar 12 05:03:00.125720 kernel: intel_pstate: CPU model not supported Mar 12 05:03:00.125734 kernel: NET: Registered PF_INET6 protocol family Mar 12 05:03:00.125748 kernel: Segment Routing with IPv6 Mar 12 05:03:00.125762 kernel: In-situ OAM (IOAM) with IPv6 Mar 12 05:03:00.125775 kernel: NET: Registered PF_PACKET protocol family Mar 12 05:03:00.125789 kernel: Key type dns_resolver registered Mar 12 05:03:00.125802 kernel: IPI shorthand broadcast: enabled Mar 12 05:03:00.125824 kernel: sched_clock: Marking stable (1324024040, 236485698)->(1690637837, -130128099) Mar 12 05:03:00.125838 kernel: registered taskstats version 1 Mar 12 05:03:00.125852 kernel: Loading compiled-in X.509 certificates Mar 12 05:03:00.125866 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 67287262975845098ef9f337a0e8baa9afd38510' Mar 12 05:03:00.125879 kernel: Key type .fscrypt registered Mar 12 05:03:00.125893 kernel: Key type fscrypt-provisioning registered Mar 12 05:03:00.125906 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 12 05:03:00.125920 kernel: ima: Allocated hash algorithm: sha1
Mar 12 05:03:00.125933 kernel: ima: No architecture policies found
Mar 12 05:03:00.125952 kernel: clk: Disabling unused clocks
Mar 12 05:03:00.125966 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 12 05:03:00.125980 kernel: Write protecting the kernel read-only data: 36864k
Mar 12 05:03:00.125994 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 12 05:03:00.126008 kernel: Run /init as init process
Mar 12 05:03:00.126022 kernel: with arguments:
Mar 12 05:03:00.126035 kernel: /init
Mar 12 05:03:00.126049 kernel: with environment:
Mar 12 05:03:00.126062 kernel: HOME=/
Mar 12 05:03:00.126075 kernel: TERM=linux
Mar 12 05:03:00.126108 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 12 05:03:00.126125 systemd[1]: Detected virtualization kvm.
Mar 12 05:03:00.126140 systemd[1]: Detected architecture x86-64.
Mar 12 05:03:00.126161 systemd[1]: Running in initrd.
Mar 12 05:03:00.126175 systemd[1]: No hostname configured, using default hostname.
Mar 12 05:03:00.126189 systemd[1]: Hostname set to .
Mar 12 05:03:00.126204 systemd[1]: Initializing machine ID from VM UUID.
Mar 12 05:03:00.126224 systemd[1]: Queued start job for default target initrd.target.
Mar 12 05:03:00.126238 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 05:03:00.126253 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 05:03:00.126268 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 12 05:03:00.126283 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 12 05:03:00.126298 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 12 05:03:00.126318 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 12 05:03:00.126339 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 12 05:03:00.126354 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 12 05:03:00.126370 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 05:03:00.126384 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 12 05:03:00.126414 systemd[1]: Reached target paths.target - Path Units.
Mar 12 05:03:00.126430 systemd[1]: Reached target slices.target - Slice Units.
Mar 12 05:03:00.126444 systemd[1]: Reached target swap.target - Swaps.
Mar 12 05:03:00.126458 systemd[1]: Reached target timers.target - Timer Units.
Mar 12 05:03:00.126480 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 12 05:03:00.126495 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 12 05:03:00.126521 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 12 05:03:00.126536 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 12 05:03:00.126551 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 05:03:00.126565 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 12 05:03:00.126580 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 05:03:00.126594 systemd[1]: Reached target sockets.target - Socket Units.
Mar 12 05:03:00.126609 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 12 05:03:00.126630 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 12 05:03:00.126644 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 12 05:03:00.126659 systemd[1]: Starting systemd-fsck-usr.service...
Mar 12 05:03:00.126674 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 12 05:03:00.126689 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 12 05:03:00.126703 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 05:03:00.126765 systemd-journald[203]: Collecting audit messages is disabled.
Mar 12 05:03:00.126806 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 12 05:03:00.126821 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 05:03:00.126836 systemd[1]: Finished systemd-fsck-usr.service.
Mar 12 05:03:00.126857 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 12 05:03:00.126872 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 12 05:03:00.126886 kernel: Bridge firewalling registered
Mar 12 05:03:00.126901 systemd-journald[203]: Journal started
Mar 12 05:03:00.126943 systemd-journald[203]: Runtime Journal (/run/log/journal/8d3cdd3ba7374ca6a6c762539cf8b848) is 4.7M, max 38.0M, 33.2M free.
Mar 12 05:03:00.068624 systemd-modules-load[204]: Inserted module 'overlay'
Mar 12 05:03:00.112248 systemd-modules-load[204]: Inserted module 'br_netfilter'
Mar 12 05:03:00.170487 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 12 05:03:00.177121 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 12 05:03:00.178173 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 05:03:00.188662 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 12 05:03:00.190771 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 12 05:03:00.199584 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 12 05:03:00.202550 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 12 05:03:00.216614 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 12 05:03:00.218319 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 12 05:03:00.234532 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 05:03:00.236162 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 12 05:03:00.238192 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 05:03:00.246621 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 12 05:03:00.250624 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 12 05:03:00.264916 dracut-cmdline[237]: dracut-dracut-053
Mar 12 05:03:00.273406 dracut-cmdline[237]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=0e4243d51ac00bffbb09a606c7378a821ca08f30dbebc6b82c4452fcc120d7bc
Mar 12 05:03:00.309709 systemd-resolved[239]: Positive Trust Anchors:
Mar 12 05:03:00.309727 systemd-resolved[239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 12 05:03:00.309783 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 12 05:03:00.314267 systemd-resolved[239]: Defaulting to hostname 'linux'.
Mar 12 05:03:00.317290 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 12 05:03:00.318463 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 12 05:03:00.390458 kernel: SCSI subsystem initialized
Mar 12 05:03:00.402416 kernel: Loading iSCSI transport class v2.0-870.
Mar 12 05:03:00.416441 kernel: iscsi: registered transport (tcp)
Mar 12 05:03:00.444829 kernel: iscsi: registered transport (qla4xxx)
Mar 12 05:03:00.445009 kernel: QLogic iSCSI HBA Driver
Mar 12 05:03:00.506250 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 12 05:03:00.511645 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 12 05:03:00.555018 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 12 05:03:00.555121 kernel: device-mapper: uevent: version 1.0.3
Mar 12 05:03:00.555172 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 12 05:03:00.606502 kernel: raid6: sse2x4 gen() 7270 MB/s
Mar 12 05:03:00.624431 kernel: raid6: sse2x2 gen() 5198 MB/s
Mar 12 05:03:00.643219 kernel: raid6: sse2x1 gen() 5162 MB/s
Mar 12 05:03:00.643331 kernel: raid6: using algorithm sse2x4 gen() 7270 MB/s
Mar 12 05:03:00.662300 kernel: raid6: .... xor() 4604 MB/s, rmw enabled
Mar 12 05:03:00.662443 kernel: raid6: using ssse3x2 recovery algorithm
Mar 12 05:03:00.689431 kernel: xor: automatically using best checksumming function avx
Mar 12 05:03:00.893873 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 12 05:03:00.909406 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 12 05:03:00.916617 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 12 05:03:00.943945 systemd-udevd[422]: Using default interface naming scheme 'v255'.
Mar 12 05:03:00.951533 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 12 05:03:00.961639 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 12 05:03:00.983164 dracut-pre-trigger[428]: rd.md=0: removing MD RAID activation
Mar 12 05:03:01.025372 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 12 05:03:01.037671 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 12 05:03:01.159013 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 12 05:03:01.167736 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 12 05:03:01.194423 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 12 05:03:01.200767 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 12 05:03:01.202414 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 12 05:03:01.204736 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 12 05:03:01.212641 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 12 05:03:01.247253 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 12 05:03:01.307424 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Mar 12 05:03:01.307756 kernel: cryptd: max_cpu_qlen set to 1000
Mar 12 05:03:01.322443 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Mar 12 05:03:01.351936 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 12 05:03:01.367809 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 12 05:03:01.367867 kernel: GPT:17805311 != 125829119
Mar 12 05:03:01.367890 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 12 05:03:01.367909 kernel: GPT:17805311 != 125829119
Mar 12 05:03:01.367927 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 12 05:03:01.367944 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 12 05:03:01.367962 kernel: AVX version of gcm_enc/dec engaged.
Mar 12 05:03:01.367980 kernel: AES CTR mode by8 optimization enabled
Mar 12 05:03:01.367998 kernel: libata version 3.00 loaded.
Mar 12 05:03:01.352152 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 05:03:01.367282 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 12 05:03:01.370788 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 12 05:03:01.370990 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 05:03:01.373794 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 05:03:01.384681 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 12 05:03:01.389139 kernel: ACPI: bus type USB registered
Mar 12 05:03:01.393445 kernel: usbcore: registered new interface driver usbfs
Mar 12 05:03:01.400410 kernel: usbcore: registered new interface driver hub
Mar 12 05:03:01.403450 kernel: usbcore: registered new device driver usb
Mar 12 05:03:01.429434 kernel: ahci 0000:00:1f.2: version 3.0
Mar 12 05:03:01.438729 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 12 05:03:01.457602 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Mar 12 05:03:01.457961 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Mar 12 05:03:01.458181 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Mar 12 05:03:01.460660 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Mar 12 05:03:01.460941 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Mar 12 05:03:01.461184 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Mar 12 05:03:01.462424 kernel: hub 1-0:1.0: USB hub found
Mar 12 05:03:01.462688 kernel: hub 1-0:1.0: 4 ports detected
Mar 12 05:03:01.465325 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Mar 12 05:03:01.466478 kernel: hub 2-0:1.0: USB hub found
Mar 12 05:03:01.466722 kernel: hub 2-0:1.0: 4 ports detected
Mar 12 05:03:01.467726 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 12 05:03:01.577327 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 12 05:03:01.577730 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 12 05:03:01.577948 kernel: BTRFS: device fsid 94537345-7f6b-4b2a-965f-248bd6f0b7eb devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (481)
Mar 12 05:03:01.577971 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (467)
Mar 12 05:03:01.577990 kernel: scsi host0: ahci
Mar 12 05:03:01.578274 kernel: scsi host1: ahci
Mar 12 05:03:01.578555 kernel: scsi host2: ahci
Mar 12 05:03:01.578780 kernel: scsi host3: ahci
Mar 12 05:03:01.578986 kernel: scsi host4: ahci
Mar 12 05:03:01.579203 kernel: scsi host5: ahci
Mar 12 05:03:01.579433 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41
Mar 12 05:03:01.579483 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41
Mar 12 05:03:01.579505 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41
Mar 12 05:03:01.579524 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41
Mar 12 05:03:01.579542 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41
Mar 12 05:03:01.579560 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41
Mar 12 05:03:01.578411 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 05:03:01.586343 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 12 05:03:01.598715 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 12 05:03:01.604661 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 12 05:03:01.605507 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 12 05:03:01.618645 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 12 05:03:01.623622 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 12 05:03:01.629626 disk-uuid[564]: Primary Header is updated.
Mar 12 05:03:01.629626 disk-uuid[564]: Secondary Entries is updated.
Mar 12 05:03:01.629626 disk-uuid[564]: Secondary Header is updated.
Mar 12 05:03:01.636454 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 12 05:03:01.646417 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 12 05:03:01.652645 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 05:03:01.704570 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Mar 12 05:03:01.801432 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Mar 12 05:03:01.803807 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 12 05:03:01.803845 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 12 05:03:01.806515 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 12 05:03:01.808249 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 12 05:03:01.808431 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 12 05:03:01.846449 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 12 05:03:01.853820 kernel: usbcore: registered new interface driver usbhid
Mar 12 05:03:01.853861 kernel: usbhid: USB HID core driver
Mar 12 05:03:01.862448 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Mar 12 05:03:01.862498 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0
Mar 12 05:03:02.648429 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 12 05:03:02.648528 disk-uuid[565]: The operation has completed successfully.
Mar 12 05:03:02.701920 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 12 05:03:02.702121 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 12 05:03:02.726620 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 12 05:03:02.738986 sh[584]: Success
Mar 12 05:03:02.758199 kernel: device-mapper: verity: sha256 using implementation "sha256-avx"
Mar 12 05:03:02.824286 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 12 05:03:02.836936 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 12 05:03:02.840078 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 12 05:03:02.875468 kernel: BTRFS info (device dm-0): first mount of filesystem 94537345-7f6b-4b2a-965f-248bd6f0b7eb
Mar 12 05:03:02.875539 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 12 05:03:02.875575 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 12 05:03:02.876067 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 12 05:03:02.879406 kernel: BTRFS info (device dm-0): using free space tree
Mar 12 05:03:02.890321 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 12 05:03:02.891969 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 12 05:03:02.900662 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 12 05:03:02.905933 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 12 05:03:02.918906 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 05:03:02.918966 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 12 05:03:02.920793 kernel: BTRFS info (device vda6): using free space tree
Mar 12 05:03:02.931425 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 12 05:03:02.945088 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 12 05:03:02.949291 kernel: BTRFS info (device vda6): last unmount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 05:03:02.954676 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 12 05:03:02.961034 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 12 05:03:03.116529 ignition[668]: Ignition 2.19.0
Mar 12 05:03:03.116554 ignition[668]: Stage: fetch-offline
Mar 12 05:03:03.118653 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 12 05:03:03.116652 ignition[668]: no configs at "/usr/lib/ignition/base.d"
Mar 12 05:03:03.120757 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 12 05:03:03.116678 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 12 05:03:03.116895 ignition[668]: parsed url from cmdline: ""
Mar 12 05:03:03.116911 ignition[668]: no config URL provided
Mar 12 05:03:03.116922 ignition[668]: reading system config file "/usr/lib/ignition/user.ign"
Mar 12 05:03:03.116938 ignition[668]: no config at "/usr/lib/ignition/user.ign"
Mar 12 05:03:03.116947 ignition[668]: failed to fetch config: resource requires networking
Mar 12 05:03:03.117310 ignition[668]: Ignition finished successfully
Mar 12 05:03:03.131652 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 12 05:03:03.168159 systemd-networkd[771]: lo: Link UP
Mar 12 05:03:03.168179 systemd-networkd[771]: lo: Gained carrier
Mar 12 05:03:03.170746 systemd-networkd[771]: Enumeration completed
Mar 12 05:03:03.170890 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 12 05:03:03.171866 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 05:03:03.171872 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 12 05:03:03.172948 systemd[1]: Reached target network.target - Network.
Mar 12 05:03:03.173677 systemd-networkd[771]: eth0: Link UP
Mar 12 05:03:03.173683 systemd-networkd[771]: eth0: Gained carrier
Mar 12 05:03:03.173701 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 12 05:03:03.186650 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 12 05:03:03.199529 systemd-networkd[771]: eth0: DHCPv4 address 10.230.35.22/30, gateway 10.230.35.21 acquired from 10.230.35.21
Mar 12 05:03:03.210054 ignition[774]: Ignition 2.19.0
Mar 12 05:03:03.210075 ignition[774]: Stage: fetch
Mar 12 05:03:03.210404 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Mar 12 05:03:03.212374 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 12 05:03:03.213148 ignition[774]: parsed url from cmdline: ""
Mar 12 05:03:03.213156 ignition[774]: no config URL provided
Mar 12 05:03:03.213167 ignition[774]: reading system config file "/usr/lib/ignition/user.ign"
Mar 12 05:03:03.213194 ignition[774]: no config at "/usr/lib/ignition/user.ign"
Mar 12 05:03:03.213504 ignition[774]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Mar 12 05:03:03.214901 ignition[774]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Mar 12 05:03:03.214947 ignition[774]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Mar 12 05:03:03.232900 ignition[774]: GET result: OK
Mar 12 05:03:03.233435 ignition[774]: parsing config with SHA512: 9141a00487d7b999c94d17dbbbfba419f97d30b5567aeff3cbca762c55f05fb6c0882de140bb008c298c7036ca94e169292d054699f4b4df203b2f839acce0f3
Mar 12 05:03:03.240033 unknown[774]: fetched base config from "system"
Mar 12 05:03:03.240054 unknown[774]: fetched base config from "system"
Mar 12 05:03:03.240739 ignition[774]: fetch: fetch complete
Mar 12 05:03:03.240065 unknown[774]: fetched user config from "openstack"
Mar 12 05:03:03.240748 ignition[774]: fetch: fetch passed
Mar 12 05:03:03.242824 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 12 05:03:03.240823 ignition[774]: Ignition finished successfully
Mar 12 05:03:03.251727 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 12 05:03:03.278600 ignition[781]: Ignition 2.19.0
Mar 12 05:03:03.279757 ignition[781]: Stage: kargs
Mar 12 05:03:03.280021 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Mar 12 05:03:03.280042 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 12 05:03:03.283143 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 12 05:03:03.281249 ignition[781]: kargs: kargs passed
Mar 12 05:03:03.281341 ignition[781]: Ignition finished successfully
Mar 12 05:03:03.296748 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 12 05:03:03.320348 ignition[788]: Ignition 2.19.0
Mar 12 05:03:03.320377 ignition[788]: Stage: disks
Mar 12 05:03:03.320720 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Mar 12 05:03:03.320743 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 12 05:03:03.322094 ignition[788]: disks: disks passed
Mar 12 05:03:03.324451 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 12 05:03:03.322199 ignition[788]: Ignition finished successfully
Mar 12 05:03:03.325790 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 12 05:03:03.326947 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 12 05:03:03.328470 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 12 05:03:03.330030 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 12 05:03:03.331726 systemd[1]: Reached target basic.target - Basic System.
Mar 12 05:03:03.339676 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 12 05:03:03.360828 systemd-fsck[796]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 12 05:03:03.364592 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 12 05:03:03.370530 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 12 05:03:03.493447 kernel: EXT4-fs (vda9): mounted filesystem f90926b1-4cc2-4a2d-8c45-4ec584c98779 r/w with ordered data mode. Quota mode: none.
Mar 12 05:03:03.494644 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 12 05:03:03.496932 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 12 05:03:03.504570 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 12 05:03:03.508542 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 12 05:03:03.509723 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 12 05:03:03.511711 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Mar 12 05:03:03.513490 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 12 05:03:03.513535 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 12 05:03:03.526699 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 12 05:03:03.529248 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (804)
Mar 12 05:03:03.532914 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 05:03:03.534629 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 12 05:03:03.540791 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 12 05:03:03.540848 kernel: BTRFS info (device vda6): using free space tree
Mar 12 05:03:03.552435 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 12 05:03:03.556061 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 12 05:03:03.622679 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory
Mar 12 05:03:03.630598 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory
Mar 12 05:03:03.642546 initrd-setup-root[847]: cut: /sysroot/etc/shadow: No such file or directory
Mar 12 05:03:03.649773 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 12 05:03:03.766696 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 12 05:03:03.775552 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 12 05:03:03.778678 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 12 05:03:03.793447 kernel: BTRFS info (device vda6): last unmount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 05:03:03.823287 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 12 05:03:03.826315 ignition[922]: INFO : Ignition 2.19.0
Mar 12 05:03:03.826315 ignition[922]: INFO : Stage: mount
Mar 12 05:03:03.828481 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 12 05:03:03.828481 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 12 05:03:03.828481 ignition[922]: INFO : mount: mount passed
Mar 12 05:03:03.828481 ignition[922]: INFO : Ignition finished successfully
Mar 12 05:03:03.831133 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 12 05:03:03.869852 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 12 05:03:05.098748 systemd-networkd[771]: eth0: Gained IPv6LL
Mar 12 05:03:06.302164 systemd-networkd[771]: eth0: Ignoring DHCPv6 address 2a02:1348:179:88c5:24:19ff:fee6:2316/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:88c5:24:19ff:fee6:2316/64 assigned by NDisc.
Mar 12 05:03:06.302183 systemd-networkd[771]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Mar 12 05:03:10.707530 coreos-metadata[806]: Mar 12 05:03:10.707 WARN failed to locate config-drive, using the metadata service API instead
Mar 12 05:03:10.734033 coreos-metadata[806]: Mar 12 05:03:10.733 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Mar 12 05:03:10.751114 coreos-metadata[806]: Mar 12 05:03:10.751 INFO Fetch successful
Mar 12 05:03:10.752452 coreos-metadata[806]: Mar 12 05:03:10.752 INFO wrote hostname srv-rxxiv.gb1.brightbox.com to /sysroot/etc/hostname
Mar 12 05:03:10.755529 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Mar 12 05:03:10.755714 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Mar 12 05:03:10.766542 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 12 05:03:10.789625 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 12 05:03:10.802021 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (938)
Mar 12 05:03:10.802068 kernel: BTRFS info (device vda6): first mount of filesystem 0ebf6eb2-dc55-4706-86d1-78d37843d203
Mar 12 05:03:10.804476 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 12 05:03:10.807430 kernel: BTRFS info (device vda6): using free space tree
Mar 12 05:03:10.813429 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 12 05:03:10.814931 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 12 05:03:10.853025 ignition[956]: INFO : Ignition 2.19.0
Mar 12 05:03:10.853025 ignition[956]: INFO : Stage: files
Mar 12 05:03:10.854895 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 12 05:03:10.854895 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 12 05:03:10.854895 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Mar 12 05:03:10.857922 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 12 05:03:10.857922 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 12 05:03:10.860641 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 12 05:03:10.861860 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 12 05:03:10.862924 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 12 05:03:10.862581 unknown[956]: wrote ssh authorized keys file for user: core
Mar 12 05:03:10.864965 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 12 05:03:10.864965 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 12 05:03:11.445534 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 12 05:03:11.812889 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 12 05:03:11.814369 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 12 05:03:11.814369 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 12 05:03:12.138684 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 12 05:03:12.393198 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 12 05:03:12.393198 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 12 05:03:12.395964 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 12 05:03:12.395964 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 12 05:03:12.395964 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 12 05:03:12.395964 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 12 05:03:12.395964 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 12 05:03:12.395964 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 12 05:03:12.395964 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 12 05:03:12.395964 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 12 05:03:12.395964 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 12 05:03:12.395964 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 12 05:03:12.395964 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 12 05:03:12.395964 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 12 05:03:12.395964 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 12 05:03:12.700298 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 12 05:03:14.141471 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 12 05:03:14.141471 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 12 05:03:14.145930 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 12 05:03:14.145930 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 12 05:03:14.145930 ignition[956]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 12 05:03:14.145930 ignition[956]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 12 05:03:14.145930 ignition[956]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 12 05:03:14.145930 ignition[956]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 12 05:03:14.145930 ignition[956]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 12 05:03:14.145930 ignition[956]: INFO : files: files passed
Mar 12 05:03:14.145930 ignition[956]: INFO : Ignition finished successfully
Mar 12 05:03:14.152505 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 12 05:03:14.162690 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 12 05:03:14.166244 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 12 05:03:14.173855 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 12 05:03:14.174018 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 12 05:03:14.191631 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 12 05:03:14.194521 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 12 05:03:14.195667 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 12 05:03:14.198925 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 12 05:03:14.200081 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 12 05:03:14.217125 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 12 05:03:14.260413 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 12 05:03:14.260583 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 12 05:03:14.262920 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 12 05:03:14.264049 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 12 05:03:14.265843 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 12 05:03:14.273674 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 12 05:03:14.292714 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 12 05:03:14.301640 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 12 05:03:14.318406 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 12 05:03:14.319406 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 12 05:03:14.320369 systemd[1]: Stopped target timers.target - Timer Units.
Mar 12 05:03:14.323167 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 12 05:03:14.323360 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 12 05:03:14.325730 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 12 05:03:14.326783 systemd[1]: Stopped target basic.target - Basic System.
Mar 12 05:03:14.328473 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 12 05:03:14.330780 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 12 05:03:14.332466 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 12 05:03:14.334178 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 12 05:03:14.335764 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 12 05:03:14.337347 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 12 05:03:14.338954 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 12 05:03:14.340522 systemd[1]: Stopped target swap.target - Swaps.
Mar 12 05:03:14.341849 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 12 05:03:14.342038 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 12 05:03:14.343924 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 12 05:03:14.345031 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 05:03:14.346652 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 12 05:03:14.346815 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 05:03:14.348258 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 12 05:03:14.348444 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 12 05:03:14.350489 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 12 05:03:14.350662 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 12 05:03:14.352587 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 12 05:03:14.352741 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 12 05:03:14.364654 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 12 05:03:14.366334 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 12 05:03:14.367532 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 12 05:03:14.372701 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 12 05:03:14.374202 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 12 05:03:14.381784 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 12 05:03:14.387296 ignition[1008]: INFO : Ignition 2.19.0
Mar 12 05:03:14.387296 ignition[1008]: INFO : Stage: umount
Mar 12 05:03:14.387296 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 12 05:03:14.387296 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 12 05:03:14.387296 ignition[1008]: INFO : umount: umount passed
Mar 12 05:03:14.387296 ignition[1008]: INFO : Ignition finished successfully
Mar 12 05:03:14.387638 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 12 05:03:14.387910 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 12 05:03:14.395904 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 12 05:03:14.396897 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 12 05:03:14.400192 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 12 05:03:14.401268 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 12 05:03:14.405218 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 12 05:03:14.405313 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 12 05:03:14.406095 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 12 05:03:14.406184 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 12 05:03:14.407542 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 12 05:03:14.407611 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 12 05:03:14.409874 systemd[1]: Stopped target network.target - Network.
Mar 12 05:03:14.410839 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 12 05:03:14.410922 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 12 05:03:14.411743 systemd[1]: Stopped target paths.target - Path Units.
Mar 12 05:03:14.412366 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 12 05:03:14.419507 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 05:03:14.421498 systemd[1]: Stopped target slices.target - Slice Units.
Mar 12 05:03:14.422962 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 12 05:03:14.423716 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 12 05:03:14.423794 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 12 05:03:14.425680 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 12 05:03:14.425744 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 12 05:03:14.426793 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 12 05:03:14.426888 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 12 05:03:14.427616 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 12 05:03:14.427706 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 12 05:03:14.430129 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 12 05:03:14.432062 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 12 05:03:14.435515 systemd-networkd[771]: eth0: DHCPv6 lease lost
Mar 12 05:03:14.436740 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 12 05:03:14.437676 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 12 05:03:14.437821 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 12 05:03:14.439248 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 12 05:03:14.439660 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 12 05:03:14.442801 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 12 05:03:14.443029 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 12 05:03:14.447899 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 12 05:03:14.448292 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 05:03:14.449974 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 12 05:03:14.450055 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 12 05:03:14.464646 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 12 05:03:14.465378 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 12 05:03:14.465498 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 12 05:03:14.466339 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 12 05:03:14.467546 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 12 05:03:14.469310 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 12 05:03:14.469484 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 12 05:03:14.470976 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 12 05:03:14.471047 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 12 05:03:14.472726 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 12 05:03:14.484921 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 12 05:03:14.486528 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 12 05:03:14.489451 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 12 05:03:14.490452 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 12 05:03:14.493977 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 12 05:03:14.495030 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 12 05:03:14.496521 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 12 05:03:14.496580 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 05:03:14.498465 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 12 05:03:14.498539 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 12 05:03:14.499379 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 12 05:03:14.500870 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 12 05:03:14.502221 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 12 05:03:14.502294 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 12 05:03:14.513651 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 12 05:03:14.514991 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 12 05:03:14.515081 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 12 05:03:14.515953 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 12 05:03:14.516024 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 12 05:03:14.525291 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 12 05:03:14.525566 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 12 05:03:14.527670 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 12 05:03:14.541730 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 12 05:03:14.551244 systemd[1]: Switching root.
Mar 12 05:03:14.584067 systemd-journald[203]: Journal stopped
Mar 12 05:03:16.130708 systemd-journald[203]: Received SIGTERM from PID 1 (systemd).
Mar 12 05:03:16.130838 kernel: SELinux: policy capability network_peer_controls=1
Mar 12 05:03:16.130866 kernel: SELinux: policy capability open_perms=1
Mar 12 05:03:16.130887 kernel: SELinux: policy capability extended_socket_class=1
Mar 12 05:03:16.130906 kernel: SELinux: policy capability always_check_network=0
Mar 12 05:03:16.130942 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 12 05:03:16.130965 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 12 05:03:16.130984 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 12 05:03:16.131016 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 12 05:03:16.131036 kernel: audit: type=1403 audit(1773291794.891:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 12 05:03:16.131058 systemd[1]: Successfully loaded SELinux policy in 58.307ms.
Mar 12 05:03:16.131104 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.327ms.
Mar 12 05:03:16.131146 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 12 05:03:16.131171 systemd[1]: Detected virtualization kvm.
Mar 12 05:03:16.131206 systemd[1]: Detected architecture x86-64.
Mar 12 05:03:16.131229 systemd[1]: Detected first boot.
Mar 12 05:03:16.131249 systemd[1]: Hostname set to .
Mar 12 05:03:16.131270 systemd[1]: Initializing machine ID from VM UUID.
Mar 12 05:03:16.131292 zram_generator::config[1050]: No configuration found.
Mar 12 05:03:16.131313 systemd[1]: Populated /etc with preset unit settings.
Mar 12 05:03:16.131334 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 12 05:03:16.131367 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 12 05:03:16.131434 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 12 05:03:16.131461 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 12 05:03:16.131482 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 12 05:03:16.131505 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 12 05:03:16.131535 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 12 05:03:16.131558 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 12 05:03:16.131579 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 12 05:03:16.131601 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 12 05:03:16.131638 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 12 05:03:16.131663 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 12 05:03:16.131684 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 12 05:03:16.131706 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 12 05:03:16.131726 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 12 05:03:16.131747 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 12 05:03:16.131769 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 12 05:03:16.131789 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 12 05:03:16.131810 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 12 05:03:16.131849 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 12 05:03:16.131871 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 12 05:03:16.131893 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 12 05:03:16.131913 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 12 05:03:16.131941 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 12 05:03:16.131962 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 12 05:03:16.131996 systemd[1]: Reached target slices.target - Slice Units.
Mar 12 05:03:16.132019 systemd[1]: Reached target swap.target - Swaps.
Mar 12 05:03:16.132052 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 12 05:03:16.132072 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 12 05:03:16.132116 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 12 05:03:16.132154 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 12 05:03:16.132203 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 12 05:03:16.132227 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 12 05:03:16.132249 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 12 05:03:16.132270 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 12 05:03:16.132292 systemd[1]: Mounting media.mount - External Media Directory...
Mar 12 05:03:16.132313 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 12 05:03:16.132334 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 12 05:03:16.132355 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 12 05:03:16.132377 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 12 05:03:16.133453 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 12 05:03:16.133479 systemd[1]: Reached target machines.target - Containers. Mar 12 05:03:16.133500 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 12 05:03:16.133521 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 12 05:03:16.133551 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 12 05:03:16.133572 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 12 05:03:16.133594 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 12 05:03:16.133616 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 12 05:03:16.133655 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 12 05:03:16.133680 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 12 05:03:16.133701 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 12 05:03:16.133722 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 12 05:03:16.133746 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 12 05:03:16.133768 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 12 05:03:16.133789 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 12 05:03:16.133817 systemd[1]: Stopped systemd-fsck-usr.service. Mar 12 05:03:16.133852 systemd[1]: Starting systemd-journald.service - Journal Service... 
Mar 12 05:03:16.133874 kernel: loop: module loaded Mar 12 05:03:16.133895 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 12 05:03:16.133916 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 12 05:03:16.133938 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 12 05:03:16.134000 systemd-journald[1143]: Collecting audit messages is disabled. Mar 12 05:03:16.134047 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 12 05:03:16.134070 systemd-journald[1143]: Journal started Mar 12 05:03:16.134140 systemd-journald[1143]: Runtime Journal (/run/log/journal/8d3cdd3ba7374ca6a6c762539cf8b848) is 4.7M, max 38.0M, 33.2M free. Mar 12 05:03:15.736041 systemd[1]: Queued start job for default target multi-user.target. Mar 12 05:03:15.760886 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 12 05:03:15.761695 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 12 05:03:16.143021 systemd[1]: verity-setup.service: Deactivated successfully. Mar 12 05:03:16.143072 systemd[1]: Stopped verity-setup.service. Mar 12 05:03:16.144652 kernel: ACPI: bus type drm_connector registered Mar 12 05:03:16.146292 kernel: fuse: init (API version 7.39) Mar 12 05:03:16.146337 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 05:03:16.159497 systemd[1]: Started systemd-journald.service - Journal Service. Mar 12 05:03:16.165818 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 12 05:03:16.166828 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 12 05:03:16.167779 systemd[1]: Mounted media.mount - External Media Directory. Mar 12 05:03:16.168711 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Mar 12 05:03:16.169594 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 12 05:03:16.170758 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 12 05:03:16.172809 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 12 05:03:16.174147 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 12 05:03:16.174750 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 12 05:03:16.176954 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 12 05:03:16.177183 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 12 05:03:16.178614 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 12 05:03:16.178940 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 12 05:03:16.181660 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 12 05:03:16.182916 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 12 05:03:16.183188 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 12 05:03:16.184571 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 12 05:03:16.184789 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 12 05:03:16.186108 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 12 05:03:16.186317 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 12 05:03:16.187600 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 12 05:03:16.188790 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 12 05:03:16.190021 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 12 05:03:16.208063 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Mar 12 05:03:16.219520 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 12 05:03:16.223547 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 12 05:03:16.224373 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 12 05:03:16.224465 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 12 05:03:16.229908 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 12 05:03:16.240226 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 12 05:03:16.247561 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 12 05:03:16.249695 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 12 05:03:16.261608 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 12 05:03:16.277661 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 12 05:03:16.278644 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 12 05:03:16.286049 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 12 05:03:16.287031 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 12 05:03:16.298636 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 12 05:03:16.306670 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 12 05:03:16.312595 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 12 05:03:16.318285 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Mar 12 05:03:16.320061 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 12 05:03:16.322963 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 12 05:03:16.346224 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 12 05:03:16.350789 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 12 05:03:16.360735 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 12 05:03:16.374560 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 12 05:03:16.378908 systemd-journald[1143]: Time spent on flushing to /var/log/journal/8d3cdd3ba7374ca6a6c762539cf8b848 is 102.368ms for 1144 entries. Mar 12 05:03:16.378908 systemd-journald[1143]: System Journal (/var/log/journal/8d3cdd3ba7374ca6a6c762539cf8b848) is 8.0M, max 584.8M, 576.8M free. Mar 12 05:03:16.502562 systemd-journald[1143]: Received client request to flush runtime journal. Mar 12 05:03:16.502621 kernel: loop0: detected capacity change from 0 to 219192 Mar 12 05:03:16.502651 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 12 05:03:16.388480 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 12 05:03:16.389686 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 12 05:03:16.484115 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 12 05:03:16.493820 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 12 05:03:16.511717 kernel: loop1: detected capacity change from 0 to 8 Mar 12 05:03:16.513361 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 12 05:03:16.555608 kernel: loop2: detected capacity change from 0 to 142488 Mar 12 05:03:16.623873 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. 
Mar 12 05:03:16.626482 kernel: loop3: detected capacity change from 0 to 140768 Mar 12 05:03:16.625455 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Mar 12 05:03:16.649227 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 12 05:03:16.684376 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 12 05:03:16.703089 kernel: loop4: detected capacity change from 0 to 219192 Mar 12 05:03:16.704658 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 12 05:03:16.727748 udevadm[1207]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 12 05:03:16.742777 kernel: loop5: detected capacity change from 0 to 8 Mar 12 05:03:16.750746 kernel: loop6: detected capacity change from 0 to 142488 Mar 12 05:03:16.796549 kernel: loop7: detected capacity change from 0 to 140768 Mar 12 05:03:16.830586 (sd-merge)[1208]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Mar 12 05:03:16.831590 (sd-merge)[1208]: Merged extensions into '/usr'. Mar 12 05:03:16.842288 systemd[1]: Reloading requested from client PID 1183 ('systemd-sysext') (unit systemd-sysext.service)... Mar 12 05:03:16.842314 systemd[1]: Reloading... Mar 12 05:03:17.052495 zram_generator::config[1233]: No configuration found. Mar 12 05:03:17.074759 ldconfig[1178]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 12 05:03:17.281997 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 05:03:17.355010 systemd[1]: Reloading finished in 511 ms. Mar 12 05:03:17.391306 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Mar 12 05:03:17.392718 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 12 05:03:17.406680 systemd[1]: Starting ensure-sysext.service... Mar 12 05:03:17.410661 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 12 05:03:17.427685 systemd[1]: Reloading requested from client PID 1290 ('systemctl') (unit ensure-sysext.service)... Mar 12 05:03:17.427718 systemd[1]: Reloading... Mar 12 05:03:17.474485 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 12 05:03:17.475124 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 12 05:03:17.477278 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 12 05:03:17.477827 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Mar 12 05:03:17.478037 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Mar 12 05:03:17.492775 systemd-tmpfiles[1291]: Detected autofs mount point /boot during canonicalization of boot. Mar 12 05:03:17.492788 systemd-tmpfiles[1291]: Skipping /boot Mar 12 05:03:17.514722 systemd-tmpfiles[1291]: Detected autofs mount point /boot during canonicalization of boot. Mar 12 05:03:17.514747 systemd-tmpfiles[1291]: Skipping /boot Mar 12 05:03:17.547448 zram_generator::config[1315]: No configuration found. Mar 12 05:03:17.751742 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 05:03:17.823380 systemd[1]: Reloading finished in 395 ms. Mar 12 05:03:17.846559 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 12 05:03:17.852019 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Mar 12 05:03:17.865663 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 12 05:03:17.872642 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 12 05:03:17.882302 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 12 05:03:17.888630 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 12 05:03:17.899604 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 12 05:03:17.909562 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 12 05:03:17.923472 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 05:03:17.923789 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 12 05:03:17.931295 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 12 05:03:17.935291 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 12 05:03:17.938433 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 12 05:03:17.939379 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 12 05:03:17.939684 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 05:03:17.954357 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 12 05:03:17.958415 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 12 05:03:17.967864 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Mar 12 05:03:17.968339 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 12 05:03:17.968763 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 12 05:03:17.976831 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 12 05:03:17.977700 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 05:03:17.984204 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 12 05:03:17.993914 systemd[1]: Finished ensure-sysext.service. Mar 12 05:03:17.997168 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 05:03:17.997498 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 12 05:03:18.004687 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 12 05:03:18.005576 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 12 05:03:18.015582 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 12 05:03:18.016365 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 12 05:03:18.059538 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 12 05:03:18.060391 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 12 05:03:18.072084 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 12 05:03:18.075688 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Mar 12 05:03:18.076018 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 12 05:03:18.079433 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 12 05:03:18.079814 systemd-udevd[1387]: Using default interface naming scheme 'v255'. Mar 12 05:03:18.083976 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 12 05:03:18.084258 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 12 05:03:18.086842 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 12 05:03:18.087276 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 12 05:03:18.091356 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 12 05:03:18.093113 augenrules[1410]: No rules Mar 12 05:03:18.094315 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 12 05:03:18.097740 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 12 05:03:18.099801 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 12 05:03:18.120059 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 12 05:03:18.135479 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 12 05:03:18.142807 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 12 05:03:18.241484 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 12 05:03:18.352105 systemd-resolved[1383]: Positive Trust Anchors: Mar 12 05:03:18.355548 systemd-resolved[1383]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 12 05:03:18.355600 systemd-resolved[1383]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 12 05:03:18.370629 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 12 05:03:18.370631 systemd-resolved[1383]: Using system hostname 'srv-rxxiv.gb1.brightbox.com'. Mar 12 05:03:18.372795 systemd[1]: Reached target time-set.target - System Time Set. Mar 12 05:03:18.378704 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 12 05:03:18.380125 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 12 05:03:18.380789 systemd-networkd[1429]: lo: Link UP Mar 12 05:03:18.381467 systemd-networkd[1429]: lo: Gained carrier Mar 12 05:03:18.386602 systemd-networkd[1429]: Enumeration completed Mar 12 05:03:18.386782 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 12 05:03:18.387291 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 05:03:18.387298 systemd-networkd[1429]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 12 05:03:18.389336 systemd[1]: Reached target network.target - Network. 
Mar 12 05:03:18.390824 systemd-networkd[1429]: eth0: Link UP Mar 12 05:03:18.390955 systemd-networkd[1429]: eth0: Gained carrier Mar 12 05:03:18.391077 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 12 05:03:18.394135 systemd-timesyncd[1404]: No network connectivity, watching for changes. Mar 12 05:03:18.398716 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 12 05:03:18.414486 systemd-networkd[1429]: eth0: DHCPv4 address 10.230.35.22/30, gateway 10.230.35.21 acquired from 10.230.35.21 Mar 12 05:03:18.415705 systemd-timesyncd[1404]: Network configuration changed, trying to establish connection. Mar 12 05:03:18.467493 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1435) Mar 12 05:03:18.473413 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 12 05:03:18.481421 kernel: mousedev: PS/2 mouse device common for all mice Mar 12 05:03:18.492451 kernel: ACPI: button: Power Button [PWRF] Mar 12 05:03:18.538575 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 12 05:03:18.547673 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 12 05:03:18.569660 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 12 05:03:18.594109 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 12 05:03:18.594551 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Mar 12 05:03:18.594588 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 12 05:03:18.597760 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 12 05:03:18.688846 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 12 05:03:18.878363 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 12 05:03:18.897053 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 12 05:03:18.903736 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 12 05:03:18.934468 lvm[1463]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 12 05:03:18.970156 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 12 05:03:18.972144 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 12 05:03:18.973046 systemd[1]: Reached target sysinit.target - System Initialization. Mar 12 05:03:18.973978 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 12 05:03:18.975045 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 12 05:03:18.976201 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 12 05:03:18.977159 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 12 05:03:18.977997 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 12 05:03:18.978806 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 12 05:03:18.978861 systemd[1]: Reached target paths.target - Path Units. Mar 12 05:03:18.979552 systemd[1]: Reached target timers.target - Timer Units. Mar 12 05:03:18.981233 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 12 05:03:18.984124 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 12 05:03:18.989217 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Mar 12 05:03:18.991820 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 12 05:03:18.993362 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 12 05:03:18.994272 systemd[1]: Reached target sockets.target - Socket Units. Mar 12 05:03:18.995043 systemd[1]: Reached target basic.target - Basic System. Mar 12 05:03:18.995763 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 12 05:03:18.995815 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 12 05:03:18.999526 systemd[1]: Starting containerd.service - containerd container runtime... Mar 12 05:03:19.012497 lvm[1467]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 12 05:03:19.012655 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 12 05:03:19.018729 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 12 05:03:19.032522 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 12 05:03:19.035173 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 12 05:03:19.037512 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 12 05:03:19.039637 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 12 05:03:19.043857 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 12 05:03:19.049111 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 12 05:03:19.059466 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 12 05:03:19.066923 systemd[1]: Starting systemd-logind.service - User Login Management... 
Mar 12 05:03:19.070585 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 12 05:03:19.071416 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 12 05:03:19.076656 systemd[1]: Starting update-engine.service - Update Engine... Mar 12 05:03:19.086620 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 12 05:03:19.096500 jq[1473]: false Mar 12 05:03:19.100106 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 12 05:03:19.100437 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 12 05:03:19.102247 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 12 05:03:19.102530 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 12 05:03:19.117929 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Mar 12 05:03:19.148854 dbus-daemon[1470]: [system] SELinux support is enabled Mar 12 05:03:19.156580 extend-filesystems[1474]: Found loop4 Mar 12 05:03:19.156580 extend-filesystems[1474]: Found loop5 Mar 12 05:03:19.156580 extend-filesystems[1474]: Found loop6 Mar 12 05:03:19.156580 extend-filesystems[1474]: Found loop7 Mar 12 05:03:19.156580 extend-filesystems[1474]: Found vda Mar 12 05:03:19.156580 extend-filesystems[1474]: Found vda1 Mar 12 05:03:19.156580 extend-filesystems[1474]: Found vda2 Mar 12 05:03:19.156580 extend-filesystems[1474]: Found vda3 Mar 12 05:03:19.156580 extend-filesystems[1474]: Found usr Mar 12 05:03:19.156580 extend-filesystems[1474]: Found vda4 Mar 12 05:03:19.156580 extend-filesystems[1474]: Found vda6 Mar 12 05:03:19.156580 extend-filesystems[1474]: Found vda7 Mar 12 05:03:19.156580 extend-filesystems[1474]: Found vda9 Mar 12 05:03:19.156580 extend-filesystems[1474]: Checking size of /dev/vda9 Mar 12 05:03:19.155827 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 12 05:03:19.209597 jq[1482]: true Mar 12 05:03:19.159851 dbus-daemon[1470]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1429 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 12 05:03:19.180322 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 12 05:03:19.186496 dbus-daemon[1470]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 12 05:03:19.180431 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Mar 12 05:03:19.181333 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 12 05:03:19.181371 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 12 05:03:19.199643 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 12 05:03:19.221467 jq[1496]: true Mar 12 05:03:19.221836 extend-filesystems[1474]: Resized partition /dev/vda9 Mar 12 05:03:19.228627 tar[1491]: linux-amd64/LICENSE Mar 12 05:03:19.228627 tar[1491]: linux-amd64/helm Mar 12 05:03:19.234245 extend-filesystems[1510]: resize2fs 1.47.1 (20-May-2024) Mar 12 05:03:19.237555 update_engine[1480]: I20260312 05:03:19.227342 1480 main.cc:92] Flatcar Update Engine starting Mar 12 05:03:19.245966 systemd[1]: Started update-engine.service - Update Engine. Mar 12 05:03:19.246815 update_engine[1480]: I20260312 05:03:19.246711 1480 update_check_scheduler.cc:74] Next update check in 11m19s Mar 12 05:03:19.257472 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Mar 12 05:03:19.260129 (ntainerd)[1498]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 12 05:03:19.269367 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 12 05:03:19.271256 systemd[1]: motdgen.service: Deactivated successfully. Mar 12 05:03:19.272787 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 12 05:03:19.355912 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1435) Mar 12 05:03:19.395992 systemd-logind[1479]: Watching system buttons on /dev/input/event2 (Power Button) Mar 12 05:03:19.396048 systemd-logind[1479]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 12 05:03:19.398535 systemd-logind[1479]: New seat seat0. 
Mar 12 05:03:19.402065 systemd[1]: Started systemd-logind.service - User Login Management. Mar 12 05:03:19.455962 locksmithd[1522]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 12 05:03:19.476508 bash[1527]: Updated "/home/core/.ssh/authorized_keys" Mar 12 05:03:19.458965 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 12 05:03:19.483818 systemd[1]: Starting sshkeys.service... Mar 12 05:03:19.542457 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 12 05:03:19.554823 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 12 05:03:19.620980 dbus-daemon[1470]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 12 05:03:19.621517 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 12 05:03:19.625915 dbus-daemon[1470]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1504 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 12 05:03:19.643311 systemd[1]: Starting polkit.service - Authorization Manager... Mar 12 05:03:19.700444 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Mar 12 05:03:19.702622 polkitd[1542]: Started polkitd version 121 Mar 12 05:03:19.728121 polkitd[1542]: Loading rules from directory /etc/polkit-1/rules.d Mar 12 05:03:19.728226 polkitd[1542]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 12 05:03:19.735375 polkitd[1542]: Finished loading, compiling and executing 2 rules Mar 12 05:03:19.739009 dbus-daemon[1470]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 12 05:03:19.739264 systemd[1]: Started polkit.service - Authorization Manager. 
Mar 12 05:03:19.741438 extend-filesystems[1510]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 12 05:03:19.741438 extend-filesystems[1510]: old_desc_blocks = 1, new_desc_blocks = 8 Mar 12 05:03:19.741438 extend-filesystems[1510]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Mar 12 05:03:19.755733 extend-filesystems[1474]: Resized filesystem in /dev/vda9 Mar 12 05:03:19.748675 polkitd[1542]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 12 05:03:19.743191 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 12 05:03:19.743592 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 12 05:03:19.788900 systemd-hostnamed[1504]: Hostname set to (static) Mar 12 05:03:19.793112 containerd[1498]: time="2026-03-12T05:03:19.791807930Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 12 05:03:19.841947 containerd[1498]: time="2026-03-12T05:03:19.841052019Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 12 05:03:19.844067 containerd[1498]: time="2026-03-12T05:03:19.844024080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 12 05:03:19.844127 containerd[1498]: time="2026-03-12T05:03:19.844066371Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 12 05:03:19.844127 containerd[1498]: time="2026-03-12T05:03:19.844091411Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 12 05:03:19.844396 containerd[1498]: time="2026-03-12T05:03:19.844368673Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Mar 12 05:03:19.844470 containerd[1498]: time="2026-03-12T05:03:19.844421459Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 12 05:03:19.844586 containerd[1498]: time="2026-03-12T05:03:19.844535218Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 05:03:19.844586 containerd[1498]: time="2026-03-12T05:03:19.844565049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 12 05:03:19.845432 containerd[1498]: time="2026-03-12T05:03:19.844816101Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 05:03:19.845432 containerd[1498]: time="2026-03-12T05:03:19.844852887Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 12 05:03:19.845432 containerd[1498]: time="2026-03-12T05:03:19.844881466Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 05:03:19.845432 containerd[1498]: time="2026-03-12T05:03:19.844903878Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 12 05:03:19.845432 containerd[1498]: time="2026-03-12T05:03:19.845070428Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 12 05:03:19.845641 containerd[1498]: time="2026-03-12T05:03:19.845474048Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Mar 12 05:03:19.845641 containerd[1498]: time="2026-03-12T05:03:19.845619030Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 12 05:03:19.845705 containerd[1498]: time="2026-03-12T05:03:19.845653214Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 12 05:03:19.846440 containerd[1498]: time="2026-03-12T05:03:19.845818506Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 12 05:03:19.846440 containerd[1498]: time="2026-03-12T05:03:19.845923893Z" level=info msg="metadata content store policy set" policy=shared Mar 12 05:03:19.851439 containerd[1498]: time="2026-03-12T05:03:19.849943930Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 12 05:03:19.851439 containerd[1498]: time="2026-03-12T05:03:19.850035451Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 12 05:03:19.851439 containerd[1498]: time="2026-03-12T05:03:19.850062674Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 12 05:03:19.851439 containerd[1498]: time="2026-03-12T05:03:19.850086571Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 12 05:03:19.851439 containerd[1498]: time="2026-03-12T05:03:19.850128125Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 12 05:03:19.851439 containerd[1498]: time="2026-03-12T05:03:19.850328332Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Mar 12 05:03:19.851439 containerd[1498]: time="2026-03-12T05:03:19.850763281Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 12 05:03:19.851439 containerd[1498]: time="2026-03-12T05:03:19.850937648Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 12 05:03:19.851439 containerd[1498]: time="2026-03-12T05:03:19.850962790Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 12 05:03:19.851439 containerd[1498]: time="2026-03-12T05:03:19.851002533Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 12 05:03:19.851439 containerd[1498]: time="2026-03-12T05:03:19.851028846Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 12 05:03:19.851439 containerd[1498]: time="2026-03-12T05:03:19.851050253Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 12 05:03:19.851439 containerd[1498]: time="2026-03-12T05:03:19.851070501Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 12 05:03:19.851439 containerd[1498]: time="2026-03-12T05:03:19.851092411Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 12 05:03:19.851917 containerd[1498]: time="2026-03-12T05:03:19.851112940Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 12 05:03:19.851917 containerd[1498]: time="2026-03-12T05:03:19.851134750Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Mar 12 05:03:19.851917 containerd[1498]: time="2026-03-12T05:03:19.851153503Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 12 05:03:19.851917 containerd[1498]: time="2026-03-12T05:03:19.851171910Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 12 05:03:19.851917 containerd[1498]: time="2026-03-12T05:03:19.851223484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 12 05:03:19.851917 containerd[1498]: time="2026-03-12T05:03:19.851249746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 12 05:03:19.851917 containerd[1498]: time="2026-03-12T05:03:19.851282319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 12 05:03:19.851917 containerd[1498]: time="2026-03-12T05:03:19.851302054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 12 05:03:19.851917 containerd[1498]: time="2026-03-12T05:03:19.851321172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 12 05:03:19.851917 containerd[1498]: time="2026-03-12T05:03:19.851352434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 12 05:03:19.851917 containerd[1498]: time="2026-03-12T05:03:19.851378808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 12 05:03:19.851917 containerd[1498]: time="2026-03-12T05:03:19.851395824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 12 05:03:19.852426 containerd[1498]: time="2026-03-12T05:03:19.852376440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Mar 12 05:03:19.852526 containerd[1498]: time="2026-03-12T05:03:19.852503534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 12 05:03:19.852623 containerd[1498]: time="2026-03-12T05:03:19.852592809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 12 05:03:19.852720 containerd[1498]: time="2026-03-12T05:03:19.852698201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 12 05:03:19.852842 containerd[1498]: time="2026-03-12T05:03:19.852804837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 12 05:03:19.852952 containerd[1498]: time="2026-03-12T05:03:19.852928571Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 12 05:03:19.853082 containerd[1498]: time="2026-03-12T05:03:19.853057826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 12 05:03:19.853178 containerd[1498]: time="2026-03-12T05:03:19.853155794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 12 05:03:19.853267 containerd[1498]: time="2026-03-12T05:03:19.853244970Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 12 05:03:19.853469 containerd[1498]: time="2026-03-12T05:03:19.853442506Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 12 05:03:19.853716 containerd[1498]: time="2026-03-12T05:03:19.853680639Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 12 05:03:19.853848 containerd[1498]: time="2026-03-12T05:03:19.853823202Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 12 05:03:19.853942 containerd[1498]: time="2026-03-12T05:03:19.853917666Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 12 05:03:19.854041 containerd[1498]: time="2026-03-12T05:03:19.854018242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 12 05:03:19.854143 containerd[1498]: time="2026-03-12T05:03:19.854119890Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 12 05:03:19.854260 containerd[1498]: time="2026-03-12T05:03:19.854236290Z" level=info msg="NRI interface is disabled by configuration." Mar 12 05:03:19.854375 containerd[1498]: time="2026-03-12T05:03:19.854352483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 12 05:03:19.856445 containerd[1498]: time="2026-03-12T05:03:19.856057925Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 12 05:03:19.856445 containerd[1498]: time="2026-03-12T05:03:19.856158932Z" level=info msg="Connect containerd service" Mar 12 05:03:19.856445 containerd[1498]: time="2026-03-12T05:03:19.856226215Z" level=info msg="using legacy CRI server" Mar 12 05:03:19.856445 containerd[1498]: time="2026-03-12T05:03:19.856244888Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 12 05:03:19.858414 containerd[1498]: time="2026-03-12T05:03:19.857621938Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 12 05:03:19.858828 containerd[1498]: time="2026-03-12T05:03:19.858780846Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 12 05:03:19.860130 containerd[1498]: time="2026-03-12T05:03:19.860070927Z" level=info msg="Start subscribing containerd event" Mar 12 05:03:19.860189 containerd[1498]: time="2026-03-12T05:03:19.860148900Z" level=info msg="Start recovering state" Mar 12 05:03:19.860337 containerd[1498]: time="2026-03-12T05:03:19.860263979Z" level=info msg="Start event monitor" Mar 12 05:03:19.860337 containerd[1498]: time="2026-03-12T05:03:19.860308752Z" level=info msg="Start 
snapshots syncer" Mar 12 05:03:19.860337 containerd[1498]: time="2026-03-12T05:03:19.860328595Z" level=info msg="Start cni network conf syncer for default" Mar 12 05:03:19.860502 containerd[1498]: time="2026-03-12T05:03:19.860341245Z" level=info msg="Start streaming server" Mar 12 05:03:19.860723 containerd[1498]: time="2026-03-12T05:03:19.860681883Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 12 05:03:19.860880 containerd[1498]: time="2026-03-12T05:03:19.860855810Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 12 05:03:19.861154 containerd[1498]: time="2026-03-12T05:03:19.861128170Z" level=info msg="containerd successfully booted in 0.075921s" Mar 12 05:03:19.861260 systemd[1]: Started containerd.service - containerd container runtime. Mar 12 05:03:20.203591 systemd-networkd[1429]: eth0: Gained IPv6LL Mar 12 05:03:20.212242 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 12 05:03:20.213996 systemd[1]: Reached target network-online.target - Network is Online. Mar 12 05:03:20.222789 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 05:03:21.212192 systemd-resolved[1383]: Clock change detected. Flushing caches. Mar 12 05:03:21.212873 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 12 05:03:21.213632 systemd-timesyncd[1404]: Contacted time server 62.232.9.188:123 (1.flatcar.pool.ntp.org). Mar 12 05:03:21.213711 systemd-timesyncd[1404]: Initial clock synchronization to Thu 2026-03-12 05:03:21.212132 UTC. Mar 12 05:03:21.271505 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 12 05:03:21.350952 tar[1491]: linux-amd64/README.md Mar 12 05:03:21.366794 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Mar 12 05:03:21.608041 sshd_keygen[1497]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 12 05:03:21.641772 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 12 05:03:21.655967 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 12 05:03:21.676245 systemd[1]: issuegen.service: Deactivated successfully. Mar 12 05:03:21.676960 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 12 05:03:21.689626 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 12 05:03:21.706145 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 12 05:03:21.715069 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 12 05:03:21.723415 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 12 05:03:21.725864 systemd[1]: Reached target getty.target - Login Prompts. Mar 12 05:03:22.114717 systemd-networkd[1429]: eth0: Ignoring DHCPv6 address 2a02:1348:179:88c5:24:19ff:fee6:2316/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:88c5:24:19ff:fee6:2316/64 assigned by NDisc. Mar 12 05:03:22.114728 systemd-networkd[1429]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Mar 12 05:03:22.389296 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 12 05:03:22.402251 (kubelet)[1595]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 05:03:23.022492 kubelet[1595]: E0312 05:03:23.022332 1595 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 05:03:23.024810 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 05:03:23.025074 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 05:03:23.025952 systemd[1]: kubelet.service: Consumed 1.040s CPU time. Mar 12 05:03:25.302879 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 12 05:03:25.321161 systemd[1]: Started sshd@0-10.230.35.22:22-20.161.92.111:45102.service - OpenSSH per-connection server daemon (20.161.92.111:45102). Mar 12 05:03:25.910218 sshd[1606]: Accepted publickey for core from 20.161.92.111 port 45102 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac Mar 12 05:03:25.917959 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 05:03:25.939976 systemd-logind[1479]: New session 1 of user core. Mar 12 05:03:25.944413 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 12 05:03:25.952903 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 12 05:03:25.987352 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 12 05:03:25.996957 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Mar 12 05:03:26.020736 (systemd)[1610]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 12 05:03:26.175164 systemd[1610]: Queued start job for default target default.target. Mar 12 05:03:26.187031 systemd[1610]: Created slice app.slice - User Application Slice. Mar 12 05:03:26.187075 systemd[1610]: Reached target paths.target - Paths. Mar 12 05:03:26.187100 systemd[1610]: Reached target timers.target - Timers. Mar 12 05:03:26.189525 systemd[1610]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 12 05:03:26.215852 systemd[1610]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 12 05:03:26.216497 systemd[1610]: Reached target sockets.target - Sockets. Mar 12 05:03:26.216704 systemd[1610]: Reached target basic.target - Basic System. Mar 12 05:03:26.217014 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 12 05:03:26.218646 systemd[1610]: Reached target default.target - Main User Target. Mar 12 05:03:26.218865 systemd[1610]: Startup finished in 187ms. Mar 12 05:03:26.227776 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 12 05:03:26.662559 systemd[1]: Started sshd@1-10.230.35.22:22-20.161.92.111:45118.service - OpenSSH per-connection server daemon (20.161.92.111:45118). Mar 12 05:03:26.794184 login[1588]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 12 05:03:26.797143 login[1587]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 12 05:03:26.805712 systemd-logind[1479]: New session 2 of user core. Mar 12 05:03:26.812296 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 12 05:03:26.816953 systemd-logind[1479]: New session 3 of user core. Mar 12 05:03:26.824806 systemd[1]: Started session-3.scope - Session 3 of User core. 
Mar 12 05:03:27.126906 coreos-metadata[1469]: Mar 12 05:03:27.126 WARN failed to locate config-drive, using the metadata service API instead Mar 12 05:03:27.154227 coreos-metadata[1469]: Mar 12 05:03:27.154 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Mar 12 05:03:27.159841 coreos-metadata[1469]: Mar 12 05:03:27.159 INFO Fetch failed with 404: resource not found Mar 12 05:03:27.159841 coreos-metadata[1469]: Mar 12 05:03:27.159 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Mar 12 05:03:27.160535 coreos-metadata[1469]: Mar 12 05:03:27.160 INFO Fetch successful Mar 12 05:03:27.160748 coreos-metadata[1469]: Mar 12 05:03:27.160 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Mar 12 05:03:27.175121 coreos-metadata[1469]: Mar 12 05:03:27.175 INFO Fetch successful Mar 12 05:03:27.175312 coreos-metadata[1469]: Mar 12 05:03:27.175 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Mar 12 05:03:27.192292 coreos-metadata[1469]: Mar 12 05:03:27.192 INFO Fetch successful Mar 12 05:03:27.192627 coreos-metadata[1469]: Mar 12 05:03:27.192 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Mar 12 05:03:27.216219 coreos-metadata[1469]: Mar 12 05:03:27.216 INFO Fetch successful Mar 12 05:03:27.216495 coreos-metadata[1469]: Mar 12 05:03:27.216 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Mar 12 05:03:27.235113 coreos-metadata[1469]: Mar 12 05:03:27.234 INFO Fetch successful Mar 12 05:03:27.243434 sshd[1621]: Accepted publickey for core from 20.161.92.111 port 45118 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac Mar 12 05:03:27.246107 sshd[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 05:03:27.256476 systemd-logind[1479]: New session 4 of user core. Mar 12 05:03:27.267149 systemd[1]: Started session-4.scope - Session 4 of User core. 
Mar 12 05:03:27.269247 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 12 05:03:27.274717 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 12 05:03:27.657326 sshd[1621]: pam_unix(sshd:session): session closed for user core Mar 12 05:03:27.661656 systemd-logind[1479]: Session 4 logged out. Waiting for processes to exit. Mar 12 05:03:27.663134 systemd[1]: sshd@1-10.230.35.22:22-20.161.92.111:45118.service: Deactivated successfully. Mar 12 05:03:27.666112 systemd[1]: session-4.scope: Deactivated successfully. Mar 12 05:03:27.668701 systemd-logind[1479]: Removed session 4. Mar 12 05:03:27.714974 coreos-metadata[1535]: Mar 12 05:03:27.714 WARN failed to locate config-drive, using the metadata service API instead Mar 12 05:03:27.738096 coreos-metadata[1535]: Mar 12 05:03:27.738 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Mar 12 05:03:27.762956 systemd[1]: Started sshd@2-10.230.35.22:22-20.161.92.111:45124.service - OpenSSH per-connection server daemon (20.161.92.111:45124). Mar 12 05:03:27.764274 coreos-metadata[1535]: Mar 12 05:03:27.764 INFO Fetch successful Mar 12 05:03:27.764578 coreos-metadata[1535]: Mar 12 05:03:27.764 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 12 05:03:27.836704 coreos-metadata[1535]: Mar 12 05:03:27.836 INFO Fetch successful Mar 12 05:03:27.839483 unknown[1535]: wrote ssh authorized keys file for user: core Mar 12 05:03:27.861178 update-ssh-keys[1665]: Updated "/home/core/.ssh/authorized_keys" Mar 12 05:03:27.862109 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 12 05:03:27.865763 systemd[1]: Finished sshkeys.service. Mar 12 05:03:27.869215 systemd[1]: Reached target multi-user.target - Multi-User System. 
Mar 12 05:03:27.869535 systemd[1]: Startup finished in 1.513s (kernel) + 15.118s (initrd) + 12.053s (userspace) = 28.684s. Mar 12 05:03:28.312536 sshd[1663]: Accepted publickey for core from 20.161.92.111 port 45124 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac Mar 12 05:03:28.314188 sshd[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 05:03:28.321455 systemd-logind[1479]: New session 5 of user core. Mar 12 05:03:28.332711 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 12 05:03:28.708163 sshd[1663]: pam_unix(sshd:session): session closed for user core Mar 12 05:03:28.713601 systemd-logind[1479]: Session 5 logged out. Waiting for processes to exit. Mar 12 05:03:28.714924 systemd[1]: sshd@2-10.230.35.22:22-20.161.92.111:45124.service: Deactivated successfully. Mar 12 05:03:28.718089 systemd[1]: session-5.scope: Deactivated successfully. Mar 12 05:03:28.719881 systemd-logind[1479]: Removed session 5. Mar 12 05:03:29.225867 systemd[1]: Started sshd@3-10.230.35.22:22-80.94.95.116:44296.service - OpenSSH per-connection server daemon (80.94.95.116:44296). Mar 12 05:03:32.140510 sshd[1674]: Connection closed by authenticating user root 80.94.95.116 port 44296 [preauth] Mar 12 05:03:32.143118 systemd[1]: sshd@3-10.230.35.22:22-80.94.95.116:44296.service: Deactivated successfully. Mar 12 05:03:33.108149 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 12 05:03:33.114868 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 05:03:33.297658 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 12 05:03:33.316026 (kubelet)[1686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 12 05:03:33.491793 kubelet[1686]: E0312 05:03:33.491551 1686 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 12 05:03:33.497670 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 12 05:03:33.498139 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 12 05:03:38.815537 systemd[1]: Started sshd@4-10.230.35.22:22-20.161.92.111:43360.service - OpenSSH per-connection server daemon (20.161.92.111:43360). Mar 12 05:03:39.373236 sshd[1695]: Accepted publickey for core from 20.161.92.111 port 43360 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac Mar 12 05:03:39.374471 sshd[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 05:03:39.381487 systemd-logind[1479]: New session 6 of user core. Mar 12 05:03:39.389675 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 12 05:03:39.767856 sshd[1695]: pam_unix(sshd:session): session closed for user core Mar 12 05:03:39.772062 systemd[1]: sshd@4-10.230.35.22:22-20.161.92.111:43360.service: Deactivated successfully. Mar 12 05:03:39.774352 systemd[1]: session-6.scope: Deactivated successfully. Mar 12 05:03:39.776165 systemd-logind[1479]: Session 6 logged out. Waiting for processes to exit. Mar 12 05:03:39.777664 systemd-logind[1479]: Removed session 6. Mar 12 05:03:39.866668 systemd[1]: Started sshd@5-10.230.35.22:22-20.161.92.111:43368.service - OpenSSH per-connection server daemon (20.161.92.111:43368). 
Mar 12 05:03:40.438193 sshd[1702]: Accepted publickey for core from 20.161.92.111 port 43368 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:03:40.439201 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:03:40.446559 systemd-logind[1479]: New session 7 of user core.
Mar 12 05:03:40.461803 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 12 05:03:40.826302 sshd[1702]: pam_unix(sshd:session): session closed for user core
Mar 12 05:03:40.833050 systemd[1]: sshd@5-10.230.35.22:22-20.161.92.111:43368.service: Deactivated successfully.
Mar 12 05:03:40.835869 systemd[1]: session-7.scope: Deactivated successfully.
Mar 12 05:03:40.837102 systemd-logind[1479]: Session 7 logged out. Waiting for processes to exit.
Mar 12 05:03:40.838887 systemd-logind[1479]: Removed session 7.
Mar 12 05:03:40.942487 systemd[1]: Started sshd@6-10.230.35.22:22-20.161.92.111:51128.service - OpenSSH per-connection server daemon (20.161.92.111:51128).
Mar 12 05:03:41.514370 sshd[1709]: Accepted publickey for core from 20.161.92.111 port 51128 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:03:41.515371 sshd[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:03:41.522430 systemd-logind[1479]: New session 8 of user core.
Mar 12 05:03:41.530653 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 12 05:03:41.927204 sshd[1709]: pam_unix(sshd:session): session closed for user core
Mar 12 05:03:41.932219 systemd[1]: sshd@6-10.230.35.22:22-20.161.92.111:51128.service: Deactivated successfully.
Mar 12 05:03:41.934865 systemd[1]: session-8.scope: Deactivated successfully.
Mar 12 05:03:41.936925 systemd-logind[1479]: Session 8 logged out. Waiting for processes to exit.
Mar 12 05:03:41.938545 systemd-logind[1479]: Removed session 8.
Mar 12 05:03:42.034868 systemd[1]: Started sshd@7-10.230.35.22:22-20.161.92.111:51130.service - OpenSSH per-connection server daemon (20.161.92.111:51130).
Mar 12 05:03:42.601202 sshd[1716]: Accepted publickey for core from 20.161.92.111 port 51130 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:03:42.604037 sshd[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:03:42.610511 systemd-logind[1479]: New session 9 of user core.
Mar 12 05:03:42.618728 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 12 05:03:42.933622 sudo[1719]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 12 05:03:42.934126 sudo[1719]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 12 05:03:42.950097 sudo[1719]: pam_unix(sudo:session): session closed for user root
Mar 12 05:03:43.041609 sshd[1716]: pam_unix(sshd:session): session closed for user core
Mar 12 05:03:43.047152 systemd[1]: sshd@7-10.230.35.22:22-20.161.92.111:51130.service: Deactivated successfully.
Mar 12 05:03:43.049978 systemd[1]: session-9.scope: Deactivated successfully.
Mar 12 05:03:43.051195 systemd-logind[1479]: Session 9 logged out. Waiting for processes to exit.
Mar 12 05:03:43.052801 systemd-logind[1479]: Removed session 9.
Mar 12 05:03:43.142785 systemd[1]: Started sshd@8-10.230.35.22:22-20.161.92.111:51146.service - OpenSSH per-connection server daemon (20.161.92.111:51146).
Mar 12 05:03:43.608036 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 12 05:03:43.620078 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 05:03:43.703182 sshd[1724]: Accepted publickey for core from 20.161.92.111 port 51146 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:03:43.706414 sshd[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:03:43.714813 systemd-logind[1479]: New session 10 of user core.
Mar 12 05:03:43.722724 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 12 05:03:43.788240 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 05:03:43.794821 (kubelet)[1735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 05:03:43.952760 kubelet[1735]: E0312 05:03:43.952407 1735 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 05:03:43.955915 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 05:03:43.956237 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 05:03:44.014676 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 12 05:03:44.015260 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 12 05:03:44.021846 sudo[1744]: pam_unix(sudo:session): session closed for user root
Mar 12 05:03:44.030056 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 12 05:03:44.030999 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 12 05:03:44.048783 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 12 05:03:44.059981 auditctl[1747]: No rules
Mar 12 05:03:44.060847 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 12 05:03:44.061119 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 12 05:03:44.069323 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 12 05:03:44.115334 augenrules[1765]: No rules
Mar 12 05:03:44.117764 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 12 05:03:44.119528 sudo[1743]: pam_unix(sudo:session): session closed for user root
Mar 12 05:03:44.207937 sshd[1724]: pam_unix(sshd:session): session closed for user core
Mar 12 05:03:44.213104 systemd[1]: sshd@8-10.230.35.22:22-20.161.92.111:51146.service: Deactivated successfully.
Mar 12 05:03:44.215472 systemd[1]: session-10.scope: Deactivated successfully.
Mar 12 05:03:44.217806 systemd-logind[1479]: Session 10 logged out. Waiting for processes to exit.
Mar 12 05:03:44.219626 systemd-logind[1479]: Removed session 10.
Mar 12 05:03:44.308364 systemd[1]: Started sshd@9-10.230.35.22:22-20.161.92.111:51148.service - OpenSSH per-connection server daemon (20.161.92.111:51148).
Mar 12 05:03:45.173205 sshd[1773]: Accepted publickey for core from 20.161.92.111 port 51148 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:03:45.175673 sshd[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:03:45.182611 systemd-logind[1479]: New session 11 of user core.
Mar 12 05:03:45.199945 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 12 05:03:45.484430 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 12 05:03:45.485604 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 12 05:03:45.948807 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 12 05:03:45.949073 (dockerd)[1791]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 12 05:03:46.376916 dockerd[1791]: time="2026-03-12T05:03:46.376783802Z" level=info msg="Starting up"
Mar 12 05:03:46.495159 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2919985903-merged.mount: Deactivated successfully.
Mar 12 05:03:46.514305 systemd[1]: var-lib-docker-metacopy\x2dcheck1453243730-merged.mount: Deactivated successfully.
Mar 12 05:03:46.550319 dockerd[1791]: time="2026-03-12T05:03:46.550262013Z" level=info msg="Loading containers: start."
Mar 12 05:03:46.692599 kernel: Initializing XFRM netlink socket
Mar 12 05:03:46.817109 systemd-networkd[1429]: docker0: Link UP
Mar 12 05:03:46.854428 dockerd[1791]: time="2026-03-12T05:03:46.853137306Z" level=info msg="Loading containers: done."
Mar 12 05:03:46.886043 dockerd[1791]: time="2026-03-12T05:03:46.885918231Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 12 05:03:46.886361 dockerd[1791]: time="2026-03-12T05:03:46.886146943Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 12 05:03:46.886904 dockerd[1791]: time="2026-03-12T05:03:46.886855489Z" level=info msg="Daemon has completed initialization"
Mar 12 05:03:46.990329 dockerd[1791]: time="2026-03-12T05:03:46.989890470Z" level=info msg="API listen on /run/docker.sock"
Mar 12 05:03:46.990650 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 12 05:03:47.490605 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2904371207-merged.mount: Deactivated successfully.
Mar 12 05:03:47.782702 containerd[1498]: time="2026-03-12T05:03:47.781527415Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\""
Mar 12 05:03:48.679706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2042689265.mount: Deactivated successfully.
Mar 12 05:03:50.657521 containerd[1498]: time="2026-03-12T05:03:50.656857895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:03:50.660532 containerd[1498]: time="2026-03-12T05:03:50.660206106Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074505"
Mar 12 05:03:50.664466 containerd[1498]: time="2026-03-12T05:03:50.663076439Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:03:50.668918 containerd[1498]: time="2026-03-12T05:03:50.668878231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:03:50.670758 containerd[1498]: time="2026-03-12T05:03:50.670716279Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 2.889092726s"
Mar 12 05:03:50.670913 containerd[1498]: time="2026-03-12T05:03:50.670884134Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\""
Mar 12 05:03:50.672571 containerd[1498]: time="2026-03-12T05:03:50.672537544Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\""
Mar 12 05:03:52.135943 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 12 05:03:52.637939 containerd[1498]: time="2026-03-12T05:03:52.636751450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:03:52.640088 containerd[1498]: time="2026-03-12T05:03:52.639880729Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165831"
Mar 12 05:03:52.641741 containerd[1498]: time="2026-03-12T05:03:52.641037386Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:03:52.646528 containerd[1498]: time="2026-03-12T05:03:52.646483973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:03:52.648304 containerd[1498]: time="2026-03-12T05:03:52.648256471Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 1.97555664s"
Mar 12 05:03:52.648483 containerd[1498]: time="2026-03-12T05:03:52.648453541Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\""
Mar 12 05:03:52.649796 containerd[1498]: time="2026-03-12T05:03:52.649755721Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\""
Mar 12 05:03:54.108827 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 12 05:03:54.121305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 05:03:54.362689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 05:03:54.374972 (kubelet)[2010]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 05:03:54.454161 containerd[1498]: time="2026-03-12T05:03:54.454083463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:03:54.459901 containerd[1498]: time="2026-03-12T05:03:54.459783297Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729832"
Mar 12 05:03:54.461468 containerd[1498]: time="2026-03-12T05:03:54.461000569Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:03:54.467845 containerd[1498]: time="2026-03-12T05:03:54.467783233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:03:54.469830 containerd[1498]: time="2026-03-12T05:03:54.469382218Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 1.819467661s"
Mar 12 05:03:54.469830 containerd[1498]: time="2026-03-12T05:03:54.469432578Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\""
Mar 12 05:03:54.471120 containerd[1498]: time="2026-03-12T05:03:54.470192268Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\""
Mar 12 05:03:54.480463 kubelet[2010]: E0312 05:03:54.480375 2010 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 05:03:54.483751 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 05:03:54.484300 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 05:03:56.292479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4257254318.mount: Deactivated successfully.
Mar 12 05:03:56.793803 containerd[1498]: time="2026-03-12T05:03:56.793728879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:03:56.806944 containerd[1498]: time="2026-03-12T05:03:56.806848124Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861778"
Mar 12 05:03:56.814461 containerd[1498]: time="2026-03-12T05:03:56.812911219Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:03:56.832053 containerd[1498]: time="2026-03-12T05:03:56.831996342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:03:56.833264 containerd[1498]: time="2026-03-12T05:03:56.833223188Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 2.362975957s"
Mar 12 05:03:56.833408 containerd[1498]: time="2026-03-12T05:03:56.833378912Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\""
Mar 12 05:03:56.834311 containerd[1498]: time="2026-03-12T05:03:56.834257222Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Mar 12 05:03:57.442700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3649922059.mount: Deactivated successfully.
Mar 12 05:03:58.945475 containerd[1498]: time="2026-03-12T05:03:58.943648764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:03:58.947457 containerd[1498]: time="2026-03-12T05:03:58.947395049Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388015"
Mar 12 05:03:58.949087 containerd[1498]: time="2026-03-12T05:03:58.949052860Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:03:58.953564 containerd[1498]: time="2026-03-12T05:03:58.953528095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:03:58.955461 containerd[1498]: time="2026-03-12T05:03:58.955411800Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.121084916s"
Mar 12 05:03:58.955603 containerd[1498]: time="2026-03-12T05:03:58.955575607Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Mar 12 05:03:58.956547 containerd[1498]: time="2026-03-12T05:03:58.956462584Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 12 05:03:59.500343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4101350467.mount: Deactivated successfully.
Mar 12 05:03:59.506604 containerd[1498]: time="2026-03-12T05:03:59.506545862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:03:59.508492 containerd[1498]: time="2026-03-12T05:03:59.508400058Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321226"
Mar 12 05:03:59.509620 containerd[1498]: time="2026-03-12T05:03:59.509556761Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:03:59.513058 containerd[1498]: time="2026-03-12T05:03:59.512468280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:03:59.515245 containerd[1498]: time="2026-03-12T05:03:59.515139538Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 558.441462ms"
Mar 12 05:03:59.515245 containerd[1498]: time="2026-03-12T05:03:59.515237460Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 12 05:03:59.516046 containerd[1498]: time="2026-03-12T05:03:59.515992600Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Mar 12 05:04:00.163325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount249533309.mount: Deactivated successfully.
Mar 12 05:04:02.308490 containerd[1498]: time="2026-03-12T05:04:02.307077708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:04:02.317119 containerd[1498]: time="2026-03-12T05:04:02.317036734Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860682"
Mar 12 05:04:02.318217 containerd[1498]: time="2026-03-12T05:04:02.318180500Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:04:02.326813 containerd[1498]: time="2026-03-12T05:04:02.326762674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 12 05:04:02.330309 containerd[1498]: time="2026-03-12T05:04:02.330265030Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 2.814223271s"
Mar 12 05:04:02.330546 containerd[1498]: time="2026-03-12T05:04:02.330515380Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Mar 12 05:04:04.609144 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 12 05:04:04.620791 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 05:04:04.818741 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 05:04:04.822926 (kubelet)[2171]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 12 05:04:05.028008 kubelet[2171]: E0312 05:04:05.026650 2171 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 12 05:04:05.031334 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 12 05:04:05.031850 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 12 05:04:05.194072 update_engine[1480]: I20260312 05:04:05.192687 1480 update_attempter.cc:509] Updating boot flags...
Mar 12 05:04:05.276036 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2186)
Mar 12 05:04:06.054317 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 05:04:06.065801 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 05:04:06.114464 systemd[1]: Reloading requested from client PID 2199 ('systemctl') (unit session-11.scope)...
Mar 12 05:04:06.114529 systemd[1]: Reloading...
Mar 12 05:04:06.281531 zram_generator::config[2241]: No configuration found.
Mar 12 05:04:06.455538 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 12 05:04:06.567630 systemd[1]: Reloading finished in 452 ms.
Mar 12 05:04:06.644202 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 05:04:06.650001 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 05:04:06.653251 systemd[1]: kubelet.service: Deactivated successfully.
Mar 12 05:04:06.653612 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 05:04:06.660773 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 12 05:04:06.826589 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 12 05:04:06.840269 (kubelet)[2307]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 12 05:04:07.055932 kubelet[2307]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 12 05:04:07.055932 kubelet[2307]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 12 05:04:07.056669 kubelet[2307]: I0312 05:04:07.055950 2307 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 12 05:04:07.263484 kubelet[2307]: I0312 05:04:07.263340 2307 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 12 05:04:07.263484 kubelet[2307]: I0312 05:04:07.263395 2307 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 12 05:04:07.263484 kubelet[2307]: I0312 05:04:07.263494 2307 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 12 05:04:07.263843 kubelet[2307]: I0312 05:04:07.263521 2307 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 12 05:04:07.263887 kubelet[2307]: I0312 05:04:07.263845 2307 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 12 05:04:07.280491 kubelet[2307]: E0312 05:04:07.280368 2307 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.35.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.35.22:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 12 05:04:07.283746 kubelet[2307]: I0312 05:04:07.283147 2307 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 12 05:04:07.290670 kubelet[2307]: E0312 05:04:07.290603 2307 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 12 05:04:07.290776 kubelet[2307]: I0312 05:04:07.290701 2307 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 12 05:04:07.297726 kubelet[2307]: I0312 05:04:07.297688 2307 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 12 05:04:07.299541 kubelet[2307]: I0312 05:04:07.299430 2307 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 12 05:04:07.299817 kubelet[2307]: I0312 05:04:07.299536 2307 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-rxxiv.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 12 05:04:07.300089 kubelet[2307]: I0312 05:04:07.299837 2307 topology_manager.go:138] "Creating topology manager with none policy"
Mar 12 05:04:07.300089 kubelet[2307]: I0312 05:04:07.299854 2307 container_manager_linux.go:306] "Creating device plugin manager"
Mar 12 05:04:07.300089 kubelet[2307]: I0312 05:04:07.300081 2307 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 12 05:04:07.301910 kubelet[2307]: I0312 05:04:07.301882 2307 state_mem.go:36] "Initialized new in-memory state store"
Mar 12 05:04:07.302245 kubelet[2307]: I0312 05:04:07.302221 2307 kubelet.go:475] "Attempting to sync node with API server"
Mar 12 05:04:07.302344 kubelet[2307]: I0312 05:04:07.302255 2307 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 12 05:04:07.302344 kubelet[2307]: I0312 05:04:07.302316 2307 kubelet.go:387] "Adding apiserver pod source"
Mar 12 05:04:07.302463 kubelet[2307]: I0312 05:04:07.302360 2307 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 12 05:04:07.307462 kubelet[2307]: E0312 05:04:07.305338 2307 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.35.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.35.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 12 05:04:07.307462 kubelet[2307]: E0312 05:04:07.305588 2307 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.35.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-rxxiv.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.35.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 12 05:04:07.307462 kubelet[2307]: I0312 05:04:07.305805 2307 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 12 05:04:07.307462 kubelet[2307]: I0312 05:04:07.306660 2307 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 12 05:04:07.307462 kubelet[2307]: I0312 05:04:07.306701 2307 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 12 05:04:07.307462 kubelet[2307]: W0312 05:04:07.306818 2307 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 12 05:04:07.312291 kubelet[2307]: I0312 05:04:07.312261 2307 server.go:1262] "Started kubelet"
Mar 12 05:04:07.314095 kubelet[2307]: I0312 05:04:07.314051 2307 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 12 05:04:07.315948 kubelet[2307]: I0312 05:04:07.315886 2307 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 12 05:04:07.316045 kubelet[2307]: I0312 05:04:07.316004 2307 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 12 05:04:07.316578 kubelet[2307]: I0312 05:04:07.316548 2307 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 12 05:04:07.316743 kubelet[2307]: I0312 05:04:07.316720 2307 server.go:310] "Adding debug handlers to kubelet server"
Mar 12 05:04:07.320906 kubelet[2307]: I0312 05:04:07.320881 2307 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 12 05:04:07.323736 kubelet[2307]: E0312 05:04:07.321339 2307 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.35.22:6443/api/v1/namespaces/default/events\": dial tcp 10.230.35.22:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-rxxiv.gb1.brightbox.com.189bff8a32241369 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-rxxiv.gb1.brightbox.com,UID:srv-rxxiv.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-rxxiv.gb1.brightbox.com,},FirstTimestamp:2026-03-12 05:04:07.312216937 +0000 UTC m=+0.314048409,LastTimestamp:2026-03-12 05:04:07.312216937 +0000 UTC m=+0.314048409,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-rxxiv.gb1.brightbox.com,}"
Mar 12 05:04:07.326765 kubelet[2307]: E0312 05:04:07.326728 2307 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 12 05:04:07.327059 kubelet[2307]: I0312 05:04:07.327020 2307 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 12 05:04:07.330751 kubelet[2307]: I0312 05:04:07.328879 2307 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 12 05:04:07.330751 kubelet[2307]: E0312 05:04:07.329518 2307 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"srv-rxxiv.gb1.brightbox.com\" not found"
Mar 12 05:04:07.331073 kubelet[2307]: I0312 05:04:07.331047 2307 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 12 05:04:07.331691 kubelet[2307]: I0312 05:04:07.331664 2307 reconciler.go:29] "Reconciler: start to sync state"
Mar 12 05:04:07.334744 kubelet[2307]: E0312 05:04:07.333974 2307 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.35.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.35.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 12
05:04:07.334744 kubelet[2307]: E0312 05:04:07.334203 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.35.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-rxxiv.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.35.22:6443: connect: connection refused" interval="200ms" Mar 12 05:04:07.336155 kubelet[2307]: I0312 05:04:07.336101 2307 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 12 05:04:07.342404 kubelet[2307]: I0312 05:04:07.342378 2307 factory.go:223] Registration of the containerd container factory successfully Mar 12 05:04:07.342404 kubelet[2307]: I0312 05:04:07.342403 2307 factory.go:223] Registration of the systemd container factory successfully Mar 12 05:04:07.476718 kubelet[2307]: E0312 05:04:07.476628 2307 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"srv-rxxiv.gb1.brightbox.com\" not found" Mar 12 05:04:07.477481 kubelet[2307]: I0312 05:04:07.477223 2307 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 12 05:04:07.477481 kubelet[2307]: I0312 05:04:07.477252 2307 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 12 05:04:07.477481 kubelet[2307]: I0312 05:04:07.477283 2307 state_mem.go:36] "Initialized new in-memory state store" Mar 12 05:04:07.507994 kubelet[2307]: I0312 05:04:07.507636 2307 policy_none.go:49] "None policy: Start" Mar 12 05:04:07.507994 kubelet[2307]: I0312 05:04:07.507698 2307 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 12 05:04:07.507994 kubelet[2307]: I0312 05:04:07.507737 2307 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 12 05:04:07.522503 kubelet[2307]: I0312 05:04:07.518788 2307 policy_none.go:47] "Start" Mar 12 05:04:07.522503 kubelet[2307]: I0312 05:04:07.519286 2307 
kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 12 05:04:07.522503 kubelet[2307]: I0312 05:04:07.521392 2307 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 12 05:04:07.522503 kubelet[2307]: I0312 05:04:07.521435 2307 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 12 05:04:07.522503 kubelet[2307]: I0312 05:04:07.521538 2307 kubelet.go:2428] "Starting kubelet main sync loop" Mar 12 05:04:07.522503 kubelet[2307]: E0312 05:04:07.521609 2307 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 05:04:07.530680 kubelet[2307]: E0312 05:04:07.529943 2307 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.35.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.35.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 12 05:04:07.535929 kubelet[2307]: E0312 05:04:07.535872 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.35.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-rxxiv.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.35.22:6443: connect: connection refused" interval="400ms" Mar 12 05:04:07.538181 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 12 05:04:07.555899 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 12 05:04:07.573077 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Mar 12 05:04:07.576247 kubelet[2307]: E0312 05:04:07.575132 2307 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 05:04:07.576247 kubelet[2307]: I0312 05:04:07.575517 2307 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 05:04:07.576247 kubelet[2307]: I0312 05:04:07.575552 2307 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 05:04:07.576247 kubelet[2307]: I0312 05:04:07.576036 2307 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 05:04:07.578965 kubelet[2307]: E0312 05:04:07.578939 2307 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 12 05:04:07.579138 kubelet[2307]: E0312 05:04:07.579115 2307 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-rxxiv.gb1.brightbox.com\" not found" Mar 12 05:04:07.678623 kubelet[2307]: I0312 05:04:07.678573 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71cc9ac79e0f68061ef30de756972704-ca-certs\") pod \"kube-apiserver-srv-rxxiv.gb1.brightbox.com\" (UID: \"71cc9ac79e0f68061ef30de756972704\") " pod="kube-system/kube-apiserver-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:07.678827 kubelet[2307]: I0312 05:04:07.678629 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71cc9ac79e0f68061ef30de756972704-k8s-certs\") pod \"kube-apiserver-srv-rxxiv.gb1.brightbox.com\" (UID: \"71cc9ac79e0f68061ef30de756972704\") " pod="kube-system/kube-apiserver-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:07.678827 kubelet[2307]: I0312 05:04:07.678664 2307 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71cc9ac79e0f68061ef30de756972704-usr-share-ca-certificates\") pod \"kube-apiserver-srv-rxxiv.gb1.brightbox.com\" (UID: \"71cc9ac79e0f68061ef30de756972704\") " pod="kube-system/kube-apiserver-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:07.679680 kubelet[2307]: I0312 05:04:07.679653 2307 kubelet_node_status.go:75] "Attempting to register node" node="srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:07.680468 kubelet[2307]: E0312 05:04:07.680415 2307 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.35.22:6443/api/v1/nodes\": dial tcp 10.230.35.22:6443: connect: connection refused" node="srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:07.711644 systemd[1]: Created slice kubepods-burstable-pod71cc9ac79e0f68061ef30de756972704.slice - libcontainer container kubepods-burstable-pod71cc9ac79e0f68061ef30de756972704.slice. Mar 12 05:04:07.723669 kubelet[2307]: E0312 05:04:07.723622 2307 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-rxxiv.gb1.brightbox.com\" not found" node="srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:07.728016 systemd[1]: Created slice kubepods-burstable-pod5cb81e4903389095a2895c42b1be3835.slice - libcontainer container kubepods-burstable-pod5cb81e4903389095a2895c42b1be3835.slice. Mar 12 05:04:07.732565 kubelet[2307]: E0312 05:04:07.732208 2307 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-rxxiv.gb1.brightbox.com\" not found" node="srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:07.737263 systemd[1]: Created slice kubepods-burstable-pod685f344dc3a2fe2146b38ebec6fc17fc.slice - libcontainer container kubepods-burstable-pod685f344dc3a2fe2146b38ebec6fc17fc.slice. 
Mar 12 05:04:07.740031 kubelet[2307]: E0312 05:04:07.739759 2307 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-rxxiv.gb1.brightbox.com\" not found" node="srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:07.779531 kubelet[2307]: I0312 05:04:07.779370 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5cb81e4903389095a2895c42b1be3835-ca-certs\") pod \"kube-controller-manager-srv-rxxiv.gb1.brightbox.com\" (UID: \"5cb81e4903389095a2895c42b1be3835\") " pod="kube-system/kube-controller-manager-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:07.779531 kubelet[2307]: I0312 05:04:07.779456 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5cb81e4903389095a2895c42b1be3835-flexvolume-dir\") pod \"kube-controller-manager-srv-rxxiv.gb1.brightbox.com\" (UID: \"5cb81e4903389095a2895c42b1be3835\") " pod="kube-system/kube-controller-manager-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:07.779531 kubelet[2307]: I0312 05:04:07.779493 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5cb81e4903389095a2895c42b1be3835-k8s-certs\") pod \"kube-controller-manager-srv-rxxiv.gb1.brightbox.com\" (UID: \"5cb81e4903389095a2895c42b1be3835\") " pod="kube-system/kube-controller-manager-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:07.779703 kubelet[2307]: I0312 05:04:07.779553 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/685f344dc3a2fe2146b38ebec6fc17fc-kubeconfig\") pod \"kube-scheduler-srv-rxxiv.gb1.brightbox.com\" (UID: \"685f344dc3a2fe2146b38ebec6fc17fc\") " pod="kube-system/kube-scheduler-srv-rxxiv.gb1.brightbox.com" Mar 12 
05:04:07.779703 kubelet[2307]: I0312 05:04:07.779661 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5cb81e4903389095a2895c42b1be3835-kubeconfig\") pod \"kube-controller-manager-srv-rxxiv.gb1.brightbox.com\" (UID: \"5cb81e4903389095a2895c42b1be3835\") " pod="kube-system/kube-controller-manager-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:07.779816 kubelet[2307]: I0312 05:04:07.779715 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5cb81e4903389095a2895c42b1be3835-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-rxxiv.gb1.brightbox.com\" (UID: \"5cb81e4903389095a2895c42b1be3835\") " pod="kube-system/kube-controller-manager-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:07.883788 kubelet[2307]: I0312 05:04:07.883735 2307 kubelet_node_status.go:75] "Attempting to register node" node="srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:07.884266 kubelet[2307]: E0312 05:04:07.884222 2307 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.35.22:6443/api/v1/nodes\": dial tcp 10.230.35.22:6443: connect: connection refused" node="srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:07.937532 kubelet[2307]: E0312 05:04:07.937409 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.35.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-rxxiv.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.35.22:6443: connect: connection refused" interval="800ms" Mar 12 05:04:08.029674 containerd[1498]: time="2026-03-12T05:04:08.029497740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-rxxiv.gb1.brightbox.com,Uid:71cc9ac79e0f68061ef30de756972704,Namespace:kube-system,Attempt:0,}" Mar 12 05:04:08.035770 containerd[1498]: 
time="2026-03-12T05:04:08.035246676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-rxxiv.gb1.brightbox.com,Uid:5cb81e4903389095a2895c42b1be3835,Namespace:kube-system,Attempt:0,}" Mar 12 05:04:08.042904 containerd[1498]: time="2026-03-12T05:04:08.042864748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-rxxiv.gb1.brightbox.com,Uid:685f344dc3a2fe2146b38ebec6fc17fc,Namespace:kube-system,Attempt:0,}" Mar 12 05:04:08.260684 kubelet[2307]: E0312 05:04:08.260620 2307 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.35.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.35.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 12 05:04:08.288594 kubelet[2307]: I0312 05:04:08.287703 2307 kubelet_node_status.go:75] "Attempting to register node" node="srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:08.288594 kubelet[2307]: E0312 05:04:08.288146 2307 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.35.22:6443/api/v1/nodes\": dial tcp 10.230.35.22:6443: connect: connection refused" node="srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:08.572778 kubelet[2307]: E0312 05:04:08.572580 2307 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.35.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.35.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 12 05:04:08.738553 kubelet[2307]: E0312 05:04:08.738418 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.35.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-rxxiv.gb1.brightbox.com?timeout=10s\": dial tcp 
10.230.35.22:6443: connect: connection refused" interval="1.6s" Mar 12 05:04:08.796810 kubelet[2307]: E0312 05:04:08.796740 2307 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.35.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-rxxiv.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.35.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 12 05:04:08.988898 kubelet[2307]: E0312 05:04:08.988830 2307 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.35.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.35.22:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 12 05:04:09.070932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4188899441.mount: Deactivated successfully. 
Mar 12 05:04:09.079509 containerd[1498]: time="2026-03-12T05:04:09.078796555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 05:04:09.083672 containerd[1498]: time="2026-03-12T05:04:09.083618493Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Mar 12 05:04:09.084451 containerd[1498]: time="2026-03-12T05:04:09.084331117Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 05:04:09.085576 containerd[1498]: time="2026-03-12T05:04:09.085528598Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 05:04:09.087728 containerd[1498]: time="2026-03-12T05:04:09.086708738Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 05:04:09.088618 containerd[1498]: time="2026-03-12T05:04:09.088561325Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 12 05:04:09.090585 containerd[1498]: time="2026-03-12T05:04:09.089226977Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 12 05:04:09.092728 containerd[1498]: time="2026-03-12T05:04:09.092675995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 12 05:04:09.094542 
kubelet[2307]: I0312 05:04:09.093918 2307 kubelet_node_status.go:75] "Attempting to register node" node="srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:09.094542 kubelet[2307]: E0312 05:04:09.094381 2307 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.35.22:6443/api/v1/nodes\": dial tcp 10.230.35.22:6443: connect: connection refused" node="srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:09.096062 containerd[1498]: time="2026-03-12T05:04:09.096024474Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.060702732s" Mar 12 05:04:09.099173 containerd[1498]: time="2026-03-12T05:04:09.099112934Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.0694605s" Mar 12 05:04:09.100933 containerd[1498]: time="2026-03-12T05:04:09.100881294Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.057936042s" Mar 12 05:04:09.304237 containerd[1498]: time="2026-03-12T05:04:09.303917931Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 05:04:09.304237 containerd[1498]: time="2026-03-12T05:04:09.304037991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 05:04:09.304658 containerd[1498]: time="2026-03-12T05:04:09.304592052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:04:09.306358 containerd[1498]: time="2026-03-12T05:04:09.305691787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 05:04:09.306358 containerd[1498]: time="2026-03-12T05:04:09.305772314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 05:04:09.306358 containerd[1498]: time="2026-03-12T05:04:09.305792296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:04:09.306358 containerd[1498]: time="2026-03-12T05:04:09.306012697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:04:09.306358 containerd[1498]: time="2026-03-12T05:04:09.305615717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:04:09.311108 containerd[1498]: time="2026-03-12T05:04:09.310256795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 05:04:09.311108 containerd[1498]: time="2026-03-12T05:04:09.310328186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 05:04:09.311108 containerd[1498]: time="2026-03-12T05:04:09.310352675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:04:09.311108 containerd[1498]: time="2026-03-12T05:04:09.310489382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:04:09.347647 kubelet[2307]: E0312 05:04:09.347556 2307 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.35.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.35.22:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 12 05:04:09.363370 systemd[1]: Started cri-containerd-c7a941306ab777e62045dc387b253f654852c4a60becb34c6fe06abe21feb801.scope - libcontainer container c7a941306ab777e62045dc387b253f654852c4a60becb34c6fe06abe21feb801. Mar 12 05:04:09.375039 systemd[1]: Started cri-containerd-e7f5840a6419419b2d07de039bbfc4327308af0a34d71377e4809bd958ac3205.scope - libcontainer container e7f5840a6419419b2d07de039bbfc4327308af0a34d71377e4809bd958ac3205. Mar 12 05:04:09.387971 systemd[1]: Started cri-containerd-1a12af3e6d246cc6c664806858ea747312300b2b3f0cb2caae7f181afb10df55.scope - libcontainer container 1a12af3e6d246cc6c664806858ea747312300b2b3f0cb2caae7f181afb10df55. 
Mar 12 05:04:09.494057 containerd[1498]: time="2026-03-12T05:04:09.491927128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-rxxiv.gb1.brightbox.com,Uid:5cb81e4903389095a2895c42b1be3835,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7a941306ab777e62045dc387b253f654852c4a60becb34c6fe06abe21feb801\"" Mar 12 05:04:09.498718 containerd[1498]: time="2026-03-12T05:04:09.498665621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-rxxiv.gb1.brightbox.com,Uid:71cc9ac79e0f68061ef30de756972704,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a12af3e6d246cc6c664806858ea747312300b2b3f0cb2caae7f181afb10df55\"" Mar 12 05:04:09.511766 containerd[1498]: time="2026-03-12T05:04:09.511561754Z" level=info msg="CreateContainer within sandbox \"c7a941306ab777e62045dc387b253f654852c4a60becb34c6fe06abe21feb801\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 12 05:04:09.513559 containerd[1498]: time="2026-03-12T05:04:09.513500005Z" level=info msg="CreateContainer within sandbox \"1a12af3e6d246cc6c664806858ea747312300b2b3f0cb2caae7f181afb10df55\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 12 05:04:09.531828 containerd[1498]: time="2026-03-12T05:04:09.531695876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-rxxiv.gb1.brightbox.com,Uid:685f344dc3a2fe2146b38ebec6fc17fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7f5840a6419419b2d07de039bbfc4327308af0a34d71377e4809bd958ac3205\"" Mar 12 05:04:09.541635 containerd[1498]: time="2026-03-12T05:04:09.541498866Z" level=info msg="CreateContainer within sandbox \"e7f5840a6419419b2d07de039bbfc4327308af0a34d71377e4809bd958ac3205\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 12 05:04:09.546414 containerd[1498]: time="2026-03-12T05:04:09.546371301Z" level=info msg="CreateContainer within sandbox 
\"1a12af3e6d246cc6c664806858ea747312300b2b3f0cb2caae7f181afb10df55\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"65333505ea9d7ea655598ee5903269dfb482417fff240598f788bdf30912b90f\"" Mar 12 05:04:09.547677 containerd[1498]: time="2026-03-12T05:04:09.547644407Z" level=info msg="StartContainer for \"65333505ea9d7ea655598ee5903269dfb482417fff240598f788bdf30912b90f\"" Mar 12 05:04:09.549843 containerd[1498]: time="2026-03-12T05:04:09.549784504Z" level=info msg="CreateContainer within sandbox \"c7a941306ab777e62045dc387b253f654852c4a60becb34c6fe06abe21feb801\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3fcd94213cec7cad1ec3e56514f88bf208e1518bc64363c3b1505a966edb6d87\"" Mar 12 05:04:09.552464 containerd[1498]: time="2026-03-12T05:04:09.551354418Z" level=info msg="StartContainer for \"3fcd94213cec7cad1ec3e56514f88bf208e1518bc64363c3b1505a966edb6d87\"" Mar 12 05:04:09.599704 systemd[1]: Started cri-containerd-65333505ea9d7ea655598ee5903269dfb482417fff240598f788bdf30912b90f.scope - libcontainer container 65333505ea9d7ea655598ee5903269dfb482417fff240598f788bdf30912b90f. Mar 12 05:04:09.606367 containerd[1498]: time="2026-03-12T05:04:09.606301212Z" level=info msg="CreateContainer within sandbox \"e7f5840a6419419b2d07de039bbfc4327308af0a34d71377e4809bd958ac3205\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"47af06ed2d72163ea025dd90b6a466406978b95dae531e56299f80c5500e4e90\"" Mar 12 05:04:09.608513 containerd[1498]: time="2026-03-12T05:04:09.607932892Z" level=info msg="StartContainer for \"47af06ed2d72163ea025dd90b6a466406978b95dae531e56299f80c5500e4e90\"" Mar 12 05:04:09.611864 systemd[1]: Started cri-containerd-3fcd94213cec7cad1ec3e56514f88bf208e1518bc64363c3b1505a966edb6d87.scope - libcontainer container 3fcd94213cec7cad1ec3e56514f88bf208e1518bc64363c3b1505a966edb6d87. 
Mar 12 05:04:09.685695 systemd[1]: Started cri-containerd-47af06ed2d72163ea025dd90b6a466406978b95dae531e56299f80c5500e4e90.scope - libcontainer container 47af06ed2d72163ea025dd90b6a466406978b95dae531e56299f80c5500e4e90. Mar 12 05:04:09.728262 containerd[1498]: time="2026-03-12T05:04:09.725848505Z" level=info msg="StartContainer for \"65333505ea9d7ea655598ee5903269dfb482417fff240598f788bdf30912b90f\" returns successfully" Mar 12 05:04:09.746529 containerd[1498]: time="2026-03-12T05:04:09.745232316Z" level=info msg="StartContainer for \"3fcd94213cec7cad1ec3e56514f88bf208e1518bc64363c3b1505a966edb6d87\" returns successfully" Mar 12 05:04:09.805807 containerd[1498]: time="2026-03-12T05:04:09.804874228Z" level=info msg="StartContainer for \"47af06ed2d72163ea025dd90b6a466406978b95dae531e56299f80c5500e4e90\" returns successfully" Mar 12 05:04:10.570951 kubelet[2307]: E0312 05:04:10.566922 2307 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-rxxiv.gb1.brightbox.com\" not found" node="srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:10.576246 kubelet[2307]: E0312 05:04:10.576050 2307 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-rxxiv.gb1.brightbox.com\" not found" node="srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:10.581999 kubelet[2307]: E0312 05:04:10.581968 2307 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-rxxiv.gb1.brightbox.com\" not found" node="srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:10.697875 kubelet[2307]: I0312 05:04:10.697822 2307 kubelet_node_status.go:75] "Attempting to register node" node="srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:11.586034 kubelet[2307]: E0312 05:04:11.585334 2307 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-rxxiv.gb1.brightbox.com\" not found" node="srv-rxxiv.gb1.brightbox.com" Mar 
12 05:04:11.586034 kubelet[2307]: E0312 05:04:11.585485 2307 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-rxxiv.gb1.brightbox.com\" not found" node="srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:11.586743 kubelet[2307]: E0312 05:04:11.586582 2307 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-rxxiv.gb1.brightbox.com\" not found" node="srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:12.307684 kubelet[2307]: I0312 05:04:12.307622 2307 apiserver.go:52] "Watching apiserver" Mar 12 05:04:12.331718 kubelet[2307]: I0312 05:04:12.331666 2307 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 12 05:04:12.339672 kubelet[2307]: E0312 05:04:12.339597 2307 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-rxxiv.gb1.brightbox.com\" not found" node="srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:12.392963 kubelet[2307]: E0312 05:04:12.392789 2307 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-rxxiv.gb1.brightbox.com.189bff8a32241369 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-rxxiv.gb1.brightbox.com,UID:srv-rxxiv.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-rxxiv.gb1.brightbox.com,},FirstTimestamp:2026-03-12 05:04:07.312216937 +0000 UTC m=+0.314048409,LastTimestamp:2026-03-12 05:04:07.312216937 +0000 UTC m=+0.314048409,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-rxxiv.gb1.brightbox.com,}" Mar 12 05:04:12.438569 kubelet[2307]: I0312 05:04:12.438499 2307 kubelet_node_status.go:78] "Successfully registered node" 
node="srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:12.530813 kubelet[2307]: I0312 05:04:12.530351 2307 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:12.542271 kubelet[2307]: E0312 05:04:12.542225 2307 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-rxxiv.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:12.542809 kubelet[2307]: I0312 05:04:12.542567 2307 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:12.545810 kubelet[2307]: E0312 05:04:12.545596 2307 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-rxxiv.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:12.545810 kubelet[2307]: I0312 05:04:12.545630 2307 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:12.548453 kubelet[2307]: E0312 05:04:12.548398 2307 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-rxxiv.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:12.586255 kubelet[2307]: I0312 05:04:12.584907 2307 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:12.586255 kubelet[2307]: I0312 05:04:12.585374 2307 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:12.596453 kubelet[2307]: E0312 05:04:12.596387 2307 kubelet.go:3222] "Failed creating a mirror pod" 
err="pods \"kube-apiserver-srv-rxxiv.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:12.598692 kubelet[2307]: E0312 05:04:12.598471 2307 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-rxxiv.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:13.589073 kubelet[2307]: I0312 05:04:13.588983 2307 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:13.596711 kubelet[2307]: I0312 05:04:13.596286 2307 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 05:04:13.937813 kubelet[2307]: I0312 05:04:13.937719 2307 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:13.947383 kubelet[2307]: I0312 05:04:13.946653 2307 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 05:04:14.491170 systemd[1]: Reloading requested from client PID 2595 ('systemctl') (unit session-11.scope)... Mar 12 05:04:14.491216 systemd[1]: Reloading... Mar 12 05:04:14.604519 zram_generator::config[2630]: No configuration found. Mar 12 05:04:14.801891 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 12 05:04:14.937384 systemd[1]: Reloading finished in 445 ms. Mar 12 05:04:15.002137 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 12 05:04:15.018881 systemd[1]: kubelet.service: Deactivated successfully. Mar 12 05:04:15.019641 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 05:04:15.019846 systemd[1]: kubelet.service: Consumed 1.056s CPU time, 122.7M memory peak, 0B memory swap peak. Mar 12 05:04:15.039140 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 12 05:04:15.214493 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 12 05:04:15.227102 (kubelet)[2698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 12 05:04:15.376682 kubelet[2698]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 12 05:04:15.376682 kubelet[2698]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 12 05:04:15.378472 kubelet[2698]: I0312 05:04:15.377342 2698 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 12 05:04:15.387361 kubelet[2698]: I0312 05:04:15.387239 2698 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 12 05:04:15.387361 kubelet[2698]: I0312 05:04:15.387280 2698 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 12 05:04:15.387361 kubelet[2698]: I0312 05:04:15.387315 2698 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 12 05:04:15.387361 kubelet[2698]: I0312 05:04:15.387326 2698 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 12 05:04:15.387884 kubelet[2698]: I0312 05:04:15.387808 2698 server.go:956] "Client rotation is on, will bootstrap in background" Mar 12 05:04:15.389842 kubelet[2698]: I0312 05:04:15.389807 2698 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 12 05:04:15.395476 kubelet[2698]: I0312 05:04:15.395405 2698 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 12 05:04:15.402202 kubelet[2698]: E0312 05:04:15.402132 2698 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 12 05:04:15.402202 kubelet[2698]: I0312 05:04:15.402214 2698 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 12 05:04:15.410474 kubelet[2698]: I0312 05:04:15.409906 2698 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 12 05:04:15.410474 kubelet[2698]: I0312 05:04:15.410322 2698 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 12 05:04:15.410735 kubelet[2698]: I0312 05:04:15.410379 2698 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-rxxiv.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 12 05:04:15.410935 kubelet[2698]: I0312 05:04:15.410739 2698 topology_manager.go:138] "Creating topology manager with none policy" Mar 12 
05:04:15.410935 kubelet[2698]: I0312 05:04:15.410756 2698 container_manager_linux.go:306] "Creating device plugin manager" Mar 12 05:04:15.410935 kubelet[2698]: I0312 05:04:15.410806 2698 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 12 05:04:15.411273 kubelet[2698]: I0312 05:04:15.411245 2698 state_mem.go:36] "Initialized new in-memory state store" Mar 12 05:04:15.412491 kubelet[2698]: I0312 05:04:15.411583 2698 kubelet.go:475] "Attempting to sync node with API server" Mar 12 05:04:15.412491 kubelet[2698]: I0312 05:04:15.411610 2698 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 12 05:04:15.412491 kubelet[2698]: I0312 05:04:15.411658 2698 kubelet.go:387] "Adding apiserver pod source" Mar 12 05:04:15.412491 kubelet[2698]: I0312 05:04:15.411678 2698 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 12 05:04:15.415975 kubelet[2698]: I0312 05:04:15.415948 2698 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 12 05:04:15.416768 kubelet[2698]: I0312 05:04:15.416736 2698 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 12 05:04:15.416846 kubelet[2698]: I0312 05:04:15.416783 2698 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 12 05:04:15.437383 kubelet[2698]: I0312 05:04:15.437350 2698 server.go:1262] "Started kubelet" Mar 12 05:04:15.443660 kubelet[2698]: I0312 05:04:15.442402 2698 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 12 05:04:15.449484 kubelet[2698]: I0312 05:04:15.449168 2698 server.go:310] "Adding debug handlers to kubelet server" Mar 12 05:04:15.451468 kubelet[2698]: I0312 05:04:15.450571 2698 ratelimit.go:56] "Setting rate limiting for 
endpoint" service="podresources" qps=100 burstTokens=10 Mar 12 05:04:15.457194 kubelet[2698]: I0312 05:04:15.456346 2698 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 12 05:04:15.462679 kubelet[2698]: I0312 05:04:15.452065 2698 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 12 05:04:15.463389 kubelet[2698]: I0312 05:04:15.450979 2698 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 12 05:04:15.465184 kubelet[2698]: I0312 05:04:15.462852 2698 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 12 05:04:15.470668 kubelet[2698]: I0312 05:04:15.469629 2698 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 12 05:04:15.473453 kubelet[2698]: I0312 05:04:15.469645 2698 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 12 05:04:15.473912 kubelet[2698]: I0312 05:04:15.473892 2698 reconciler.go:29] "Reconciler: start to sync state" Mar 12 05:04:15.474949 kubelet[2698]: E0312 05:04:15.474902 2698 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 12 05:04:15.476877 kubelet[2698]: I0312 05:04:15.476672 2698 factory.go:223] Registration of the systemd container factory successfully Mar 12 05:04:15.477658 kubelet[2698]: I0312 05:04:15.477627 2698 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 12 05:04:15.485496 kubelet[2698]: I0312 05:04:15.483965 2698 factory.go:223] Registration of the containerd container factory successfully Mar 12 05:04:15.493968 sudo[2715]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 12 05:04:15.494657 sudo[2715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 12 05:04:15.497008 kubelet[2698]: I0312 05:04:15.496972 2698 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 12 05:04:15.524596 kubelet[2698]: I0312 05:04:15.524554 2698 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 12 05:04:15.524596 kubelet[2698]: I0312 05:04:15.524591 2698 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 12 05:04:15.524822 kubelet[2698]: I0312 05:04:15.524624 2698 kubelet.go:2428] "Starting kubelet main sync loop" Mar 12 05:04:15.524822 kubelet[2698]: E0312 05:04:15.524691 2698 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 12 05:04:15.579820 kubelet[2698]: I0312 05:04:15.579760 2698 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 12 05:04:15.579820 kubelet[2698]: I0312 05:04:15.579807 2698 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 12 05:04:15.579820 kubelet[2698]: I0312 05:04:15.579835 2698 state_mem.go:36] "Initialized new in-memory state store" Mar 12 05:04:15.580103 kubelet[2698]: I0312 05:04:15.580046 2698 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 12 05:04:15.580103 kubelet[2698]: I0312 05:04:15.580066 2698 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 12 05:04:15.580103 kubelet[2698]: I0312 05:04:15.580097 2698 policy_none.go:49] "None policy: Start" Mar 12 05:04:15.580226 kubelet[2698]: I0312 05:04:15.580122 2698 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 12 05:04:15.580226 kubelet[2698]: I0312 05:04:15.580142 2698 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 12 05:04:15.581375 kubelet[2698]: I0312 05:04:15.580309 2698 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 12 05:04:15.581375 kubelet[2698]: I0312 05:04:15.580348 2698 policy_none.go:47] "Start" Mar 12 05:04:15.590095 kubelet[2698]: E0312 05:04:15.590059 2698 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 12 05:04:15.590370 kubelet[2698]: I0312 05:04:15.590345 
2698 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 12 05:04:15.590433 kubelet[2698]: I0312 05:04:15.590376 2698 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 12 05:04:15.594653 kubelet[2698]: I0312 05:04:15.593783 2698 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 12 05:04:15.595962 kubelet[2698]: E0312 05:04:15.595749 2698 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 12 05:04:15.626067 kubelet[2698]: I0312 05:04:15.626024 2698 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:15.632117 kubelet[2698]: I0312 05:04:15.630989 2698 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:15.638547 kubelet[2698]: I0312 05:04:15.638493 2698 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:15.644435 kubelet[2698]: I0312 05:04:15.644321 2698 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 05:04:15.647259 kubelet[2698]: I0312 05:04:15.646698 2698 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 05:04:15.647259 kubelet[2698]: E0312 05:04:15.646796 2698 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-rxxiv.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:15.649397 kubelet[2698]: I0312 05:04:15.649366 2698 warnings.go:110] "Warning: metadata.name: this is 
used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 05:04:15.649560 kubelet[2698]: E0312 05:04:15.649417 2698 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-rxxiv.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:15.676752 kubelet[2698]: I0312 05:04:15.675283 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71cc9ac79e0f68061ef30de756972704-ca-certs\") pod \"kube-apiserver-srv-rxxiv.gb1.brightbox.com\" (UID: \"71cc9ac79e0f68061ef30de756972704\") " pod="kube-system/kube-apiserver-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:15.676752 kubelet[2698]: I0312 05:04:15.675344 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5cb81e4903389095a2895c42b1be3835-ca-certs\") pod \"kube-controller-manager-srv-rxxiv.gb1.brightbox.com\" (UID: \"5cb81e4903389095a2895c42b1be3835\") " pod="kube-system/kube-controller-manager-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:15.676752 kubelet[2698]: I0312 05:04:15.675375 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5cb81e4903389095a2895c42b1be3835-flexvolume-dir\") pod \"kube-controller-manager-srv-rxxiv.gb1.brightbox.com\" (UID: \"5cb81e4903389095a2895c42b1be3835\") " pod="kube-system/kube-controller-manager-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:15.676752 kubelet[2698]: I0312 05:04:15.675407 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5cb81e4903389095a2895c42b1be3835-kubeconfig\") pod \"kube-controller-manager-srv-rxxiv.gb1.brightbox.com\" (UID: 
\"5cb81e4903389095a2895c42b1be3835\") " pod="kube-system/kube-controller-manager-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:15.676752 kubelet[2698]: I0312 05:04:15.675473 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5cb81e4903389095a2895c42b1be3835-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-rxxiv.gb1.brightbox.com\" (UID: \"5cb81e4903389095a2895c42b1be3835\") " pod="kube-system/kube-controller-manager-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:15.677098 kubelet[2698]: I0312 05:04:15.675513 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71cc9ac79e0f68061ef30de756972704-k8s-certs\") pod \"kube-apiserver-srv-rxxiv.gb1.brightbox.com\" (UID: \"71cc9ac79e0f68061ef30de756972704\") " pod="kube-system/kube-apiserver-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:15.677098 kubelet[2698]: I0312 05:04:15.675547 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71cc9ac79e0f68061ef30de756972704-usr-share-ca-certificates\") pod \"kube-apiserver-srv-rxxiv.gb1.brightbox.com\" (UID: \"71cc9ac79e0f68061ef30de756972704\") " pod="kube-system/kube-apiserver-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:15.677098 kubelet[2698]: I0312 05:04:15.675573 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5cb81e4903389095a2895c42b1be3835-k8s-certs\") pod \"kube-controller-manager-srv-rxxiv.gb1.brightbox.com\" (UID: \"5cb81e4903389095a2895c42b1be3835\") " pod="kube-system/kube-controller-manager-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:15.677098 kubelet[2698]: I0312 05:04:15.675600 2698 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/685f344dc3a2fe2146b38ebec6fc17fc-kubeconfig\") pod \"kube-scheduler-srv-rxxiv.gb1.brightbox.com\" (UID: \"685f344dc3a2fe2146b38ebec6fc17fc\") " pod="kube-system/kube-scheduler-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:15.720511 kubelet[2698]: I0312 05:04:15.719420 2698 kubelet_node_status.go:75] "Attempting to register node" node="srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:15.731036 kubelet[2698]: I0312 05:04:15.730671 2698 kubelet_node_status.go:124] "Node was previously registered" node="srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:15.731036 kubelet[2698]: I0312 05:04:15.730831 2698 kubelet_node_status.go:78] "Successfully registered node" node="srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:16.413513 kubelet[2698]: I0312 05:04:16.412772 2698 apiserver.go:52] "Watching apiserver" Mar 12 05:04:16.449859 sudo[2715]: pam_unix(sudo:session): session closed for user root Mar 12 05:04:16.474547 kubelet[2698]: I0312 05:04:16.474305 2698 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 12 05:04:16.556082 kubelet[2698]: I0312 05:04:16.555675 2698 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:16.557461 kubelet[2698]: I0312 05:04:16.556935 2698 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:16.559726 kubelet[2698]: I0312 05:04:16.559694 2698 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:16.571842 kubelet[2698]: I0312 05:04:16.571792 2698 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 05:04:16.572042 kubelet[2698]: E0312 
05:04:16.571882 2698 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-rxxiv.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:16.575746 kubelet[2698]: I0312 05:04:16.575502 2698 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 05:04:16.575746 kubelet[2698]: E0312 05:04:16.575555 2698 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-rxxiv.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:16.576580 kubelet[2698]: I0312 05:04:16.576550 2698 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 12 05:04:16.576679 kubelet[2698]: E0312 05:04:16.576624 2698 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-rxxiv.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-rxxiv.gb1.brightbox.com" Mar 12 05:04:16.621815 kubelet[2698]: I0312 05:04:16.621730 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-rxxiv.gb1.brightbox.com" podStartSLOduration=1.621694473 podStartE2EDuration="1.621694473s" podCreationTimestamp="2026-03-12 05:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 05:04:16.604670966 +0000 UTC m=+1.369785735" watchObservedRunningTime="2026-03-12 05:04:16.621694473 +0000 UTC m=+1.386809237" Mar 12 05:04:16.622160 kubelet[2698]: I0312 05:04:16.621882 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-rxxiv.gb1.brightbox.com" podStartSLOduration=3.62187378 
podStartE2EDuration="3.62187378s" podCreationTimestamp="2026-03-12 05:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 05:04:16.620013813 +0000 UTC m=+1.385128607" watchObservedRunningTime="2026-03-12 05:04:16.62187378 +0000 UTC m=+1.386988540" Mar 12 05:04:18.347003 sudo[1776]: pam_unix(sudo:session): session closed for user root Mar 12 05:04:18.437128 sshd[1773]: pam_unix(sshd:session): session closed for user core Mar 12 05:04:18.443620 systemd[1]: sshd@9-10.230.35.22:22-20.161.92.111:51148.service: Deactivated successfully. Mar 12 05:04:18.447283 systemd[1]: session-11.scope: Deactivated successfully. Mar 12 05:04:18.447935 systemd[1]: session-11.scope: Consumed 6.583s CPU time, 156.3M memory peak, 0B memory swap peak. Mar 12 05:04:18.450624 systemd-logind[1479]: Session 11 logged out. Waiting for processes to exit. Mar 12 05:04:18.454190 systemd-logind[1479]: Removed session 11. Mar 12 05:04:20.828615 kubelet[2698]: I0312 05:04:20.828379 2698 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 12 05:04:20.831952 containerd[1498]: time="2026-03-12T05:04:20.829617540Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 12 05:04:20.832403 kubelet[2698]: I0312 05:04:20.829851 2698 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 12 05:04:21.797593 kubelet[2698]: I0312 05:04:21.797501 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-rxxiv.gb1.brightbox.com" podStartSLOduration=8.797471355999999 podStartE2EDuration="8.797471356s" podCreationTimestamp="2026-03-12 05:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 05:04:16.645058996 +0000 UTC m=+1.410173766" watchObservedRunningTime="2026-03-12 05:04:21.797471356 +0000 UTC m=+6.562586118" Mar 12 05:04:21.818977 kubelet[2698]: I0312 05:04:21.815500 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e030fe28-baa7-4f92-960b-9caa3339993c-xtables-lock\") pod \"kube-proxy-6brg6\" (UID: \"e030fe28-baa7-4f92-960b-9caa3339993c\") " pod="kube-system/kube-proxy-6brg6" Mar 12 05:04:21.818977 kubelet[2698]: I0312 05:04:21.815548 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e030fe28-baa7-4f92-960b-9caa3339993c-lib-modules\") pod \"kube-proxy-6brg6\" (UID: \"e030fe28-baa7-4f92-960b-9caa3339993c\") " pod="kube-system/kube-proxy-6brg6" Mar 12 05:04:21.818977 kubelet[2698]: I0312 05:04:21.815580 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p98q7\" (UniqueName: \"kubernetes.io/projected/e030fe28-baa7-4f92-960b-9caa3339993c-kube-api-access-p98q7\") pod \"kube-proxy-6brg6\" (UID: \"e030fe28-baa7-4f92-960b-9caa3339993c\") " pod="kube-system/kube-proxy-6brg6" Mar 12 05:04:21.818977 kubelet[2698]: I0312 05:04:21.815613 2698 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e030fe28-baa7-4f92-960b-9caa3339993c-kube-proxy\") pod \"kube-proxy-6brg6\" (UID: \"e030fe28-baa7-4f92-960b-9caa3339993c\") " pod="kube-system/kube-proxy-6brg6" Mar 12 05:04:21.820540 systemd[1]: Created slice kubepods-besteffort-pode030fe28_baa7_4f92_960b_9caa3339993c.slice - libcontainer container kubepods-besteffort-pode030fe28_baa7_4f92_960b_9caa3339993c.slice. Mar 12 05:04:21.841187 systemd[1]: Created slice kubepods-burstable-pod699562fa_7f4b_44d1_b3f5_c7c2a9dca414.slice - libcontainer container kubepods-burstable-pod699562fa_7f4b_44d1_b3f5_c7c2a9dca414.slice. Mar 12 05:04:21.917008 kubelet[2698]: I0312 05:04:21.916939 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-clustermesh-secrets\") pod \"cilium-4hjnr\" (UID: \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\") " pod="kube-system/cilium-4hjnr" Mar 12 05:04:21.920470 kubelet[2698]: I0312 05:04:21.918470 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8fbn\" (UniqueName: \"kubernetes.io/projected/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-kube-api-access-h8fbn\") pod \"cilium-4hjnr\" (UID: \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\") " pod="kube-system/cilium-4hjnr" Mar 12 05:04:21.920470 kubelet[2698]: I0312 05:04:21.918563 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-hostproc\") pod \"cilium-4hjnr\" (UID: \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\") " pod="kube-system/cilium-4hjnr" Mar 12 05:04:21.920470 kubelet[2698]: I0312 05:04:21.918595 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-cilium-cgroup\") pod \"cilium-4hjnr\" (UID: \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\") " pod="kube-system/cilium-4hjnr" Mar 12 05:04:21.920470 kubelet[2698]: I0312 05:04:21.918619 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-cni-path\") pod \"cilium-4hjnr\" (UID: \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\") " pod="kube-system/cilium-4hjnr" Mar 12 05:04:21.920470 kubelet[2698]: I0312 05:04:21.918644 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-cilium-config-path\") pod \"cilium-4hjnr\" (UID: \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\") " pod="kube-system/cilium-4hjnr" Mar 12 05:04:21.920470 kubelet[2698]: I0312 05:04:21.918680 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-bpf-maps\") pod \"cilium-4hjnr\" (UID: \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\") " pod="kube-system/cilium-4hjnr" Mar 12 05:04:21.920770 kubelet[2698]: I0312 05:04:21.918706 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-lib-modules\") pod \"cilium-4hjnr\" (UID: \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\") " pod="kube-system/cilium-4hjnr" Mar 12 05:04:21.920770 kubelet[2698]: I0312 05:04:21.918733 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-xtables-lock\") pod \"cilium-4hjnr\" (UID: 
\"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\") " pod="kube-system/cilium-4hjnr" Mar 12 05:04:21.920770 kubelet[2698]: I0312 05:04:21.918757 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-host-proc-sys-kernel\") pod \"cilium-4hjnr\" (UID: \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\") " pod="kube-system/cilium-4hjnr" Mar 12 05:04:21.920770 kubelet[2698]: I0312 05:04:21.918793 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-hubble-tls\") pod \"cilium-4hjnr\" (UID: \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\") " pod="kube-system/cilium-4hjnr" Mar 12 05:04:21.920770 kubelet[2698]: I0312 05:04:21.918849 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-cilium-run\") pod \"cilium-4hjnr\" (UID: \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\") " pod="kube-system/cilium-4hjnr" Mar 12 05:04:21.920770 kubelet[2698]: I0312 05:04:21.918875 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-etc-cni-netd\") pod \"cilium-4hjnr\" (UID: \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\") " pod="kube-system/cilium-4hjnr" Mar 12 05:04:21.921127 kubelet[2698]: I0312 05:04:21.918901 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-host-proc-sys-net\") pod \"cilium-4hjnr\" (UID: \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\") " pod="kube-system/cilium-4hjnr" Mar 12 05:04:22.086257 systemd[1]: Created slice 
kubepods-besteffort-podad39d82d_bb57_4f95_b5c3_1921844cc824.slice - libcontainer container kubepods-besteffort-podad39d82d_bb57_4f95_b5c3_1921844cc824.slice. Mar 12 05:04:22.121248 kubelet[2698]: I0312 05:04:22.121191 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad39d82d-bb57-4f95-b5c3-1921844cc824-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-9cdl8\" (UID: \"ad39d82d-bb57-4f95-b5c3-1921844cc824\") " pod="kube-system/cilium-operator-6f9c7c5859-9cdl8" Mar 12 05:04:22.121478 kubelet[2698]: I0312 05:04:22.121294 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlq86\" (UniqueName: \"kubernetes.io/projected/ad39d82d-bb57-4f95-b5c3-1921844cc824-kube-api-access-rlq86\") pod \"cilium-operator-6f9c7c5859-9cdl8\" (UID: \"ad39d82d-bb57-4f95-b5c3-1921844cc824\") " pod="kube-system/cilium-operator-6f9c7c5859-9cdl8" Mar 12 05:04:22.139777 containerd[1498]: time="2026-03-12T05:04:22.139709070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6brg6,Uid:e030fe28-baa7-4f92-960b-9caa3339993c,Namespace:kube-system,Attempt:0,}" Mar 12 05:04:22.150559 containerd[1498]: time="2026-03-12T05:04:22.150515368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4hjnr,Uid:699562fa-7f4b-44d1-b3f5-c7c2a9dca414,Namespace:kube-system,Attempt:0,}" Mar 12 05:04:22.192348 containerd[1498]: time="2026-03-12T05:04:22.191321060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 05:04:22.192348 containerd[1498]: time="2026-03-12T05:04:22.191501742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 05:04:22.192348 containerd[1498]: time="2026-03-12T05:04:22.191529306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:04:22.193186 containerd[1498]: time="2026-03-12T05:04:22.192578646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:04:22.193949 containerd[1498]: time="2026-03-12T05:04:22.193692879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 05:04:22.193949 containerd[1498]: time="2026-03-12T05:04:22.193773767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 05:04:22.193949 containerd[1498]: time="2026-03-12T05:04:22.193791242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:04:22.196226 containerd[1498]: time="2026-03-12T05:04:22.194072081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:04:22.235652 systemd[1]: Started cri-containerd-d1d6643e3dbdeb64ac810ab1a1f441c150f0da1f7f01e3953acb25afbdda00cc.scope - libcontainer container d1d6643e3dbdeb64ac810ab1a1f441c150f0da1f7f01e3953acb25afbdda00cc. Mar 12 05:04:22.245636 systemd[1]: Started cri-containerd-711661f1a05f251069ac3dc995feab3fb5c5cda9d4bf228956d53107d0e50d3f.scope - libcontainer container 711661f1a05f251069ac3dc995feab3fb5c5cda9d4bf228956d53107d0e50d3f. 
Mar 12 05:04:22.291107 containerd[1498]: time="2026-03-12T05:04:22.290880334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4hjnr,Uid:699562fa-7f4b-44d1-b3f5-c7c2a9dca414,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1d6643e3dbdeb64ac810ab1a1f441c150f0da1f7f01e3953acb25afbdda00cc\"" Mar 12 05:04:22.296884 containerd[1498]: time="2026-03-12T05:04:22.296398823Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 12 05:04:22.296884 containerd[1498]: time="2026-03-12T05:04:22.296816575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6brg6,Uid:e030fe28-baa7-4f92-960b-9caa3339993c,Namespace:kube-system,Attempt:0,} returns sandbox id \"711661f1a05f251069ac3dc995feab3fb5c5cda9d4bf228956d53107d0e50d3f\"" Mar 12 05:04:22.304896 containerd[1498]: time="2026-03-12T05:04:22.304849900Z" level=info msg="CreateContainer within sandbox \"711661f1a05f251069ac3dc995feab3fb5c5cda9d4bf228956d53107d0e50d3f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 12 05:04:22.325477 containerd[1498]: time="2026-03-12T05:04:22.324830040Z" level=info msg="CreateContainer within sandbox \"711661f1a05f251069ac3dc995feab3fb5c5cda9d4bf228956d53107d0e50d3f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d628932a60a716ee3515c54c766620968db22d91acaef86c3c28eccc04730f97\"" Mar 12 05:04:22.329033 containerd[1498]: time="2026-03-12T05:04:22.328965457Z" level=info msg="StartContainer for \"d628932a60a716ee3515c54c766620968db22d91acaef86c3c28eccc04730f97\"" Mar 12 05:04:22.367668 systemd[1]: Started cri-containerd-d628932a60a716ee3515c54c766620968db22d91acaef86c3c28eccc04730f97.scope - libcontainer container d628932a60a716ee3515c54c766620968db22d91acaef86c3c28eccc04730f97. 
Mar 12 05:04:22.397345 containerd[1498]: time="2026-03-12T05:04:22.397229628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-9cdl8,Uid:ad39d82d-bb57-4f95-b5c3-1921844cc824,Namespace:kube-system,Attempt:0,}" Mar 12 05:04:22.416030 containerd[1498]: time="2026-03-12T05:04:22.415975063Z" level=info msg="StartContainer for \"d628932a60a716ee3515c54c766620968db22d91acaef86c3c28eccc04730f97\" returns successfully" Mar 12 05:04:22.447033 containerd[1498]: time="2026-03-12T05:04:22.446837381Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 05:04:22.449467 containerd[1498]: time="2026-03-12T05:04:22.447545763Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 05:04:22.449467 containerd[1498]: time="2026-03-12T05:04:22.447638667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:04:22.449467 containerd[1498]: time="2026-03-12T05:04:22.447911129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:04:22.475669 systemd[1]: Started cri-containerd-fee22395cd6d165eff56c41dcc0e93c9349b6311dc89f2ec1a44ce494660fe79.scope - libcontainer container fee22395cd6d165eff56c41dcc0e93c9349b6311dc89f2ec1a44ce494660fe79. 
Mar 12 05:04:22.554587 containerd[1498]: time="2026-03-12T05:04:22.554536084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-9cdl8,Uid:ad39d82d-bb57-4f95-b5c3-1921844cc824,Namespace:kube-system,Attempt:0,} returns sandbox id \"fee22395cd6d165eff56c41dcc0e93c9349b6311dc89f2ec1a44ce494660fe79\"" Mar 12 05:04:23.927060 kubelet[2698]: I0312 05:04:23.926924 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6brg6" podStartSLOduration=2.926903795 podStartE2EDuration="2.926903795s" podCreationTimestamp="2026-03-12 05:04:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 05:04:22.607551805 +0000 UTC m=+7.372666574" watchObservedRunningTime="2026-03-12 05:04:23.926903795 +0000 UTC m=+8.692018559" Mar 12 05:04:32.987166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount128574709.mount: Deactivated successfully. 
Mar 12 05:04:36.300006 containerd[1498]: time="2026-03-12T05:04:36.285272324Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 12 05:04:36.314290 containerd[1498]: time="2026-03-12T05:04:36.314235807Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.017717378s" Mar 12 05:04:36.314535 containerd[1498]: time="2026-03-12T05:04:36.314503039Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 12 05:04:36.320665 containerd[1498]: time="2026-03-12T05:04:36.319840189Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 05:04:36.321708 containerd[1498]: time="2026-03-12T05:04:36.321085327Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 05:04:36.330033 containerd[1498]: time="2026-03-12T05:04:36.329985530Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 12 05:04:36.331693 containerd[1498]: time="2026-03-12T05:04:36.331657627Z" level=info msg="CreateContainer within sandbox \"d1d6643e3dbdeb64ac810ab1a1f441c150f0da1f7f01e3953acb25afbdda00cc\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 12 05:04:36.456228 containerd[1498]: time="2026-03-12T05:04:36.456112869Z" level=info msg="CreateContainer within sandbox \"d1d6643e3dbdeb64ac810ab1a1f441c150f0da1f7f01e3953acb25afbdda00cc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6cebf3ba7e10f78833e9cff839f1aa28e1c6b5de9e4d596a00c46118fc513e4f\"" Mar 12 05:04:36.458153 containerd[1498]: time="2026-03-12T05:04:36.456980807Z" level=info msg="StartContainer for \"6cebf3ba7e10f78833e9cff839f1aa28e1c6b5de9e4d596a00c46118fc513e4f\"" Mar 12 05:04:36.565716 systemd[1]: Started cri-containerd-6cebf3ba7e10f78833e9cff839f1aa28e1c6b5de9e4d596a00c46118fc513e4f.scope - libcontainer container 6cebf3ba7e10f78833e9cff839f1aa28e1c6b5de9e4d596a00c46118fc513e4f. Mar 12 05:04:36.617369 containerd[1498]: time="2026-03-12T05:04:36.617303836Z" level=info msg="StartContainer for \"6cebf3ba7e10f78833e9cff839f1aa28e1c6b5de9e4d596a00c46118fc513e4f\" returns successfully" Mar 12 05:04:36.647406 systemd[1]: cri-containerd-6cebf3ba7e10f78833e9cff839f1aa28e1c6b5de9e4d596a00c46118fc513e4f.scope: Deactivated successfully. Mar 12 05:04:36.876100 containerd[1498]: time="2026-03-12T05:04:36.869323409Z" level=info msg="shim disconnected" id=6cebf3ba7e10f78833e9cff839f1aa28e1c6b5de9e4d596a00c46118fc513e4f namespace=k8s.io Mar 12 05:04:36.876100 containerd[1498]: time="2026-03-12T05:04:36.876085698Z" level=warning msg="cleaning up after shim disconnected" id=6cebf3ba7e10f78833e9cff839f1aa28e1c6b5de9e4d596a00c46118fc513e4f namespace=k8s.io Mar 12 05:04:36.876617 containerd[1498]: time="2026-03-12T05:04:36.876113108Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 05:04:37.423256 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6cebf3ba7e10f78833e9cff839f1aa28e1c6b5de9e4d596a00c46118fc513e4f-rootfs.mount: Deactivated successfully. 
Mar 12 05:04:37.642410 containerd[1498]: time="2026-03-12T05:04:37.642360085Z" level=info msg="CreateContainer within sandbox \"d1d6643e3dbdeb64ac810ab1a1f441c150f0da1f7f01e3953acb25afbdda00cc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 12 05:04:37.701197 containerd[1498]: time="2026-03-12T05:04:37.700850302Z" level=info msg="CreateContainer within sandbox \"d1d6643e3dbdeb64ac810ab1a1f441c150f0da1f7f01e3953acb25afbdda00cc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b97df8391ea575048e3650dad20b69b6749ac29803c2dad395d50d0dc11f5643\"" Mar 12 05:04:37.702471 containerd[1498]: time="2026-03-12T05:04:37.702138546Z" level=info msg="StartContainer for \"b97df8391ea575048e3650dad20b69b6749ac29803c2dad395d50d0dc11f5643\"" Mar 12 05:04:37.763576 systemd[1]: Started cri-containerd-b97df8391ea575048e3650dad20b69b6749ac29803c2dad395d50d0dc11f5643.scope - libcontainer container b97df8391ea575048e3650dad20b69b6749ac29803c2dad395d50d0dc11f5643. Mar 12 05:04:37.830887 containerd[1498]: time="2026-03-12T05:04:37.830797343Z" level=info msg="StartContainer for \"b97df8391ea575048e3650dad20b69b6749ac29803c2dad395d50d0dc11f5643\" returns successfully" Mar 12 05:04:37.851775 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 12 05:04:37.852622 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 12 05:04:37.852927 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 12 05:04:37.861345 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 12 05:04:37.862191 systemd[1]: cri-containerd-b97df8391ea575048e3650dad20b69b6749ac29803c2dad395d50d0dc11f5643.scope: Deactivated successfully. 
Mar 12 05:04:37.922152 containerd[1498]: time="2026-03-12T05:04:37.921919905Z" level=info msg="shim disconnected" id=b97df8391ea575048e3650dad20b69b6749ac29803c2dad395d50d0dc11f5643 namespace=k8s.io Mar 12 05:04:37.922152 containerd[1498]: time="2026-03-12T05:04:37.921988030Z" level=warning msg="cleaning up after shim disconnected" id=b97df8391ea575048e3650dad20b69b6749ac29803c2dad395d50d0dc11f5643 namespace=k8s.io Mar 12 05:04:37.922152 containerd[1498]: time="2026-03-12T05:04:37.922005791Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 05:04:37.955828 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 12 05:04:38.424717 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b97df8391ea575048e3650dad20b69b6749ac29803c2dad395d50d0dc11f5643-rootfs.mount: Deactivated successfully. Mar 12 05:04:38.658833 containerd[1498]: time="2026-03-12T05:04:38.658429821Z" level=info msg="CreateContainer within sandbox \"d1d6643e3dbdeb64ac810ab1a1f441c150f0da1f7f01e3953acb25afbdda00cc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 12 05:04:38.703969 containerd[1498]: time="2026-03-12T05:04:38.703796618Z" level=info msg="CreateContainer within sandbox \"d1d6643e3dbdeb64ac810ab1a1f441c150f0da1f7f01e3953acb25afbdda00cc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d712963d39b470fc3abe999d078acd17bb8d4512a68758b68a532b53a67562a3\"" Mar 12 05:04:38.705755 containerd[1498]: time="2026-03-12T05:04:38.704720886Z" level=info msg="StartContainer for \"d712963d39b470fc3abe999d078acd17bb8d4512a68758b68a532b53a67562a3\"" Mar 12 05:04:38.787662 systemd[1]: Started cri-containerd-d712963d39b470fc3abe999d078acd17bb8d4512a68758b68a532b53a67562a3.scope - libcontainer container d712963d39b470fc3abe999d078acd17bb8d4512a68758b68a532b53a67562a3. 
Mar 12 05:04:38.862004 containerd[1498]: time="2026-03-12T05:04:38.861166656Z" level=info msg="StartContainer for \"d712963d39b470fc3abe999d078acd17bb8d4512a68758b68a532b53a67562a3\" returns successfully" Mar 12 05:04:38.866924 systemd[1]: cri-containerd-d712963d39b470fc3abe999d078acd17bb8d4512a68758b68a532b53a67562a3.scope: Deactivated successfully. Mar 12 05:04:38.922919 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d712963d39b470fc3abe999d078acd17bb8d4512a68758b68a532b53a67562a3-rootfs.mount: Deactivated successfully. Mar 12 05:04:39.051679 containerd[1498]: time="2026-03-12T05:04:39.051478116Z" level=info msg="shim disconnected" id=d712963d39b470fc3abe999d078acd17bb8d4512a68758b68a532b53a67562a3 namespace=k8s.io Mar 12 05:04:39.051679 containerd[1498]: time="2026-03-12T05:04:39.051563827Z" level=warning msg="cleaning up after shim disconnected" id=d712963d39b470fc3abe999d078acd17bb8d4512a68758b68a532b53a67562a3 namespace=k8s.io Mar 12 05:04:39.051679 containerd[1498]: time="2026-03-12T05:04:39.051579767Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 05:04:39.344674 containerd[1498]: time="2026-03-12T05:04:39.343200313Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 05:04:39.346185 containerd[1498]: time="2026-03-12T05:04:39.346151613Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 12 05:04:39.347191 containerd[1498]: time="2026-03-12T05:04:39.347159198Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 12 05:04:39.350714 containerd[1498]: time="2026-03-12T05:04:39.350677283Z" level=info msg="Pulled 
image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.020419139s" Mar 12 05:04:39.350866 containerd[1498]: time="2026-03-12T05:04:39.350834785Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 12 05:04:39.358480 containerd[1498]: time="2026-03-12T05:04:39.358418748Z" level=info msg="CreateContainer within sandbox \"fee22395cd6d165eff56c41dcc0e93c9349b6311dc89f2ec1a44ce494660fe79\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 12 05:04:39.383965 containerd[1498]: time="2026-03-12T05:04:39.383889169Z" level=info msg="CreateContainer within sandbox \"fee22395cd6d165eff56c41dcc0e93c9349b6311dc89f2ec1a44ce494660fe79\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d5f79cd6892cf5ad81526430fd4e8336e53517627720efd5de4c069e3e7cb31d\"" Mar 12 05:04:39.386611 containerd[1498]: time="2026-03-12T05:04:39.386568414Z" level=info msg="StartContainer for \"d5f79cd6892cf5ad81526430fd4e8336e53517627720efd5de4c069e3e7cb31d\"" Mar 12 05:04:39.425669 systemd[1]: Started cri-containerd-d5f79cd6892cf5ad81526430fd4e8336e53517627720efd5de4c069e3e7cb31d.scope - libcontainer container d5f79cd6892cf5ad81526430fd4e8336e53517627720efd5de4c069e3e7cb31d. 
Mar 12 05:04:39.476247 containerd[1498]: time="2026-03-12T05:04:39.475413412Z" level=info msg="StartContainer for \"d5f79cd6892cf5ad81526430fd4e8336e53517627720efd5de4c069e3e7cb31d\" returns successfully" Mar 12 05:04:39.673491 containerd[1498]: time="2026-03-12T05:04:39.673314632Z" level=info msg="CreateContainer within sandbox \"d1d6643e3dbdeb64ac810ab1a1f441c150f0da1f7f01e3953acb25afbdda00cc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 12 05:04:39.727035 kubelet[2698]: I0312 05:04:39.722481 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-9cdl8" podStartSLOduration=0.929548672 podStartE2EDuration="17.722455294s" podCreationTimestamp="2026-03-12 05:04:22 +0000 UTC" firstStartedPulling="2026-03-12 05:04:22.558766474 +0000 UTC m=+7.323881229" lastFinishedPulling="2026-03-12 05:04:39.351673095 +0000 UTC m=+24.116787851" observedRunningTime="2026-03-12 05:04:39.721919652 +0000 UTC m=+24.487034421" watchObservedRunningTime="2026-03-12 05:04:39.722455294 +0000 UTC m=+24.487570055" Mar 12 05:04:39.723494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1161869018.mount: Deactivated successfully. Mar 12 05:04:39.732630 containerd[1498]: time="2026-03-12T05:04:39.731913357Z" level=info msg="CreateContainer within sandbox \"d1d6643e3dbdeb64ac810ab1a1f441c150f0da1f7f01e3953acb25afbdda00cc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"939d69ac705331e0f94fe19f7e006cc6b52c2925fcb4b90a9bee1d79b64231ee\"" Mar 12 05:04:39.734086 containerd[1498]: time="2026-03-12T05:04:39.734035765Z" level=info msg="StartContainer for \"939d69ac705331e0f94fe19f7e006cc6b52c2925fcb4b90a9bee1d79b64231ee\"" Mar 12 05:04:39.824827 systemd[1]: Started cri-containerd-939d69ac705331e0f94fe19f7e006cc6b52c2925fcb4b90a9bee1d79b64231ee.scope - libcontainer container 939d69ac705331e0f94fe19f7e006cc6b52c2925fcb4b90a9bee1d79b64231ee. 
Mar 12 05:04:39.928403 systemd[1]: cri-containerd-939d69ac705331e0f94fe19f7e006cc6b52c2925fcb4b90a9bee1d79b64231ee.scope: Deactivated successfully. Mar 12 05:04:39.932467 containerd[1498]: time="2026-03-12T05:04:39.932173479Z" level=info msg="StartContainer for \"939d69ac705331e0f94fe19f7e006cc6b52c2925fcb4b90a9bee1d79b64231ee\" returns successfully" Mar 12 05:04:40.014520 containerd[1498]: time="2026-03-12T05:04:40.014170604Z" level=info msg="shim disconnected" id=939d69ac705331e0f94fe19f7e006cc6b52c2925fcb4b90a9bee1d79b64231ee namespace=k8s.io Mar 12 05:04:40.014520 containerd[1498]: time="2026-03-12T05:04:40.014253706Z" level=warning msg="cleaning up after shim disconnected" id=939d69ac705331e0f94fe19f7e006cc6b52c2925fcb4b90a9bee1d79b64231ee namespace=k8s.io Mar 12 05:04:40.014520 containerd[1498]: time="2026-03-12T05:04:40.014275466Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 12 05:04:40.429211 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-939d69ac705331e0f94fe19f7e006cc6b52c2925fcb4b90a9bee1d79b64231ee-rootfs.mount: Deactivated successfully. Mar 12 05:04:40.676941 containerd[1498]: time="2026-03-12T05:04:40.676577158Z" level=info msg="CreateContainer within sandbox \"d1d6643e3dbdeb64ac810ab1a1f441c150f0da1f7f01e3953acb25afbdda00cc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 12 05:04:40.700801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount180880862.mount: Deactivated successfully. 
Mar 12 05:04:40.706389 containerd[1498]: time="2026-03-12T05:04:40.706264792Z" level=info msg="CreateContainer within sandbox \"d1d6643e3dbdeb64ac810ab1a1f441c150f0da1f7f01e3953acb25afbdda00cc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7f0924eb83b9331f8c6b0eee697b90c7d73dcd2d44fd3e004102ec1db1dbfe03\"" Mar 12 05:04:40.707431 containerd[1498]: time="2026-03-12T05:04:40.707323818Z" level=info msg="StartContainer for \"7f0924eb83b9331f8c6b0eee697b90c7d73dcd2d44fd3e004102ec1db1dbfe03\"" Mar 12 05:04:40.757978 systemd[1]: Started cri-containerd-7f0924eb83b9331f8c6b0eee697b90c7d73dcd2d44fd3e004102ec1db1dbfe03.scope - libcontainer container 7f0924eb83b9331f8c6b0eee697b90c7d73dcd2d44fd3e004102ec1db1dbfe03. Mar 12 05:04:40.806931 containerd[1498]: time="2026-03-12T05:04:40.806870051Z" level=info msg="StartContainer for \"7f0924eb83b9331f8c6b0eee697b90c7d73dcd2d44fd3e004102ec1db1dbfe03\" returns successfully" Mar 12 05:04:41.063352 kubelet[2698]: I0312 05:04:41.062610 2698 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 12 05:04:41.118479 systemd[1]: Created slice kubepods-burstable-podd073b63f_e4bf_4ecc_b9ba_5416796dac37.slice - libcontainer container kubepods-burstable-podd073b63f_e4bf_4ecc_b9ba_5416796dac37.slice. Mar 12 05:04:41.136033 systemd[1]: Created slice kubepods-burstable-pod2a1cac84_0537_4fbc_9fb7_10c8b227cd02.slice - libcontainer container kubepods-burstable-pod2a1cac84_0537_4fbc_9fb7_10c8b227cd02.slice. 
Mar 12 05:04:41.154672 kubelet[2698]: I0312 05:04:41.154581 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d073b63f-e4bf-4ecc-b9ba-5416796dac37-config-volume\") pod \"coredns-66bc5c9577-j6gsq\" (UID: \"d073b63f-e4bf-4ecc-b9ba-5416796dac37\") " pod="kube-system/coredns-66bc5c9577-j6gsq" Mar 12 05:04:41.154672 kubelet[2698]: I0312 05:04:41.154639 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qssmw\" (UniqueName: \"kubernetes.io/projected/d073b63f-e4bf-4ecc-b9ba-5416796dac37-kube-api-access-qssmw\") pod \"coredns-66bc5c9577-j6gsq\" (UID: \"d073b63f-e4bf-4ecc-b9ba-5416796dac37\") " pod="kube-system/coredns-66bc5c9577-j6gsq" Mar 12 05:04:41.154672 kubelet[2698]: I0312 05:04:41.154676 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a1cac84-0537-4fbc-9fb7-10c8b227cd02-config-volume\") pod \"coredns-66bc5c9577-6hjsv\" (UID: \"2a1cac84-0537-4fbc-9fb7-10c8b227cd02\") " pod="kube-system/coredns-66bc5c9577-6hjsv" Mar 12 05:04:41.155034 kubelet[2698]: I0312 05:04:41.154709 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsm4r\" (UniqueName: \"kubernetes.io/projected/2a1cac84-0537-4fbc-9fb7-10c8b227cd02-kube-api-access-wsm4r\") pod \"coredns-66bc5c9577-6hjsv\" (UID: \"2a1cac84-0537-4fbc-9fb7-10c8b227cd02\") " pod="kube-system/coredns-66bc5c9577-6hjsv" Mar 12 05:04:41.438123 containerd[1498]: time="2026-03-12T05:04:41.437213459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-j6gsq,Uid:d073b63f-e4bf-4ecc-b9ba-5416796dac37,Namespace:kube-system,Attempt:0,}" Mar 12 05:04:41.445605 containerd[1498]: time="2026-03-12T05:04:41.445565652Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-6hjsv,Uid:2a1cac84-0537-4fbc-9fb7-10c8b227cd02,Namespace:kube-system,Attempt:0,}" Mar 12 05:04:43.837501 systemd-networkd[1429]: cilium_host: Link UP Mar 12 05:04:43.840190 systemd-networkd[1429]: cilium_net: Link UP Mar 12 05:04:43.840846 systemd-networkd[1429]: cilium_net: Gained carrier Mar 12 05:04:43.841205 systemd-networkd[1429]: cilium_host: Gained carrier Mar 12 05:04:43.928644 systemd-networkd[1429]: cilium_host: Gained IPv6LL Mar 12 05:04:43.976646 systemd-networkd[1429]: cilium_net: Gained IPv6LL Mar 12 05:04:44.032198 systemd-networkd[1429]: cilium_vxlan: Link UP Mar 12 05:04:44.032209 systemd-networkd[1429]: cilium_vxlan: Gained carrier Mar 12 05:04:44.542231 systemd[1]: Started sshd@10-10.230.35.22:22-167.99.88.178:56464.service - OpenSSH per-connection server daemon (167.99.88.178:56464). Mar 12 05:04:44.620483 kernel: NET: Registered PF_ALG protocol family Mar 12 05:04:45.801430 systemd-networkd[1429]: lxc_health: Link UP Mar 12 05:04:45.812046 systemd-networkd[1429]: lxc_health: Gained carrier Mar 12 05:04:45.856331 sshd[3629]: Connection closed by authenticating user root 167.99.88.178 port 56464 [preauth] Mar 12 05:04:45.867495 systemd[1]: sshd@10-10.230.35.22:22-167.99.88.178:56464.service: Deactivated successfully. 
Mar 12 05:04:45.920799 systemd-networkd[1429]: cilium_vxlan: Gained IPv6LL Mar 12 05:04:46.103073 systemd-networkd[1429]: lxc75e8366c065a: Link UP Mar 12 05:04:46.115520 kernel: eth0: renamed from tmp52b6c Mar 12 05:04:46.128046 systemd-networkd[1429]: lxc75e8366c065a: Gained carrier Mar 12 05:04:46.230095 kubelet[2698]: I0312 05:04:46.228452 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4hjnr" podStartSLOduration=11.196851318 podStartE2EDuration="25.228354865s" podCreationTimestamp="2026-03-12 05:04:21 +0000 UTC" firstStartedPulling="2026-03-12 05:04:22.295321563 +0000 UTC m=+7.060436319" lastFinishedPulling="2026-03-12 05:04:36.32682511 +0000 UTC m=+21.091939866" observedRunningTime="2026-03-12 05:04:41.795354626 +0000 UTC m=+26.560469395" watchObservedRunningTime="2026-03-12 05:04:46.228354865 +0000 UTC m=+30.993469626" Mar 12 05:04:46.246925 systemd-networkd[1429]: lxc23c837062554: Link UP Mar 12 05:04:46.266574 kernel: eth0: renamed from tmp5359e Mar 12 05:04:46.276820 systemd-networkd[1429]: lxc23c837062554: Gained carrier Mar 12 05:04:47.136810 systemd-networkd[1429]: lxc_health: Gained IPv6LL Mar 12 05:04:47.200751 systemd-networkd[1429]: lxc75e8366c065a: Gained IPv6LL Mar 12 05:04:48.161742 systemd-networkd[1429]: lxc23c837062554: Gained IPv6LL Mar 12 05:04:52.234493 containerd[1498]: time="2026-03-12T05:04:52.232838190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 05:04:52.234493 containerd[1498]: time="2026-03-12T05:04:52.232996473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 05:04:52.234493 containerd[1498]: time="2026-03-12T05:04:52.233022619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:04:52.234493 containerd[1498]: time="2026-03-12T05:04:52.233195485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:04:52.294891 systemd[1]: Started cri-containerd-52b6cd744ea0539236ecaa54407eea37e99e877db35ce1c65c1e2cc3607b964b.scope - libcontainer container 52b6cd744ea0539236ecaa54407eea37e99e877db35ce1c65c1e2cc3607b964b. Mar 12 05:04:52.306042 containerd[1498]: time="2026-03-12T05:04:52.305885924Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 12 05:04:52.307584 containerd[1498]: time="2026-03-12T05:04:52.306026744Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 12 05:04:52.307584 containerd[1498]: time="2026-03-12T05:04:52.306302750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:04:52.309482 containerd[1498]: time="2026-03-12T05:04:52.308670939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 12 05:04:52.351656 systemd[1]: Started cri-containerd-5359e380f154da0f5b0f8e0f82fd2d0171bd3e55a838330d1bc63b7fe084aa04.scope - libcontainer container 5359e380f154da0f5b0f8e0f82fd2d0171bd3e55a838330d1bc63b7fe084aa04. 
Mar 12 05:04:52.442067 containerd[1498]: time="2026-03-12T05:04:52.441840634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6hjsv,Uid:2a1cac84-0537-4fbc-9fb7-10c8b227cd02,Namespace:kube-system,Attempt:0,} returns sandbox id \"52b6cd744ea0539236ecaa54407eea37e99e877db35ce1c65c1e2cc3607b964b\""
Mar 12 05:04:52.457506 containerd[1498]: time="2026-03-12T05:04:52.456011880Z" level=info msg="CreateContainer within sandbox \"52b6cd744ea0539236ecaa54407eea37e99e877db35ce1c65c1e2cc3607b964b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 12 05:04:52.491172 containerd[1498]: time="2026-03-12T05:04:52.490995844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-j6gsq,Uid:d073b63f-e4bf-4ecc-b9ba-5416796dac37,Namespace:kube-system,Attempt:0,} returns sandbox id \"5359e380f154da0f5b0f8e0f82fd2d0171bd3e55a838330d1bc63b7fe084aa04\""
Mar 12 05:04:52.496963 containerd[1498]: time="2026-03-12T05:04:52.496823695Z" level=info msg="CreateContainer within sandbox \"52b6cd744ea0539236ecaa54407eea37e99e877db35ce1c65c1e2cc3607b964b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"392cf1c1e1d19ec6eba4f6e101a5a4030b88ccff70ef90adf7211ad047db60cb\""
Mar 12 05:04:52.499267 containerd[1498]: time="2026-03-12T05:04:52.499226913Z" level=info msg="StartContainer for \"392cf1c1e1d19ec6eba4f6e101a5a4030b88ccff70ef90adf7211ad047db60cb\""
Mar 12 05:04:52.504591 containerd[1498]: time="2026-03-12T05:04:52.504540243Z" level=info msg="CreateContainer within sandbox \"5359e380f154da0f5b0f8e0f82fd2d0171bd3e55a838330d1bc63b7fe084aa04\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 12 05:04:52.548017 containerd[1498]: time="2026-03-12T05:04:52.547833886Z" level=info msg="CreateContainer within sandbox \"5359e380f154da0f5b0f8e0f82fd2d0171bd3e55a838330d1bc63b7fe084aa04\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3c3ac67c73060a465b8a99f322688beb817592e2ab8d88d1116f4404190d94fa\""
Mar 12 05:04:52.550070 containerd[1498]: time="2026-03-12T05:04:52.549839053Z" level=info msg="StartContainer for \"3c3ac67c73060a465b8a99f322688beb817592e2ab8d88d1116f4404190d94fa\""
Mar 12 05:04:52.572797 systemd[1]: Started cri-containerd-392cf1c1e1d19ec6eba4f6e101a5a4030b88ccff70ef90adf7211ad047db60cb.scope - libcontainer container 392cf1c1e1d19ec6eba4f6e101a5a4030b88ccff70ef90adf7211ad047db60cb.
Mar 12 05:04:52.611684 systemd[1]: Started cri-containerd-3c3ac67c73060a465b8a99f322688beb817592e2ab8d88d1116f4404190d94fa.scope - libcontainer container 3c3ac67c73060a465b8a99f322688beb817592e2ab8d88d1116f4404190d94fa.
Mar 12 05:04:52.646652 containerd[1498]: time="2026-03-12T05:04:52.646573037Z" level=info msg="StartContainer for \"392cf1c1e1d19ec6eba4f6e101a5a4030b88ccff70ef90adf7211ad047db60cb\" returns successfully"
Mar 12 05:04:52.674209 containerd[1498]: time="2026-03-12T05:04:52.674118982Z" level=info msg="StartContainer for \"3c3ac67c73060a465b8a99f322688beb817592e2ab8d88d1116f4404190d94fa\" returns successfully"
Mar 12 05:04:52.769863 kubelet[2698]: I0312 05:04:52.769645 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6hjsv" podStartSLOduration=30.76961633 podStartE2EDuration="30.76961633s" podCreationTimestamp="2026-03-12 05:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 05:04:52.764859411 +0000 UTC m=+37.529974181" watchObservedRunningTime="2026-03-12 05:04:52.76961633 +0000 UTC m=+37.534731094"
Mar 12 05:04:52.788384 kubelet[2698]: I0312 05:04:52.787896 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-j6gsq" podStartSLOduration=30.78772385 podStartE2EDuration="30.78772385s" podCreationTimestamp="2026-03-12 05:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 05:04:52.787244464 +0000 UTC m=+37.552359231" watchObservedRunningTime="2026-03-12 05:04:52.78772385 +0000 UTC m=+37.552838614"
Mar 12 05:04:53.244174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3903871068.mount: Deactivated successfully.
Mar 12 05:05:21.664775 systemd[1]: Started sshd@11-10.230.35.22:22-20.161.92.111:44626.service - OpenSSH per-connection server daemon (20.161.92.111:44626).
Mar 12 05:05:22.248679 sshd[4096]: Accepted publickey for core from 20.161.92.111 port 44626 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:05:22.251568 sshd[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:05:22.266708 systemd-logind[1479]: New session 12 of user core.
Mar 12 05:05:22.274727 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 12 05:05:23.236100 sshd[4096]: pam_unix(sshd:session): session closed for user core
Mar 12 05:05:23.240338 systemd[1]: sshd@11-10.230.35.22:22-20.161.92.111:44626.service: Deactivated successfully.
Mar 12 05:05:23.243024 systemd[1]: session-12.scope: Deactivated successfully.
Mar 12 05:05:23.245548 systemd-logind[1479]: Session 12 logged out. Waiting for processes to exit.
Mar 12 05:05:23.247010 systemd-logind[1479]: Removed session 12.
Mar 12 05:05:28.339882 systemd[1]: Started sshd@12-10.230.35.22:22-20.161.92.111:44628.service - OpenSSH per-connection server daemon (20.161.92.111:44628).
Mar 12 05:05:28.902385 sshd[4114]: Accepted publickey for core from 20.161.92.111 port 44628 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:05:28.904546 sshd[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:05:28.913718 systemd-logind[1479]: New session 13 of user core.
Mar 12 05:05:28.916667 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 12 05:05:29.399587 sshd[4114]: pam_unix(sshd:session): session closed for user core
Mar 12 05:05:29.404811 systemd-logind[1479]: Session 13 logged out. Waiting for processes to exit.
Mar 12 05:05:29.405565 systemd[1]: sshd@12-10.230.35.22:22-20.161.92.111:44628.service: Deactivated successfully.
Mar 12 05:05:29.408445 systemd[1]: session-13.scope: Deactivated successfully.
Mar 12 05:05:29.411482 systemd-logind[1479]: Removed session 13.
Mar 12 05:05:34.509843 systemd[1]: Started sshd@13-10.230.35.22:22-20.161.92.111:50250.service - OpenSSH per-connection server daemon (20.161.92.111:50250).
Mar 12 05:05:35.082115 sshd[4129]: Accepted publickey for core from 20.161.92.111 port 50250 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:05:35.084333 sshd[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:05:35.090893 systemd-logind[1479]: New session 14 of user core.
Mar 12 05:05:35.102703 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 12 05:05:35.580529 sshd[4129]: pam_unix(sshd:session): session closed for user core
Mar 12 05:05:35.585397 systemd[1]: sshd@13-10.230.35.22:22-20.161.92.111:50250.service: Deactivated successfully.
Mar 12 05:05:35.588743 systemd[1]: session-14.scope: Deactivated successfully.
Mar 12 05:05:35.590705 systemd-logind[1479]: Session 14 logged out. Waiting for processes to exit.
Mar 12 05:05:35.592279 systemd-logind[1479]: Removed session 14.
Mar 12 05:05:40.683335 systemd[1]: Started sshd@14-10.230.35.22:22-20.161.92.111:41090.service - OpenSSH per-connection server daemon (20.161.92.111:41090).
Mar 12 05:05:41.250221 sshd[4143]: Accepted publickey for core from 20.161.92.111 port 41090 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:05:41.252912 sshd[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:05:41.260713 systemd-logind[1479]: New session 15 of user core.
Mar 12 05:05:41.270821 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 12 05:05:41.756076 sshd[4143]: pam_unix(sshd:session): session closed for user core
Mar 12 05:05:41.762979 systemd[1]: sshd@14-10.230.35.22:22-20.161.92.111:41090.service: Deactivated successfully.
Mar 12 05:05:41.765961 systemd[1]: session-15.scope: Deactivated successfully.
Mar 12 05:05:41.767324 systemd-logind[1479]: Session 15 logged out. Waiting for processes to exit.
Mar 12 05:05:41.769335 systemd-logind[1479]: Removed session 15.
Mar 12 05:05:41.867800 systemd[1]: Started sshd@15-10.230.35.22:22-20.161.92.111:41106.service - OpenSSH per-connection server daemon (20.161.92.111:41106).
Mar 12 05:05:42.421931 sshd[4157]: Accepted publickey for core from 20.161.92.111 port 41106 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:05:42.425251 sshd[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:05:42.432993 systemd-logind[1479]: New session 16 of user core.
Mar 12 05:05:42.438710 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 12 05:05:42.981427 sshd[4157]: pam_unix(sshd:session): session closed for user core
Mar 12 05:05:42.988011 systemd[1]: sshd@15-10.230.35.22:22-20.161.92.111:41106.service: Deactivated successfully.
Mar 12 05:05:42.991360 systemd[1]: session-16.scope: Deactivated successfully.
Mar 12 05:05:42.992911 systemd-logind[1479]: Session 16 logged out. Waiting for processes to exit.
Mar 12 05:05:42.994883 systemd-logind[1479]: Removed session 16.
Mar 12 05:05:43.084838 systemd[1]: Started sshd@16-10.230.35.22:22-20.161.92.111:41110.service - OpenSSH per-connection server daemon (20.161.92.111:41110).
Mar 12 05:05:43.649133 sshd[4168]: Accepted publickey for core from 20.161.92.111 port 41110 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:05:43.651754 sshd[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:05:43.658966 systemd-logind[1479]: New session 17 of user core.
Mar 12 05:05:43.668682 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 12 05:05:44.134993 sshd[4168]: pam_unix(sshd:session): session closed for user core
Mar 12 05:05:44.141081 systemd[1]: sshd@16-10.230.35.22:22-20.161.92.111:41110.service: Deactivated successfully.
Mar 12 05:05:44.145285 systemd[1]: session-17.scope: Deactivated successfully.
Mar 12 05:05:44.147062 systemd-logind[1479]: Session 17 logged out. Waiting for processes to exit.
Mar 12 05:05:44.148403 systemd-logind[1479]: Removed session 17.
Mar 12 05:05:49.240894 systemd[1]: Started sshd@17-10.230.35.22:22-20.161.92.111:41126.service - OpenSSH per-connection server daemon (20.161.92.111:41126).
Mar 12 05:05:49.800507 sshd[4181]: Accepted publickey for core from 20.161.92.111 port 41126 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:05:49.802768 sshd[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:05:49.811140 systemd-logind[1479]: New session 18 of user core.
Mar 12 05:05:49.818780 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 12 05:05:50.287190 sshd[4181]: pam_unix(sshd:session): session closed for user core
Mar 12 05:05:50.293606 systemd[1]: sshd@17-10.230.35.22:22-20.161.92.111:41126.service: Deactivated successfully.
Mar 12 05:05:50.296975 systemd[1]: session-18.scope: Deactivated successfully.
Mar 12 05:05:50.298581 systemd-logind[1479]: Session 18 logged out. Waiting for processes to exit.
Mar 12 05:05:50.300036 systemd-logind[1479]: Removed session 18.
Mar 12 05:05:55.391933 systemd[1]: Started sshd@18-10.230.35.22:22-20.161.92.111:50216.service - OpenSSH per-connection server daemon (20.161.92.111:50216).
Mar 12 05:05:55.943531 sshd[4196]: Accepted publickey for core from 20.161.92.111 port 50216 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:05:55.946032 sshd[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:05:55.956225 systemd-logind[1479]: New session 19 of user core.
Mar 12 05:05:55.963416 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 12 05:05:56.426621 sshd[4196]: pam_unix(sshd:session): session closed for user core
Mar 12 05:05:56.433196 systemd-logind[1479]: Session 19 logged out. Waiting for processes to exit.
Mar 12 05:05:56.434615 systemd[1]: sshd@18-10.230.35.22:22-20.161.92.111:50216.service: Deactivated successfully.
Mar 12 05:05:56.438255 systemd[1]: session-19.scope: Deactivated successfully.
Mar 12 05:05:56.439916 systemd-logind[1479]: Removed session 19.
Mar 12 05:05:56.534388 systemd[1]: Started sshd@19-10.230.35.22:22-20.161.92.111:50230.service - OpenSSH per-connection server daemon (20.161.92.111:50230).
Mar 12 05:05:57.083186 sshd[4209]: Accepted publickey for core from 20.161.92.111 port 50230 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:05:57.084063 sshd[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:05:57.091972 systemd-logind[1479]: New session 20 of user core.
Mar 12 05:05:57.095678 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 12 05:05:57.954675 sshd[4209]: pam_unix(sshd:session): session closed for user core
Mar 12 05:05:57.962943 systemd-logind[1479]: Session 20 logged out. Waiting for processes to exit.
Mar 12 05:05:57.964220 systemd[1]: sshd@19-10.230.35.22:22-20.161.92.111:50230.service: Deactivated successfully.
Mar 12 05:05:57.967881 systemd[1]: session-20.scope: Deactivated successfully.
Mar 12 05:05:57.969903 systemd-logind[1479]: Removed session 20.
Mar 12 05:05:58.059860 systemd[1]: Started sshd@20-10.230.35.22:22-20.161.92.111:50236.service - OpenSSH per-connection server daemon (20.161.92.111:50236).
Mar 12 05:05:58.632492 sshd[4220]: Accepted publickey for core from 20.161.92.111 port 50236 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:05:58.634772 sshd[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:05:58.641568 systemd-logind[1479]: New session 21 of user core.
Mar 12 05:05:58.649738 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 12 05:05:59.834856 sshd[4220]: pam_unix(sshd:session): session closed for user core
Mar 12 05:05:59.847252 systemd[1]: sshd@20-10.230.35.22:22-20.161.92.111:50236.service: Deactivated successfully.
Mar 12 05:05:59.851036 systemd[1]: session-21.scope: Deactivated successfully.
Mar 12 05:05:59.853937 systemd-logind[1479]: Session 21 logged out. Waiting for processes to exit.
Mar 12 05:05:59.856012 systemd-logind[1479]: Removed session 21.
Mar 12 05:05:59.939900 systemd[1]: Started sshd@21-10.230.35.22:22-20.161.92.111:50246.service - OpenSSH per-connection server daemon (20.161.92.111:50246).
Mar 12 05:06:00.514636 sshd[4236]: Accepted publickey for core from 20.161.92.111 port 50246 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:06:00.518857 sshd[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:06:00.526230 systemd-logind[1479]: New session 22 of user core.
Mar 12 05:06:00.533735 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 12 05:06:01.205797 sshd[4236]: pam_unix(sshd:session): session closed for user core
Mar 12 05:06:01.211485 systemd[1]: sshd@21-10.230.35.22:22-20.161.92.111:50246.service: Deactivated successfully.
Mar 12 05:06:01.215259 systemd[1]: session-22.scope: Deactivated successfully.
Mar 12 05:06:01.218074 systemd-logind[1479]: Session 22 logged out. Waiting for processes to exit.
Mar 12 05:06:01.219640 systemd-logind[1479]: Removed session 22.
Mar 12 05:06:01.321324 systemd[1]: Started sshd@22-10.230.35.22:22-20.161.92.111:54698.service - OpenSSH per-connection server daemon (20.161.92.111:54698).
Mar 12 05:06:01.894527 sshd[4250]: Accepted publickey for core from 20.161.92.111 port 54698 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:06:01.896258 sshd[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:06:01.903880 systemd-logind[1479]: New session 23 of user core.
Mar 12 05:06:01.913695 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 12 05:06:02.383942 sshd[4250]: pam_unix(sshd:session): session closed for user core
Mar 12 05:06:02.389380 systemd[1]: sshd@22-10.230.35.22:22-20.161.92.111:54698.service: Deactivated successfully.
Mar 12 05:06:02.393083 systemd[1]: session-23.scope: Deactivated successfully.
Mar 12 05:06:02.396009 systemd-logind[1479]: Session 23 logged out. Waiting for processes to exit.
Mar 12 05:06:02.398011 systemd-logind[1479]: Removed session 23.
Mar 12 05:06:07.484798 systemd[1]: Started sshd@23-10.230.35.22:22-20.161.92.111:54706.service - OpenSSH per-connection server daemon (20.161.92.111:54706).
Mar 12 05:06:08.046818 sshd[4262]: Accepted publickey for core from 20.161.92.111 port 54706 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:06:08.049222 sshd[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:06:08.056877 systemd-logind[1479]: New session 24 of user core.
Mar 12 05:06:08.065693 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 12 05:06:08.515677 sshd[4262]: pam_unix(sshd:session): session closed for user core
Mar 12 05:06:08.521008 systemd[1]: sshd@23-10.230.35.22:22-20.161.92.111:54706.service: Deactivated successfully.
Mar 12 05:06:08.523593 systemd[1]: session-24.scope: Deactivated successfully.
Mar 12 05:06:08.524607 systemd-logind[1479]: Session 24 logged out. Waiting for processes to exit.
Mar 12 05:06:08.526180 systemd-logind[1479]: Removed session 24.
Mar 12 05:06:13.618848 systemd[1]: Started sshd@24-10.230.35.22:22-20.161.92.111:57790.service - OpenSSH per-connection server daemon (20.161.92.111:57790).
Mar 12 05:06:14.169323 sshd[4277]: Accepted publickey for core from 20.161.92.111 port 57790 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:06:14.171584 sshd[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:06:14.180701 systemd-logind[1479]: New session 25 of user core.
Mar 12 05:06:14.184913 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 12 05:06:14.651154 sshd[4277]: pam_unix(sshd:session): session closed for user core
Mar 12 05:06:14.659934 systemd[1]: sshd@24-10.230.35.22:22-20.161.92.111:57790.service: Deactivated successfully.
Mar 12 05:06:14.665500 systemd[1]: session-25.scope: Deactivated successfully.
Mar 12 05:06:14.667364 systemd-logind[1479]: Session 25 logged out. Waiting for processes to exit.
Mar 12 05:06:14.669518 systemd-logind[1479]: Removed session 25.
Mar 12 05:06:19.754773 systemd[1]: Started sshd@25-10.230.35.22:22-20.161.92.111:57798.service - OpenSSH per-connection server daemon (20.161.92.111:57798).
Mar 12 05:06:20.320803 sshd[4292]: Accepted publickey for core from 20.161.92.111 port 57798 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:06:20.323262 sshd[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:06:20.330398 systemd-logind[1479]: New session 26 of user core.
Mar 12 05:06:20.335647 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 12 05:06:20.812045 sshd[4292]: pam_unix(sshd:session): session closed for user core
Mar 12 05:06:20.818683 systemd[1]: sshd@25-10.230.35.22:22-20.161.92.111:57798.service: Deactivated successfully.
Mar 12 05:06:20.821825 systemd[1]: session-26.scope: Deactivated successfully.
Mar 12 05:06:20.823133 systemd-logind[1479]: Session 26 logged out. Waiting for processes to exit.
Mar 12 05:06:20.824935 systemd-logind[1479]: Removed session 26.
Mar 12 05:06:20.917805 systemd[1]: Started sshd@26-10.230.35.22:22-20.161.92.111:51930.service - OpenSSH per-connection server daemon (20.161.92.111:51930).
Mar 12 05:06:21.474355 sshd[4305]: Accepted publickey for core from 20.161.92.111 port 51930 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:06:21.476779 sshd[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:06:21.485621 systemd-logind[1479]: New session 27 of user core.
Mar 12 05:06:21.495673 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 12 05:06:23.907061 containerd[1498]: time="2026-03-12T05:06:23.906965578Z" level=info msg="StopContainer for \"d5f79cd6892cf5ad81526430fd4e8336e53517627720efd5de4c069e3e7cb31d\" with timeout 30 (s)"
Mar 12 05:06:23.910334 containerd[1498]: time="2026-03-12T05:06:23.909931283Z" level=info msg="Stop container \"d5f79cd6892cf5ad81526430fd4e8336e53517627720efd5de4c069e3e7cb31d\" with signal terminated"
Mar 12 05:06:23.944840 containerd[1498]: time="2026-03-12T05:06:23.944739660Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 12 05:06:23.963425 containerd[1498]: time="2026-03-12T05:06:23.963337185Z" level=info msg="StopContainer for \"7f0924eb83b9331f8c6b0eee697b90c7d73dcd2d44fd3e004102ec1db1dbfe03\" with timeout 2 (s)"
Mar 12 05:06:23.963797 containerd[1498]: time="2026-03-12T05:06:23.963743602Z" level=info msg="Stop container \"7f0924eb83b9331f8c6b0eee697b90c7d73dcd2d44fd3e004102ec1db1dbfe03\" with signal terminated"
Mar 12 05:06:23.988156 systemd-networkd[1429]: lxc_health: Link DOWN
Mar 12 05:06:23.988187 systemd-networkd[1429]: lxc_health: Lost carrier
Mar 12 05:06:24.003770 systemd[1]: cri-containerd-d5f79cd6892cf5ad81526430fd4e8336e53517627720efd5de4c069e3e7cb31d.scope: Deactivated successfully.
Mar 12 05:06:24.023174 systemd[1]: cri-containerd-7f0924eb83b9331f8c6b0eee697b90c7d73dcd2d44fd3e004102ec1db1dbfe03.scope: Deactivated successfully.
Mar 12 05:06:24.024267 systemd[1]: cri-containerd-7f0924eb83b9331f8c6b0eee697b90c7d73dcd2d44fd3e004102ec1db1dbfe03.scope: Consumed 10.742s CPU time.
Mar 12 05:06:24.057938 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5f79cd6892cf5ad81526430fd4e8336e53517627720efd5de4c069e3e7cb31d-rootfs.mount: Deactivated successfully.
Mar 12 05:06:24.064602 containerd[1498]: time="2026-03-12T05:06:24.064171263Z" level=info msg="shim disconnected" id=d5f79cd6892cf5ad81526430fd4e8336e53517627720efd5de4c069e3e7cb31d namespace=k8s.io
Mar 12 05:06:24.065046 containerd[1498]: time="2026-03-12T05:06:24.064566906Z" level=warning msg="cleaning up after shim disconnected" id=d5f79cd6892cf5ad81526430fd4e8336e53517627720efd5de4c069e3e7cb31d namespace=k8s.io
Mar 12 05:06:24.065046 containerd[1498]: time="2026-03-12T05:06:24.064921133Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 05:06:24.073254 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f0924eb83b9331f8c6b0eee697b90c7d73dcd2d44fd3e004102ec1db1dbfe03-rootfs.mount: Deactivated successfully.
Mar 12 05:06:24.080137 containerd[1498]: time="2026-03-12T05:06:24.080041646Z" level=info msg="shim disconnected" id=7f0924eb83b9331f8c6b0eee697b90c7d73dcd2d44fd3e004102ec1db1dbfe03 namespace=k8s.io
Mar 12 05:06:24.080137 containerd[1498]: time="2026-03-12T05:06:24.080135325Z" level=warning msg="cleaning up after shim disconnected" id=7f0924eb83b9331f8c6b0eee697b90c7d73dcd2d44fd3e004102ec1db1dbfe03 namespace=k8s.io
Mar 12 05:06:24.081162 containerd[1498]: time="2026-03-12T05:06:24.080170724Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 05:06:24.096174 containerd[1498]: time="2026-03-12T05:06:24.096098282Z" level=info msg="StopContainer for \"d5f79cd6892cf5ad81526430fd4e8336e53517627720efd5de4c069e3e7cb31d\" returns successfully"
Mar 12 05:06:24.100358 containerd[1498]: time="2026-03-12T05:06:24.100314224Z" level=info msg="StopPodSandbox for \"fee22395cd6d165eff56c41dcc0e93c9349b6311dc89f2ec1a44ce494660fe79\""
Mar 12 05:06:24.100481 containerd[1498]: time="2026-03-12T05:06:24.100377335Z" level=info msg="Container to stop \"d5f79cd6892cf5ad81526430fd4e8336e53517627720efd5de4c069e3e7cb31d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 12 05:06:24.106374 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fee22395cd6d165eff56c41dcc0e93c9349b6311dc89f2ec1a44ce494660fe79-shm.mount: Deactivated successfully.
Mar 12 05:06:24.118838 systemd[1]: cri-containerd-fee22395cd6d165eff56c41dcc0e93c9349b6311dc89f2ec1a44ce494660fe79.scope: Deactivated successfully.
Mar 12 05:06:24.123700 containerd[1498]: time="2026-03-12T05:06:24.123630883Z" level=info msg="StopContainer for \"7f0924eb83b9331f8c6b0eee697b90c7d73dcd2d44fd3e004102ec1db1dbfe03\" returns successfully"
Mar 12 05:06:24.125374 containerd[1498]: time="2026-03-12T05:06:24.125257550Z" level=info msg="StopPodSandbox for \"d1d6643e3dbdeb64ac810ab1a1f441c150f0da1f7f01e3953acb25afbdda00cc\""
Mar 12 05:06:24.125945 containerd[1498]: time="2026-03-12T05:06:24.125695024Z" level=info msg="Container to stop \"7f0924eb83b9331f8c6b0eee697b90c7d73dcd2d44fd3e004102ec1db1dbfe03\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 12 05:06:24.125945 containerd[1498]: time="2026-03-12T05:06:24.125741347Z" level=info msg="Container to stop \"6cebf3ba7e10f78833e9cff839f1aa28e1c6b5de9e4d596a00c46118fc513e4f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 12 05:06:24.125945 containerd[1498]: time="2026-03-12T05:06:24.125770415Z" level=info msg="Container to stop \"b97df8391ea575048e3650dad20b69b6749ac29803c2dad395d50d0dc11f5643\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 12 05:06:24.125945 containerd[1498]: time="2026-03-12T05:06:24.125791494Z" level=info msg="Container to stop \"939d69ac705331e0f94fe19f7e006cc6b52c2925fcb4b90a9bee1d79b64231ee\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 12 05:06:24.125945 containerd[1498]: time="2026-03-12T05:06:24.125808064Z" level=info msg="Container to stop \"d712963d39b470fc3abe999d078acd17bb8d4512a68758b68a532b53a67562a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 12 05:06:24.132729 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d1d6643e3dbdeb64ac810ab1a1f441c150f0da1f7f01e3953acb25afbdda00cc-shm.mount: Deactivated successfully.
Mar 12 05:06:24.150366 systemd[1]: cri-containerd-d1d6643e3dbdeb64ac810ab1a1f441c150f0da1f7f01e3953acb25afbdda00cc.scope: Deactivated successfully.
Mar 12 05:06:24.175737 containerd[1498]: time="2026-03-12T05:06:24.175520472Z" level=info msg="shim disconnected" id=fee22395cd6d165eff56c41dcc0e93c9349b6311dc89f2ec1a44ce494660fe79 namespace=k8s.io
Mar 12 05:06:24.175737 containerd[1498]: time="2026-03-12T05:06:24.175587713Z" level=warning msg="cleaning up after shim disconnected" id=fee22395cd6d165eff56c41dcc0e93c9349b6311dc89f2ec1a44ce494660fe79 namespace=k8s.io
Mar 12 05:06:24.175737 containerd[1498]: time="2026-03-12T05:06:24.175603681Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 05:06:24.193601 containerd[1498]: time="2026-03-12T05:06:24.193418291Z" level=info msg="shim disconnected" id=d1d6643e3dbdeb64ac810ab1a1f441c150f0da1f7f01e3953acb25afbdda00cc namespace=k8s.io
Mar 12 05:06:24.194014 containerd[1498]: time="2026-03-12T05:06:24.193858175Z" level=warning msg="cleaning up after shim disconnected" id=d1d6643e3dbdeb64ac810ab1a1f441c150f0da1f7f01e3953acb25afbdda00cc namespace=k8s.io
Mar 12 05:06:24.194306 containerd[1498]: time="2026-03-12T05:06:24.193884302Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 05:06:24.220076 containerd[1498]: time="2026-03-12T05:06:24.220019604Z" level=info msg="TearDown network for sandbox \"d1d6643e3dbdeb64ac810ab1a1f441c150f0da1f7f01e3953acb25afbdda00cc\" successfully"
Mar 12 05:06:24.220404 containerd[1498]: time="2026-03-12T05:06:24.220265124Z" level=info msg="StopPodSandbox for \"d1d6643e3dbdeb64ac810ab1a1f441c150f0da1f7f01e3953acb25afbdda00cc\" returns successfully"
Mar 12 05:06:24.222165 containerd[1498]: time="2026-03-12T05:06:24.222114506Z" level=info msg="TearDown network for sandbox \"fee22395cd6d165eff56c41dcc0e93c9349b6311dc89f2ec1a44ce494660fe79\" successfully"
Mar 12 05:06:24.222165 containerd[1498]: time="2026-03-12T05:06:24.222164107Z" level=info msg="StopPodSandbox for \"fee22395cd6d165eff56c41dcc0e93c9349b6311dc89f2ec1a44ce494660fe79\" returns successfully"
Mar 12 05:06:24.366317 kubelet[2698]: I0312 05:06:24.366130 2698 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-cilium-config-path\") pod \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\" (UID: \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\") "
Mar 12 05:06:24.366317 kubelet[2698]: I0312 05:06:24.366314 2698 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlq86\" (UniqueName: \"kubernetes.io/projected/ad39d82d-bb57-4f95-b5c3-1921844cc824-kube-api-access-rlq86\") pod \"ad39d82d-bb57-4f95-b5c3-1921844cc824\" (UID: \"ad39d82d-bb57-4f95-b5c3-1921844cc824\") "
Mar 12 05:06:24.367936 kubelet[2698]: I0312 05:06:24.366351 2698 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-hostproc\") pod \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\" (UID: \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\") "
Mar 12 05:06:24.367936 kubelet[2698]: I0312 05:06:24.366383 2698 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-bpf-maps\") pod \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\" (UID: \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\") "
Mar 12 05:06:24.367936 kubelet[2698]: I0312 05:06:24.366407 2698 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-cni-path\") pod \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\" (UID: \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\") "
Mar 12 05:06:24.367936 kubelet[2698]: I0312 05:06:24.366428 2698 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-etc-cni-netd\") pod \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\" (UID: \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\") "
Mar 12 05:06:24.367936 kubelet[2698]: I0312 05:06:24.366486 2698 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-host-proc-sys-net\") pod \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\" (UID: \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\") "
Mar 12 05:06:24.367936 kubelet[2698]: I0312 05:06:24.366515 2698 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-clustermesh-secrets\") pod \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\" (UID: \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\") "
Mar 12 05:06:24.368752 kubelet[2698]: I0312 05:06:24.366542 2698 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8fbn\" (UniqueName: \"kubernetes.io/projected/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-kube-api-access-h8fbn\") pod \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\" (UID: \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\") "
Mar 12 05:06:24.368752 kubelet[2698]: I0312 05:06:24.366565 2698 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-host-proc-sys-kernel\") pod \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\" (UID: \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\") "
Mar 12 05:06:24.368752 kubelet[2698]: I0312 05:06:24.366587 2698 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-cilium-cgroup\") pod \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\" (UID: \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\") "
Mar 12 05:06:24.368752 kubelet[2698]: I0312 05:06:24.366608 2698 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-lib-modules\") pod \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\" (UID: \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\") "
Mar 12 05:06:24.368752 kubelet[2698]: I0312 05:06:24.366633 2698 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-hubble-tls\") pod \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\" (UID: \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\") "
Mar 12 05:06:24.368752 kubelet[2698]: I0312 05:06:24.366659 2698 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-xtables-lock\") pod \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\" (UID: \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\") "
Mar 12 05:06:24.369848 kubelet[2698]: I0312 05:06:24.366681 2698 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-cilium-run\") pod \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\" (UID: \"699562fa-7f4b-44d1-b3f5-c7c2a9dca414\") "
Mar 12 05:06:24.369848 kubelet[2698]: I0312 05:06:24.366706 2698 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad39d82d-bb57-4f95-b5c3-1921844cc824-cilium-config-path\") pod \"ad39d82d-bb57-4f95-b5c3-1921844cc824\" (UID: \"ad39d82d-bb57-4f95-b5c3-1921844cc824\") "
Mar 12 05:06:24.378997 kubelet[2698]: I0312 05:06:24.378306 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "699562fa-7f4b-44d1-b3f5-c7c2a9dca414" (UID: "699562fa-7f4b-44d1-b3f5-c7c2a9dca414"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 12 05:06:24.379088 kubelet[2698]: I0312 05:06:24.379069 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "699562fa-7f4b-44d1-b3f5-c7c2a9dca414" (UID: "699562fa-7f4b-44d1-b3f5-c7c2a9dca414"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 12 05:06:24.379144 kubelet[2698]: I0312 05:06:24.379126 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "699562fa-7f4b-44d1-b3f5-c7c2a9dca414" (UID: "699562fa-7f4b-44d1-b3f5-c7c2a9dca414"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 12 05:06:24.380289 kubelet[2698]: I0312 05:06:24.379943 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "699562fa-7f4b-44d1-b3f5-c7c2a9dca414" (UID: "699562fa-7f4b-44d1-b3f5-c7c2a9dca414"). InnerVolumeSpecName "xtables-lock".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 05:06:24.380289 kubelet[2698]: I0312 05:06:24.380009 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "699562fa-7f4b-44d1-b3f5-c7c2a9dca414" (UID: "699562fa-7f4b-44d1-b3f5-c7c2a9dca414"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 05:06:24.380289 kubelet[2698]: I0312 05:06:24.376209 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-hostproc" (OuterVolumeSpecName: "hostproc") pod "699562fa-7f4b-44d1-b3f5-c7c2a9dca414" (UID: "699562fa-7f4b-44d1-b3f5-c7c2a9dca414"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 05:06:24.380289 kubelet[2698]: I0312 05:06:24.380066 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "699562fa-7f4b-44d1-b3f5-c7c2a9dca414" (UID: "699562fa-7f4b-44d1-b3f5-c7c2a9dca414"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 05:06:24.380289 kubelet[2698]: I0312 05:06:24.380099 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-cni-path" (OuterVolumeSpecName: "cni-path") pod "699562fa-7f4b-44d1-b3f5-c7c2a9dca414" (UID: "699562fa-7f4b-44d1-b3f5-c7c2a9dca414"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 05:06:24.380582 kubelet[2698]: I0312 05:06:24.380126 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "699562fa-7f4b-44d1-b3f5-c7c2a9dca414" (UID: "699562fa-7f4b-44d1-b3f5-c7c2a9dca414"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 05:06:24.380582 kubelet[2698]: I0312 05:06:24.380155 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "699562fa-7f4b-44d1-b3f5-c7c2a9dca414" (UID: "699562fa-7f4b-44d1-b3f5-c7c2a9dca414"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 12 05:06:24.384220 kubelet[2698]: I0312 05:06:24.384186 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad39d82d-bb57-4f95-b5c3-1921844cc824-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ad39d82d-bb57-4f95-b5c3-1921844cc824" (UID: "ad39d82d-bb57-4f95-b5c3-1921844cc824"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 05:06:24.385232 kubelet[2698]: I0312 05:06:24.385203 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "699562fa-7f4b-44d1-b3f5-c7c2a9dca414" (UID: "699562fa-7f4b-44d1-b3f5-c7c2a9dca414"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 12 05:06:24.389174 kubelet[2698]: I0312 05:06:24.389070 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "699562fa-7f4b-44d1-b3f5-c7c2a9dca414" (UID: "699562fa-7f4b-44d1-b3f5-c7c2a9dca414"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 12 05:06:24.389367 kubelet[2698]: I0312 05:06:24.389332 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-kube-api-access-h8fbn" (OuterVolumeSpecName: "kube-api-access-h8fbn") pod "699562fa-7f4b-44d1-b3f5-c7c2a9dca414" (UID: "699562fa-7f4b-44d1-b3f5-c7c2a9dca414"). InnerVolumeSpecName "kube-api-access-h8fbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 12 05:06:24.389524 kubelet[2698]: I0312 05:06:24.389400 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad39d82d-bb57-4f95-b5c3-1921844cc824-kube-api-access-rlq86" (OuterVolumeSpecName: "kube-api-access-rlq86") pod "ad39d82d-bb57-4f95-b5c3-1921844cc824" (UID: "ad39d82d-bb57-4f95-b5c3-1921844cc824"). InnerVolumeSpecName "kube-api-access-rlq86". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 12 05:06:24.389665 kubelet[2698]: I0312 05:06:24.389414 2698 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "699562fa-7f4b-44d1-b3f5-c7c2a9dca414" (UID: "699562fa-7f4b-44d1-b3f5-c7c2a9dca414"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 12 05:06:24.467352 kubelet[2698]: I0312 05:06:24.467201 2698 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-bpf-maps\") on node \"srv-rxxiv.gb1.brightbox.com\" DevicePath \"\"" Mar 12 05:06:24.467803 kubelet[2698]: I0312 05:06:24.467536 2698 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-cni-path\") on node \"srv-rxxiv.gb1.brightbox.com\" DevicePath \"\"" Mar 12 05:06:24.467803 kubelet[2698]: I0312 05:06:24.467563 2698 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-etc-cni-netd\") on node \"srv-rxxiv.gb1.brightbox.com\" DevicePath \"\"" Mar 12 05:06:24.467803 kubelet[2698]: I0312 05:06:24.467583 2698 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-host-proc-sys-net\") on node \"srv-rxxiv.gb1.brightbox.com\" DevicePath \"\"" Mar 12 05:06:24.467803 kubelet[2698]: I0312 05:06:24.467600 2698 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-clustermesh-secrets\") on node \"srv-rxxiv.gb1.brightbox.com\" DevicePath \"\"" Mar 12 05:06:24.467803 kubelet[2698]: I0312 05:06:24.467617 2698 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h8fbn\" (UniqueName: \"kubernetes.io/projected/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-kube-api-access-h8fbn\") on node \"srv-rxxiv.gb1.brightbox.com\" DevicePath \"\"" Mar 12 05:06:24.467803 kubelet[2698]: I0312 05:06:24.467633 2698 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-host-proc-sys-kernel\") on node \"srv-rxxiv.gb1.brightbox.com\" DevicePath \"\"" Mar 12 05:06:24.467803 kubelet[2698]: I0312 05:06:24.467648 2698 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-cilium-cgroup\") on node \"srv-rxxiv.gb1.brightbox.com\" DevicePath \"\"" Mar 12 05:06:24.467803 kubelet[2698]: I0312 05:06:24.467661 2698 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-lib-modules\") on node \"srv-rxxiv.gb1.brightbox.com\" DevicePath \"\"" Mar 12 05:06:24.468325 kubelet[2698]: I0312 05:06:24.467675 2698 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-hubble-tls\") on node \"srv-rxxiv.gb1.brightbox.com\" DevicePath \"\"" Mar 12 05:06:24.468325 kubelet[2698]: I0312 05:06:24.467698 2698 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-xtables-lock\") on node \"srv-rxxiv.gb1.brightbox.com\" DevicePath \"\"" Mar 12 05:06:24.468325 kubelet[2698]: I0312 05:06:24.467714 2698 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-cilium-run\") on node \"srv-rxxiv.gb1.brightbox.com\" DevicePath \"\"" Mar 12 05:06:24.468325 kubelet[2698]: I0312 05:06:24.467727 2698 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ad39d82d-bb57-4f95-b5c3-1921844cc824-cilium-config-path\") on node \"srv-rxxiv.gb1.brightbox.com\" DevicePath \"\"" Mar 12 05:06:24.468325 kubelet[2698]: I0312 05:06:24.467741 2698 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-cilium-config-path\") on node \"srv-rxxiv.gb1.brightbox.com\" DevicePath \"\"" Mar 12 05:06:24.468325 kubelet[2698]: I0312 05:06:24.467763 2698 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rlq86\" (UniqueName: \"kubernetes.io/projected/ad39d82d-bb57-4f95-b5c3-1921844cc824-kube-api-access-rlq86\") on node \"srv-rxxiv.gb1.brightbox.com\" DevicePath \"\"" Mar 12 05:06:24.468325 kubelet[2698]: I0312 05:06:24.467778 2698 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/699562fa-7f4b-44d1-b3f5-c7c2a9dca414-hostproc\") on node \"srv-rxxiv.gb1.brightbox.com\" DevicePath \"\"" Mar 12 05:06:24.878561 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fee22395cd6d165eff56c41dcc0e93c9349b6311dc89f2ec1a44ce494660fe79-rootfs.mount: Deactivated successfully. Mar 12 05:06:24.878740 systemd[1]: var-lib-kubelet-pods-ad39d82d\x2dbb57\x2d4f95\x2db5c3\x2d1921844cc824-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drlq86.mount: Deactivated successfully. Mar 12 05:06:24.878873 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1d6643e3dbdeb64ac810ab1a1f441c150f0da1f7f01e3953acb25afbdda00cc-rootfs.mount: Deactivated successfully. Mar 12 05:06:24.878996 systemd[1]: var-lib-kubelet-pods-699562fa\x2d7f4b\x2d44d1\x2db3f5\x2dc7c2a9dca414-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh8fbn.mount: Deactivated successfully. Mar 12 05:06:24.879112 systemd[1]: var-lib-kubelet-pods-699562fa\x2d7f4b\x2d44d1\x2db3f5\x2dc7c2a9dca414-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 12 05:06:24.879234 systemd[1]: var-lib-kubelet-pods-699562fa\x2d7f4b\x2d44d1\x2db3f5\x2dc7c2a9dca414-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Mar 12 05:06:25.004484 kubelet[2698]: I0312 05:06:25.002878 2698 scope.go:117] "RemoveContainer" containerID="d5f79cd6892cf5ad81526430fd4e8336e53517627720efd5de4c069e3e7cb31d" Mar 12 05:06:25.007655 containerd[1498]: time="2026-03-12T05:06:25.007614184Z" level=info msg="RemoveContainer for \"d5f79cd6892cf5ad81526430fd4e8336e53517627720efd5de4c069e3e7cb31d\"" Mar 12 05:06:25.016960 containerd[1498]: time="2026-03-12T05:06:25.016909129Z" level=info msg="RemoveContainer for \"d5f79cd6892cf5ad81526430fd4e8336e53517627720efd5de4c069e3e7cb31d\" returns successfully" Mar 12 05:06:25.020176 systemd[1]: Removed slice kubepods-besteffort-podad39d82d_bb57_4f95_b5c3_1921844cc824.slice - libcontainer container kubepods-besteffort-podad39d82d_bb57_4f95_b5c3_1921844cc824.slice. Mar 12 05:06:25.033704 kubelet[2698]: I0312 05:06:25.033657 2698 scope.go:117] "RemoveContainer" containerID="d5f79cd6892cf5ad81526430fd4e8336e53517627720efd5de4c069e3e7cb31d" Mar 12 05:06:25.036320 systemd[1]: Removed slice kubepods-burstable-pod699562fa_7f4b_44d1_b3f5_c7c2a9dca414.slice - libcontainer container kubepods-burstable-pod699562fa_7f4b_44d1_b3f5_c7c2a9dca414.slice. Mar 12 05:06:25.036485 systemd[1]: kubepods-burstable-pod699562fa_7f4b_44d1_b3f5_c7c2a9dca414.slice: Consumed 10.866s CPU time. 
Mar 12 05:06:25.045963 containerd[1498]: time="2026-03-12T05:06:25.036362812Z" level=error msg="ContainerStatus for \"d5f79cd6892cf5ad81526430fd4e8336e53517627720efd5de4c069e3e7cb31d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d5f79cd6892cf5ad81526430fd4e8336e53517627720efd5de4c069e3e7cb31d\": not found" Mar 12 05:06:25.053498 kubelet[2698]: E0312 05:06:25.053418 2698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d5f79cd6892cf5ad81526430fd4e8336e53517627720efd5de4c069e3e7cb31d\": not found" containerID="d5f79cd6892cf5ad81526430fd4e8336e53517627720efd5de4c069e3e7cb31d" Mar 12 05:06:25.068165 kubelet[2698]: I0312 05:06:25.053529 2698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d5f79cd6892cf5ad81526430fd4e8336e53517627720efd5de4c069e3e7cb31d"} err="failed to get container status \"d5f79cd6892cf5ad81526430fd4e8336e53517627720efd5de4c069e3e7cb31d\": rpc error: code = NotFound desc = an error occurred when try to find container \"d5f79cd6892cf5ad81526430fd4e8336e53517627720efd5de4c069e3e7cb31d\": not found" Mar 12 05:06:25.068165 kubelet[2698]: I0312 05:06:25.068011 2698 scope.go:117] "RemoveContainer" containerID="7f0924eb83b9331f8c6b0eee697b90c7d73dcd2d44fd3e004102ec1db1dbfe03" Mar 12 05:06:25.071767 containerd[1498]: time="2026-03-12T05:06:25.071403029Z" level=info msg="RemoveContainer for \"7f0924eb83b9331f8c6b0eee697b90c7d73dcd2d44fd3e004102ec1db1dbfe03\"" Mar 12 05:06:25.077349 containerd[1498]: time="2026-03-12T05:06:25.077314502Z" level=info msg="RemoveContainer for \"7f0924eb83b9331f8c6b0eee697b90c7d73dcd2d44fd3e004102ec1db1dbfe03\" returns successfully" Mar 12 05:06:25.077572 kubelet[2698]: I0312 05:06:25.077528 2698 scope.go:117] "RemoveContainer" containerID="939d69ac705331e0f94fe19f7e006cc6b52c2925fcb4b90a9bee1d79b64231ee" Mar 12 05:06:25.079069 
containerd[1498]: time="2026-03-12T05:06:25.079031474Z" level=info msg="RemoveContainer for \"939d69ac705331e0f94fe19f7e006cc6b52c2925fcb4b90a9bee1d79b64231ee\"" Mar 12 05:06:25.082224 containerd[1498]: time="2026-03-12T05:06:25.082108944Z" level=info msg="RemoveContainer for \"939d69ac705331e0f94fe19f7e006cc6b52c2925fcb4b90a9bee1d79b64231ee\" returns successfully" Mar 12 05:06:25.082589 kubelet[2698]: I0312 05:06:25.082431 2698 scope.go:117] "RemoveContainer" containerID="d712963d39b470fc3abe999d078acd17bb8d4512a68758b68a532b53a67562a3" Mar 12 05:06:25.084128 containerd[1498]: time="2026-03-12T05:06:25.083868552Z" level=info msg="RemoveContainer for \"d712963d39b470fc3abe999d078acd17bb8d4512a68758b68a532b53a67562a3\"" Mar 12 05:06:25.086689 containerd[1498]: time="2026-03-12T05:06:25.086652421Z" level=info msg="RemoveContainer for \"d712963d39b470fc3abe999d078acd17bb8d4512a68758b68a532b53a67562a3\" returns successfully" Mar 12 05:06:25.086958 kubelet[2698]: I0312 05:06:25.086927 2698 scope.go:117] "RemoveContainer" containerID="b97df8391ea575048e3650dad20b69b6749ac29803c2dad395d50d0dc11f5643" Mar 12 05:06:25.088518 containerd[1498]: time="2026-03-12T05:06:25.088487769Z" level=info msg="RemoveContainer for \"b97df8391ea575048e3650dad20b69b6749ac29803c2dad395d50d0dc11f5643\"" Mar 12 05:06:25.091258 containerd[1498]: time="2026-03-12T05:06:25.091227524Z" level=info msg="RemoveContainer for \"b97df8391ea575048e3650dad20b69b6749ac29803c2dad395d50d0dc11f5643\" returns successfully" Mar 12 05:06:25.091467 kubelet[2698]: I0312 05:06:25.091412 2698 scope.go:117] "RemoveContainer" containerID="6cebf3ba7e10f78833e9cff839f1aa28e1c6b5de9e4d596a00c46118fc513e4f" Mar 12 05:06:25.093111 containerd[1498]: time="2026-03-12T05:06:25.093083374Z" level=info msg="RemoveContainer for \"6cebf3ba7e10f78833e9cff839f1aa28e1c6b5de9e4d596a00c46118fc513e4f\"" Mar 12 05:06:25.096765 containerd[1498]: time="2026-03-12T05:06:25.096684407Z" level=info msg="RemoveContainer for 
\"6cebf3ba7e10f78833e9cff839f1aa28e1c6b5de9e4d596a00c46118fc513e4f\" returns successfully" Mar 12 05:06:25.097154 kubelet[2698]: I0312 05:06:25.097001 2698 scope.go:117] "RemoveContainer" containerID="7f0924eb83b9331f8c6b0eee697b90c7d73dcd2d44fd3e004102ec1db1dbfe03" Mar 12 05:06:25.098006 containerd[1498]: time="2026-03-12T05:06:25.097885304Z" level=error msg="ContainerStatus for \"7f0924eb83b9331f8c6b0eee697b90c7d73dcd2d44fd3e004102ec1db1dbfe03\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7f0924eb83b9331f8c6b0eee697b90c7d73dcd2d44fd3e004102ec1db1dbfe03\": not found" Mar 12 05:06:25.098096 kubelet[2698]: E0312 05:06:25.098074 2698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7f0924eb83b9331f8c6b0eee697b90c7d73dcd2d44fd3e004102ec1db1dbfe03\": not found" containerID="7f0924eb83b9331f8c6b0eee697b90c7d73dcd2d44fd3e004102ec1db1dbfe03" Mar 12 05:06:25.098183 kubelet[2698]: I0312 05:06:25.098118 2698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7f0924eb83b9331f8c6b0eee697b90c7d73dcd2d44fd3e004102ec1db1dbfe03"} err="failed to get container status \"7f0924eb83b9331f8c6b0eee697b90c7d73dcd2d44fd3e004102ec1db1dbfe03\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f0924eb83b9331f8c6b0eee697b90c7d73dcd2d44fd3e004102ec1db1dbfe03\": not found" Mar 12 05:06:25.098183 kubelet[2698]: I0312 05:06:25.098144 2698 scope.go:117] "RemoveContainer" containerID="939d69ac705331e0f94fe19f7e006cc6b52c2925fcb4b90a9bee1d79b64231ee" Mar 12 05:06:25.099207 containerd[1498]: time="2026-03-12T05:06:25.098892602Z" level=error msg="ContainerStatus for \"939d69ac705331e0f94fe19f7e006cc6b52c2925fcb4b90a9bee1d79b64231ee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"939d69ac705331e0f94fe19f7e006cc6b52c2925fcb4b90a9bee1d79b64231ee\": not found" Mar 12 05:06:25.100054 kubelet[2698]: E0312 05:06:25.099300 2698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"939d69ac705331e0f94fe19f7e006cc6b52c2925fcb4b90a9bee1d79b64231ee\": not found" containerID="939d69ac705331e0f94fe19f7e006cc6b52c2925fcb4b90a9bee1d79b64231ee" Mar 12 05:06:25.100054 kubelet[2698]: I0312 05:06:25.099333 2698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"939d69ac705331e0f94fe19f7e006cc6b52c2925fcb4b90a9bee1d79b64231ee"} err="failed to get container status \"939d69ac705331e0f94fe19f7e006cc6b52c2925fcb4b90a9bee1d79b64231ee\": rpc error: code = NotFound desc = an error occurred when try to find container \"939d69ac705331e0f94fe19f7e006cc6b52c2925fcb4b90a9bee1d79b64231ee\": not found" Mar 12 05:06:25.100054 kubelet[2698]: I0312 05:06:25.099354 2698 scope.go:117] "RemoveContainer" containerID="d712963d39b470fc3abe999d078acd17bb8d4512a68758b68a532b53a67562a3" Mar 12 05:06:25.100227 containerd[1498]: time="2026-03-12T05:06:25.100045819Z" level=error msg="ContainerStatus for \"d712963d39b470fc3abe999d078acd17bb8d4512a68758b68a532b53a67562a3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d712963d39b470fc3abe999d078acd17bb8d4512a68758b68a532b53a67562a3\": not found" Mar 12 05:06:25.100370 kubelet[2698]: E0312 05:06:25.100291 2698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d712963d39b470fc3abe999d078acd17bb8d4512a68758b68a532b53a67562a3\": not found" containerID="d712963d39b470fc3abe999d078acd17bb8d4512a68758b68a532b53a67562a3" Mar 12 05:06:25.100434 kubelet[2698]: I0312 05:06:25.100370 2698 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"d712963d39b470fc3abe999d078acd17bb8d4512a68758b68a532b53a67562a3"} err="failed to get container status \"d712963d39b470fc3abe999d078acd17bb8d4512a68758b68a532b53a67562a3\": rpc error: code = NotFound desc = an error occurred when try to find container \"d712963d39b470fc3abe999d078acd17bb8d4512a68758b68a532b53a67562a3\": not found" Mar 12 05:06:25.100528 kubelet[2698]: I0312 05:06:25.100504 2698 scope.go:117] "RemoveContainer" containerID="b97df8391ea575048e3650dad20b69b6749ac29803c2dad395d50d0dc11f5643" Mar 12 05:06:25.100994 containerd[1498]: time="2026-03-12T05:06:25.100923456Z" level=error msg="ContainerStatus for \"b97df8391ea575048e3650dad20b69b6749ac29803c2dad395d50d0dc11f5643\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b97df8391ea575048e3650dad20b69b6749ac29803c2dad395d50d0dc11f5643\": not found" Mar 12 05:06:25.101394 kubelet[2698]: E0312 05:06:25.101195 2698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b97df8391ea575048e3650dad20b69b6749ac29803c2dad395d50d0dc11f5643\": not found" containerID="b97df8391ea575048e3650dad20b69b6749ac29803c2dad395d50d0dc11f5643" Mar 12 05:06:25.101394 kubelet[2698]: I0312 05:06:25.101258 2698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b97df8391ea575048e3650dad20b69b6749ac29803c2dad395d50d0dc11f5643"} err="failed to get container status \"b97df8391ea575048e3650dad20b69b6749ac29803c2dad395d50d0dc11f5643\": rpc error: code = NotFound desc = an error occurred when try to find container \"b97df8391ea575048e3650dad20b69b6749ac29803c2dad395d50d0dc11f5643\": not found" Mar 12 05:06:25.101394 kubelet[2698]: I0312 05:06:25.101296 2698 scope.go:117] "RemoveContainer" containerID="6cebf3ba7e10f78833e9cff839f1aa28e1c6b5de9e4d596a00c46118fc513e4f" Mar 12 05:06:25.101581 containerd[1498]: 
time="2026-03-12T05:06:25.101521449Z" level=error msg="ContainerStatus for \"6cebf3ba7e10f78833e9cff839f1aa28e1c6b5de9e4d596a00c46118fc513e4f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6cebf3ba7e10f78833e9cff839f1aa28e1c6b5de9e4d596a00c46118fc513e4f\": not found" Mar 12 05:06:25.101835 kubelet[2698]: E0312 05:06:25.101744 2698 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6cebf3ba7e10f78833e9cff839f1aa28e1c6b5de9e4d596a00c46118fc513e4f\": not found" containerID="6cebf3ba7e10f78833e9cff839f1aa28e1c6b5de9e4d596a00c46118fc513e4f" Mar 12 05:06:25.101835 kubelet[2698]: I0312 05:06:25.101775 2698 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6cebf3ba7e10f78833e9cff839f1aa28e1c6b5de9e4d596a00c46118fc513e4f"} err="failed to get container status \"6cebf3ba7e10f78833e9cff839f1aa28e1c6b5de9e4d596a00c46118fc513e4f\": rpc error: code = NotFound desc = an error occurred when try to find container \"6cebf3ba7e10f78833e9cff839f1aa28e1c6b5de9e4d596a00c46118fc513e4f\": not found" Mar 12 05:06:25.529606 kubelet[2698]: I0312 05:06:25.529519 2698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="699562fa-7f4b-44d1-b3f5-c7c2a9dca414" path="/var/lib/kubelet/pods/699562fa-7f4b-44d1-b3f5-c7c2a9dca414/volumes" Mar 12 05:06:25.530935 kubelet[2698]: I0312 05:06:25.530861 2698 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad39d82d-bb57-4f95-b5c3-1921844cc824" path="/var/lib/kubelet/pods/ad39d82d-bb57-4f95-b5c3-1921844cc824/volumes" Mar 12 05:06:25.648874 kubelet[2698]: E0312 05:06:25.648802 2698 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 12 05:06:25.793829 sshd[4305]: pam_unix(sshd:session): session closed 
for user core Mar 12 05:06:25.798213 systemd[1]: sshd@26-10.230.35.22:22-20.161.92.111:51930.service: Deactivated successfully. Mar 12 05:06:25.801588 systemd[1]: session-27.scope: Deactivated successfully. Mar 12 05:06:25.801911 systemd[1]: session-27.scope: Consumed 1.310s CPU time. Mar 12 05:06:25.803850 systemd-logind[1479]: Session 27 logged out. Waiting for processes to exit. Mar 12 05:06:25.805645 systemd-logind[1479]: Removed session 27. Mar 12 05:06:25.914390 systemd[1]: Started sshd@27-10.230.35.22:22-20.161.92.111:51932.service - OpenSSH per-connection server daemon (20.161.92.111:51932). Mar 12 05:06:26.498375 sshd[4472]: Accepted publickey for core from 20.161.92.111 port 51932 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac Mar 12 05:06:26.500933 sshd[4472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 12 05:06:26.510062 systemd-logind[1479]: New session 28 of user core. Mar 12 05:06:26.513700 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 12 05:06:27.691945 sshd[4472]: pam_unix(sshd:session): session closed for user core Mar 12 05:06:27.699584 systemd[1]: Created slice kubepods-burstable-pod3c366b75_9d79_4701_9055_77b2fa3103ba.slice - libcontainer container kubepods-burstable-pod3c366b75_9d79_4701_9055_77b2fa3103ba.slice. Mar 12 05:06:27.706707 systemd-logind[1479]: Session 28 logged out. Waiting for processes to exit. Mar 12 05:06:27.707282 systemd[1]: sshd@27-10.230.35.22:22-20.161.92.111:51932.service: Deactivated successfully. Mar 12 05:06:27.712492 systemd[1]: session-28.scope: Deactivated successfully. Mar 12 05:06:27.716518 systemd-logind[1479]: Removed session 28. 
Mar 12 05:06:27.790456 kubelet[2698]: I0312 05:06:27.790317 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3c366b75-9d79-4701-9055-77b2fa3103ba-hostproc\") pod \"cilium-9t4ts\" (UID: \"3c366b75-9d79-4701-9055-77b2fa3103ba\") " pod="kube-system/cilium-9t4ts"
Mar 12 05:06:27.791881 kubelet[2698]: I0312 05:06:27.790485 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3c366b75-9d79-4701-9055-77b2fa3103ba-clustermesh-secrets\") pod \"cilium-9t4ts\" (UID: \"3c366b75-9d79-4701-9055-77b2fa3103ba\") " pod="kube-system/cilium-9t4ts"
Mar 12 05:06:27.791881 kubelet[2698]: I0312 05:06:27.790580 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3c366b75-9d79-4701-9055-77b2fa3103ba-cilium-config-path\") pod \"cilium-9t4ts\" (UID: \"3c366b75-9d79-4701-9055-77b2fa3103ba\") " pod="kube-system/cilium-9t4ts"
Mar 12 05:06:27.791881 kubelet[2698]: I0312 05:06:27.790637 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3c366b75-9d79-4701-9055-77b2fa3103ba-etc-cni-netd\") pod \"cilium-9t4ts\" (UID: \"3c366b75-9d79-4701-9055-77b2fa3103ba\") " pod="kube-system/cilium-9t4ts"
Mar 12 05:06:27.791881 kubelet[2698]: I0312 05:06:27.790728 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3c366b75-9d79-4701-9055-77b2fa3103ba-cilium-run\") pod \"cilium-9t4ts\" (UID: \"3c366b75-9d79-4701-9055-77b2fa3103ba\") " pod="kube-system/cilium-9t4ts"
Mar 12 05:06:27.791881 kubelet[2698]: I0312 05:06:27.790836 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3c366b75-9d79-4701-9055-77b2fa3103ba-host-proc-sys-net\") pod \"cilium-9t4ts\" (UID: \"3c366b75-9d79-4701-9055-77b2fa3103ba\") " pod="kube-system/cilium-9t4ts"
Mar 12 05:06:27.792155 kubelet[2698]: I0312 05:06:27.790908 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3c366b75-9d79-4701-9055-77b2fa3103ba-host-proc-sys-kernel\") pod \"cilium-9t4ts\" (UID: \"3c366b75-9d79-4701-9055-77b2fa3103ba\") " pod="kube-system/cilium-9t4ts"
Mar 12 05:06:27.792155 kubelet[2698]: I0312 05:06:27.790977 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3c366b75-9d79-4701-9055-77b2fa3103ba-cilium-cgroup\") pod \"cilium-9t4ts\" (UID: \"3c366b75-9d79-4701-9055-77b2fa3103ba\") " pod="kube-system/cilium-9t4ts"
Mar 12 05:06:27.792155 kubelet[2698]: I0312 05:06:27.791058 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3c366b75-9d79-4701-9055-77b2fa3103ba-hubble-tls\") pod \"cilium-9t4ts\" (UID: \"3c366b75-9d79-4701-9055-77b2fa3103ba\") " pod="kube-system/cilium-9t4ts"
Mar 12 05:06:27.792155 kubelet[2698]: I0312 05:06:27.791208 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3c366b75-9d79-4701-9055-77b2fa3103ba-bpf-maps\") pod \"cilium-9t4ts\" (UID: \"3c366b75-9d79-4701-9055-77b2fa3103ba\") " pod="kube-system/cilium-9t4ts"
Mar 12 05:06:27.792155 kubelet[2698]: I0312 05:06:27.791309 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3c366b75-9d79-4701-9055-77b2fa3103ba-cni-path\") pod \"cilium-9t4ts\" (UID: \"3c366b75-9d79-4701-9055-77b2fa3103ba\") " pod="kube-system/cilium-9t4ts"
Mar 12 05:06:27.792155 kubelet[2698]: I0312 05:06:27.791367 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3c366b75-9d79-4701-9055-77b2fa3103ba-cilium-ipsec-secrets\") pod \"cilium-9t4ts\" (UID: \"3c366b75-9d79-4701-9055-77b2fa3103ba\") " pod="kube-system/cilium-9t4ts"
Mar 12 05:06:27.792433 kubelet[2698]: I0312 05:06:27.791408 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c366b75-9d79-4701-9055-77b2fa3103ba-lib-modules\") pod \"cilium-9t4ts\" (UID: \"3c366b75-9d79-4701-9055-77b2fa3103ba\") " pod="kube-system/cilium-9t4ts"
Mar 12 05:06:27.792433 kubelet[2698]: I0312 05:06:27.791433 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c366b75-9d79-4701-9055-77b2fa3103ba-xtables-lock\") pod \"cilium-9t4ts\" (UID: \"3c366b75-9d79-4701-9055-77b2fa3103ba\") " pod="kube-system/cilium-9t4ts"
Mar 12 05:06:27.792433 kubelet[2698]: I0312 05:06:27.791476 2698 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pml6n\" (UniqueName: \"kubernetes.io/projected/3c366b75-9d79-4701-9055-77b2fa3103ba-kube-api-access-pml6n\") pod \"cilium-9t4ts\" (UID: \"3c366b75-9d79-4701-9055-77b2fa3103ba\") " pod="kube-system/cilium-9t4ts"
Mar 12 05:06:27.802878 systemd[1]: Started sshd@28-10.230.35.22:22-20.161.92.111:51934.service - OpenSSH per-connection server daemon (20.161.92.111:51934).
Mar 12 05:06:27.835885 kubelet[2698]: I0312 05:06:27.835785 2698 setters.go:543] "Node became not ready" node="srv-rxxiv.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-12T05:06:27Z","lastTransitionTime":"2026-03-12T05:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 12 05:06:28.032839 containerd[1498]: time="2026-03-12T05:06:28.032010070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9t4ts,Uid:3c366b75-9d79-4701-9055-77b2fa3103ba,Namespace:kube-system,Attempt:0,}"
Mar 12 05:06:28.065095 containerd[1498]: time="2026-03-12T05:06:28.064941870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 12 05:06:28.065626 containerd[1498]: time="2026-03-12T05:06:28.065172444Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 12 05:06:28.065626 containerd[1498]: time="2026-03-12T05:06:28.065225198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 05:06:28.065626 containerd[1498]: time="2026-03-12T05:06:28.065406490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 12 05:06:28.099676 systemd[1]: Started cri-containerd-a7cb3dcc833836867ebb8af1aabd45fd1f6ff1d324fa66b7b2061f53f81f037f.scope - libcontainer container a7cb3dcc833836867ebb8af1aabd45fd1f6ff1d324fa66b7b2061f53f81f037f.
Mar 12 05:06:28.144826 containerd[1498]: time="2026-03-12T05:06:28.144692087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9t4ts,Uid:3c366b75-9d79-4701-9055-77b2fa3103ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7cb3dcc833836867ebb8af1aabd45fd1f6ff1d324fa66b7b2061f53f81f037f\""
Mar 12 05:06:28.153937 containerd[1498]: time="2026-03-12T05:06:28.153811682Z" level=info msg="CreateContainer within sandbox \"a7cb3dcc833836867ebb8af1aabd45fd1f6ff1d324fa66b7b2061f53f81f037f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 12 05:06:28.165904 containerd[1498]: time="2026-03-12T05:06:28.165784017Z" level=info msg="CreateContainer within sandbox \"a7cb3dcc833836867ebb8af1aabd45fd1f6ff1d324fa66b7b2061f53f81f037f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"069596fa4ef0f1019aba40267d540bf5298415e9f9c8fb1c4afac8cba9c710fe\""
Mar 12 05:06:28.167137 containerd[1498]: time="2026-03-12T05:06:28.166485197Z" level=info msg="StartContainer for \"069596fa4ef0f1019aba40267d540bf5298415e9f9c8fb1c4afac8cba9c710fe\""
Mar 12 05:06:28.204651 systemd[1]: Started cri-containerd-069596fa4ef0f1019aba40267d540bf5298415e9f9c8fb1c4afac8cba9c710fe.scope - libcontainer container 069596fa4ef0f1019aba40267d540bf5298415e9f9c8fb1c4afac8cba9c710fe.
Mar 12 05:06:28.250852 containerd[1498]: time="2026-03-12T05:06:28.250790772Z" level=info msg="StartContainer for \"069596fa4ef0f1019aba40267d540bf5298415e9f9c8fb1c4afac8cba9c710fe\" returns successfully"
Mar 12 05:06:28.272416 systemd[1]: cri-containerd-069596fa4ef0f1019aba40267d540bf5298415e9f9c8fb1c4afac8cba9c710fe.scope: Deactivated successfully.
Mar 12 05:06:28.325524 containerd[1498]: time="2026-03-12T05:06:28.324597846Z" level=info msg="shim disconnected" id=069596fa4ef0f1019aba40267d540bf5298415e9f9c8fb1c4afac8cba9c710fe namespace=k8s.io
Mar 12 05:06:28.325524 containerd[1498]: time="2026-03-12T05:06:28.324687886Z" level=warning msg="cleaning up after shim disconnected" id=069596fa4ef0f1019aba40267d540bf5298415e9f9c8fb1c4afac8cba9c710fe namespace=k8s.io
Mar 12 05:06:28.325524 containerd[1498]: time="2026-03-12T05:06:28.324705214Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 05:06:28.378477 sshd[4483]: Accepted publickey for core from 20.161.92.111 port 51934 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:06:28.380522 sshd[4483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:06:28.390933 systemd-logind[1479]: New session 29 of user core.
Mar 12 05:06:28.400670 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 12 05:06:28.772217 sshd[4483]: pam_unix(sshd:session): session closed for user core
Mar 12 05:06:28.777774 systemd[1]: sshd@28-10.230.35.22:22-20.161.92.111:51934.service: Deactivated successfully.
Mar 12 05:06:28.782106 systemd[1]: session-29.scope: Deactivated successfully.
Mar 12 05:06:28.784096 systemd-logind[1479]: Session 29 logged out. Waiting for processes to exit.
Mar 12 05:06:28.785699 systemd-logind[1479]: Removed session 29.
Mar 12 05:06:28.873779 systemd[1]: Started sshd@29-10.230.35.22:22-20.161.92.111:51942.service - OpenSSH per-connection server daemon (20.161.92.111:51942).
Mar 12 05:06:29.036030 containerd[1498]: time="2026-03-12T05:06:29.035878278Z" level=info msg="CreateContainer within sandbox \"a7cb3dcc833836867ebb8af1aabd45fd1f6ff1d324fa66b7b2061f53f81f037f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 12 05:06:29.052382 containerd[1498]: time="2026-03-12T05:06:29.052323069Z" level=info msg="CreateContainer within sandbox \"a7cb3dcc833836867ebb8af1aabd45fd1f6ff1d324fa66b7b2061f53f81f037f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8b947fce619a84923731bc18880473915846d98bb398e99f72705decfa8a1401\""
Mar 12 05:06:29.056160 containerd[1498]: time="2026-03-12T05:06:29.056127526Z" level=info msg="StartContainer for \"8b947fce619a84923731bc18880473915846d98bb398e99f72705decfa8a1401\""
Mar 12 05:06:29.110526 systemd[1]: Started cri-containerd-8b947fce619a84923731bc18880473915846d98bb398e99f72705decfa8a1401.scope - libcontainer container 8b947fce619a84923731bc18880473915846d98bb398e99f72705decfa8a1401.
Mar 12 05:06:29.153405 containerd[1498]: time="2026-03-12T05:06:29.153322334Z" level=info msg="StartContainer for \"8b947fce619a84923731bc18880473915846d98bb398e99f72705decfa8a1401\" returns successfully"
Mar 12 05:06:29.167900 systemd[1]: cri-containerd-8b947fce619a84923731bc18880473915846d98bb398e99f72705decfa8a1401.scope: Deactivated successfully.
Mar 12 05:06:29.199779 containerd[1498]: time="2026-03-12T05:06:29.199626314Z" level=info msg="shim disconnected" id=8b947fce619a84923731bc18880473915846d98bb398e99f72705decfa8a1401 namespace=k8s.io
Mar 12 05:06:29.200130 containerd[1498]: time="2026-03-12T05:06:29.200011286Z" level=warning msg="cleaning up after shim disconnected" id=8b947fce619a84923731bc18880473915846d98bb398e99f72705decfa8a1401 namespace=k8s.io
Mar 12 05:06:29.200130 containerd[1498]: time="2026-03-12T05:06:29.200036064Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 05:06:29.430974 sshd[4598]: Accepted publickey for core from 20.161.92.111 port 51942 ssh2: RSA SHA256:Og1sBJQhpCaSrAUaqgWUKLRz71/5xOULak8g1URRdac
Mar 12 05:06:29.433987 sshd[4598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 12 05:06:29.441545 systemd-logind[1479]: New session 30 of user core.
Mar 12 05:06:29.451809 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 12 05:06:29.909041 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b947fce619a84923731bc18880473915846d98bb398e99f72705decfa8a1401-rootfs.mount: Deactivated successfully.
Mar 12 05:06:30.040425 containerd[1498]: time="2026-03-12T05:06:30.039007971Z" level=info msg="CreateContainer within sandbox \"a7cb3dcc833836867ebb8af1aabd45fd1f6ff1d324fa66b7b2061f53f81f037f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 12 05:06:30.061123 containerd[1498]: time="2026-03-12T05:06:30.061066840Z" level=info msg="CreateContainer within sandbox \"a7cb3dcc833836867ebb8af1aabd45fd1f6ff1d324fa66b7b2061f53f81f037f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3cff27804afba179294b417a9772d8357527a374639dd81fe4e47de09786bf78\""
Mar 12 05:06:30.062376 containerd[1498]: time="2026-03-12T05:06:30.062341585Z" level=info msg="StartContainer for \"3cff27804afba179294b417a9772d8357527a374639dd81fe4e47de09786bf78\""
Mar 12 05:06:30.110641 systemd[1]: Started cri-containerd-3cff27804afba179294b417a9772d8357527a374639dd81fe4e47de09786bf78.scope - libcontainer container 3cff27804afba179294b417a9772d8357527a374639dd81fe4e47de09786bf78.
Mar 12 05:06:30.158353 containerd[1498]: time="2026-03-12T05:06:30.158298856Z" level=info msg="StartContainer for \"3cff27804afba179294b417a9772d8357527a374639dd81fe4e47de09786bf78\" returns successfully"
Mar 12 05:06:30.168679 systemd[1]: cri-containerd-3cff27804afba179294b417a9772d8357527a374639dd81fe4e47de09786bf78.scope: Deactivated successfully.
Mar 12 05:06:30.200033 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cff27804afba179294b417a9772d8357527a374639dd81fe4e47de09786bf78-rootfs.mount: Deactivated successfully.
Mar 12 05:06:30.203385 containerd[1498]: time="2026-03-12T05:06:30.203251277Z" level=info msg="shim disconnected" id=3cff27804afba179294b417a9772d8357527a374639dd81fe4e47de09786bf78 namespace=k8s.io
Mar 12 05:06:30.203803 containerd[1498]: time="2026-03-12T05:06:30.203353860Z" level=warning msg="cleaning up after shim disconnected" id=3cff27804afba179294b417a9772d8357527a374639dd81fe4e47de09786bf78 namespace=k8s.io
Mar 12 05:06:30.203803 containerd[1498]: time="2026-03-12T05:06:30.203523654Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 05:06:30.650850 kubelet[2698]: E0312 05:06:30.650778 2698 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 12 05:06:31.047154 containerd[1498]: time="2026-03-12T05:06:31.044852073Z" level=info msg="CreateContainer within sandbox \"a7cb3dcc833836867ebb8af1aabd45fd1f6ff1d324fa66b7b2061f53f81f037f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 12 05:06:31.065415 containerd[1498]: time="2026-03-12T05:06:31.065274982Z" level=info msg="CreateContainer within sandbox \"a7cb3dcc833836867ebb8af1aabd45fd1f6ff1d324fa66b7b2061f53f81f037f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"518200977a9ff8dbecd1665e72da8e75895ada6f78bb20e2522ed2cf9b0593aa\""
Mar 12 05:06:31.067369 containerd[1498]: time="2026-03-12T05:06:31.067330643Z" level=info msg="StartContainer for \"518200977a9ff8dbecd1665e72da8e75895ada6f78bb20e2522ed2cf9b0593aa\""
Mar 12 05:06:31.122646 systemd[1]: Started cri-containerd-518200977a9ff8dbecd1665e72da8e75895ada6f78bb20e2522ed2cf9b0593aa.scope - libcontainer container 518200977a9ff8dbecd1665e72da8e75895ada6f78bb20e2522ed2cf9b0593aa.
Mar 12 05:06:31.165252 systemd[1]: cri-containerd-518200977a9ff8dbecd1665e72da8e75895ada6f78bb20e2522ed2cf9b0593aa.scope: Deactivated successfully.
Mar 12 05:06:31.167810 containerd[1498]: time="2026-03-12T05:06:31.167427796Z" level=info msg="StartContainer for \"518200977a9ff8dbecd1665e72da8e75895ada6f78bb20e2522ed2cf9b0593aa\" returns successfully"
Mar 12 05:06:31.211783 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-518200977a9ff8dbecd1665e72da8e75895ada6f78bb20e2522ed2cf9b0593aa-rootfs.mount: Deactivated successfully.
Mar 12 05:06:31.216019 containerd[1498]: time="2026-03-12T05:06:31.215712600Z" level=info msg="shim disconnected" id=518200977a9ff8dbecd1665e72da8e75895ada6f78bb20e2522ed2cf9b0593aa namespace=k8s.io
Mar 12 05:06:31.216019 containerd[1498]: time="2026-03-12T05:06:31.215789774Z" level=warning msg="cleaning up after shim disconnected" id=518200977a9ff8dbecd1665e72da8e75895ada6f78bb20e2522ed2cf9b0593aa namespace=k8s.io
Mar 12 05:06:31.216019 containerd[1498]: time="2026-03-12T05:06:31.215805552Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 12 05:06:32.049267 containerd[1498]: time="2026-03-12T05:06:32.049204801Z" level=info msg="CreateContainer within sandbox \"a7cb3dcc833836867ebb8af1aabd45fd1f6ff1d324fa66b7b2061f53f81f037f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 12 05:06:32.076248 containerd[1498]: time="2026-03-12T05:06:32.075772821Z" level=info msg="CreateContainer within sandbox \"a7cb3dcc833836867ebb8af1aabd45fd1f6ff1d324fa66b7b2061f53f81f037f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"644788cd35a28ca8d9f6ca884f90e95f9af942a70a7bcb1df56bfcbeacb9ce15\""
Mar 12 05:06:32.077822 containerd[1498]: time="2026-03-12T05:06:32.076614396Z" level=info msg="StartContainer for \"644788cd35a28ca8d9f6ca884f90e95f9af942a70a7bcb1df56bfcbeacb9ce15\""
Mar 12 05:06:32.114563 systemd[1]: run-containerd-runc-k8s.io-644788cd35a28ca8d9f6ca884f90e95f9af942a70a7bcb1df56bfcbeacb9ce15-runc.4RlZOu.mount: Deactivated successfully.
Mar 12 05:06:32.122638 systemd[1]: Started cri-containerd-644788cd35a28ca8d9f6ca884f90e95f9af942a70a7bcb1df56bfcbeacb9ce15.scope - libcontainer container 644788cd35a28ca8d9f6ca884f90e95f9af942a70a7bcb1df56bfcbeacb9ce15.
Mar 12 05:06:32.174150 containerd[1498]: time="2026-03-12T05:06:32.174081001Z" level=info msg="StartContainer for \"644788cd35a28ca8d9f6ca884f90e95f9af942a70a7bcb1df56bfcbeacb9ce15\" returns successfully"
Mar 12 05:06:32.889583 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 12 05:06:36.711706 systemd-networkd[1429]: lxc_health: Link UP
Mar 12 05:06:36.717731 systemd-networkd[1429]: lxc_health: Gained carrier
Mar 12 05:06:38.083361 kubelet[2698]: I0312 05:06:38.083229 2698 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9t4ts" podStartSLOduration=11.083183172 podStartE2EDuration="11.083183172s" podCreationTimestamp="2026-03-12 05:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-12 05:06:33.117711983 +0000 UTC m=+137.882826761" watchObservedRunningTime="2026-03-12 05:06:38.083183172 +0000 UTC m=+142.848297955"
Mar 12 05:06:38.432797 systemd-networkd[1429]: lxc_health: Gained IPv6LL
Mar 12 05:06:38.725643 systemd[1]: run-containerd-runc-k8s.io-644788cd35a28ca8d9f6ca884f90e95f9af942a70a7bcb1df56bfcbeacb9ce15-runc.rhq0MA.mount: Deactivated successfully.
Mar 12 05:06:43.356195 sshd[4598]: pam_unix(sshd:session): session closed for user core
Mar 12 05:06:43.362228 systemd-logind[1479]: Session 30 logged out. Waiting for processes to exit.
Mar 12 05:06:43.363141 systemd[1]: sshd@29-10.230.35.22:22-20.161.92.111:51942.service: Deactivated successfully.
Mar 12 05:06:43.368041 systemd[1]: session-30.scope: Deactivated successfully.
Mar 12 05:06:43.372868 systemd-logind[1479]: Removed session 30.