Mar 13 01:12:59.953774 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 12 22:08:29 -00 2026
Mar 13 01:12:59.953818 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 01:12:59.953833 kernel: BIOS-provided physical RAM map:
Mar 13 01:12:59.953843 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 13 01:12:59.953858 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 13 01:12:59.953868 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 13 01:12:59.953880 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Mar 13 01:12:59.953891 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Mar 13 01:12:59.953901 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 13 01:12:59.953912 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 13 01:12:59.953922 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 13 01:12:59.953933 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 13 01:12:59.953943 kernel: NX (Execute Disable) protection: active
Mar 13 01:12:59.953958 kernel: APIC: Static calls initialized
Mar 13 01:12:59.953971 kernel: SMBIOS 2.8 present.
Mar 13 01:12:59.953983 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Mar 13 01:12:59.953994 kernel: DMI: Memory slots populated: 1/1
Mar 13 01:12:59.954005 kernel: Hypervisor detected: KVM
Mar 13 01:12:59.954017 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Mar 13 01:12:59.954033 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 13 01:12:59.954044 kernel: kvm-clock: using sched offset of 5875284015 cycles
Mar 13 01:12:59.954056 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 13 01:12:59.954068 kernel: tsc: Detected 2499.998 MHz processor
Mar 13 01:12:59.954079 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 13 01:12:59.954091 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 13 01:12:59.954103 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Mar 13 01:12:59.954114 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 13 01:12:59.954125 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 13 01:12:59.954142 kernel: Using GB pages for direct mapping
Mar 13 01:12:59.954153 kernel: ACPI: Early table checksum verification disabled
Mar 13 01:12:59.954165 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Mar 13 01:12:59.954176 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 01:12:59.954188 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 01:12:59.954199 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 01:12:59.954211 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Mar 13 01:12:59.954235 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 01:12:59.954246 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 01:12:59.954261 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 01:12:59.954272 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 01:12:59.954322 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Mar 13 01:12:59.954340 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Mar 13 01:12:59.954352 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Mar 13 01:12:59.954363 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Mar 13 01:12:59.954390 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Mar 13 01:12:59.954402 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Mar 13 01:12:59.954414 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Mar 13 01:12:59.954425 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 13 01:12:59.954437 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Mar 13 01:12:59.954449 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Mar 13 01:12:59.954473 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff]
Mar 13 01:12:59.954485 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff]
Mar 13 01:12:59.954503 kernel: Zone ranges:
Mar 13 01:12:59.954515 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 13 01:12:59.954527 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Mar 13 01:12:59.954538 kernel: Normal empty
Mar 13 01:12:59.954550 kernel: Device empty
Mar 13 01:12:59.954562 kernel: Movable zone start for each node
Mar 13 01:12:59.954574 kernel: Early memory node ranges
Mar 13 01:12:59.954586 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 13 01:12:59.954597 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Mar 13 01:12:59.954611 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Mar 13 01:12:59.954627 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 13 01:12:59.954639 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 13 01:12:59.954651 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Mar 13 01:12:59.954663 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 13 01:12:59.954675 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 13 01:12:59.954687 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 13 01:12:59.954699 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 13 01:12:59.954711 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 13 01:12:59.954723 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 13 01:12:59.954759 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 13 01:12:59.954771 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 13 01:12:59.954783 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 13 01:12:59.954795 kernel: TSC deadline timer available
Mar 13 01:12:59.954807 kernel: CPU topo: Max. logical packages: 16
Mar 13 01:12:59.954819 kernel: CPU topo: Max. logical dies: 16
Mar 13 01:12:59.954830 kernel: CPU topo: Max. dies per package: 1
Mar 13 01:12:59.954842 kernel: CPU topo: Max. threads per core: 1
Mar 13 01:12:59.954854 kernel: CPU topo: Num. cores per package: 1
Mar 13 01:12:59.954870 kernel: CPU topo: Num. threads per package: 1
Mar 13 01:12:59.954882 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs
Mar 13 01:12:59.954894 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 13 01:12:59.954906 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 13 01:12:59.954918 kernel: Booting paravirtualized kernel on KVM
Mar 13 01:12:59.954930 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 13 01:12:59.954942 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Mar 13 01:12:59.954954 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144
Mar 13 01:12:59.954966 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152
Mar 13 01:12:59.954982 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Mar 13 01:12:59.954994 kernel: kvm-guest: PV spinlocks enabled
Mar 13 01:12:59.955006 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 13 01:12:59.955020 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 01:12:59.955045 kernel: random: crng init done
Mar 13 01:12:59.955056 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 13 01:12:59.955068 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 13 01:12:59.955089 kernel: Fallback order for Node 0: 0
Mar 13 01:12:59.955115 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154
Mar 13 01:12:59.955127 kernel: Policy zone: DMA32
Mar 13 01:12:59.955151 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 13 01:12:59.955166 kernel: software IO TLB: area num 16.
Mar 13 01:12:59.955184 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Mar 13 01:12:59.955197 kernel: Kernel/User page tables isolation: enabled
Mar 13 01:12:59.955209 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 13 01:12:59.955221 kernel: ftrace: allocated 157 pages with 5 groups
Mar 13 01:12:59.955233 kernel: Dynamic Preempt: voluntary
Mar 13 01:12:59.955251 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 13 01:12:59.955264 kernel: rcu: RCU event tracing is enabled.
Mar 13 01:12:59.955276 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Mar 13 01:12:59.955302 kernel: Trampoline variant of Tasks RCU enabled.
Mar 13 01:12:59.955318 kernel: Rude variant of Tasks RCU enabled.
Mar 13 01:12:59.955330 kernel: Tracing variant of Tasks RCU enabled.
Mar 13 01:12:59.955342 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 13 01:12:59.955354 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Mar 13 01:12:59.955366 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 13 01:12:59.955378 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 13 01:12:59.955397 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 13 01:12:59.955409 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Mar 13 01:12:59.955421 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 13 01:12:59.955444 kernel: Console: colour VGA+ 80x25
Mar 13 01:12:59.955461 kernel: printk: legacy console [tty0] enabled
Mar 13 01:12:59.955473 kernel: printk: legacy console [ttyS0] enabled
Mar 13 01:12:59.955486 kernel: ACPI: Core revision 20240827
Mar 13 01:12:59.955498 kernel: APIC: Switch to symmetric I/O mode setup
Mar 13 01:12:59.955511 kernel: x2apic enabled
Mar 13 01:12:59.955523 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 13 01:12:59.955536 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Mar 13 01:12:59.955554 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Mar 13 01:12:59.955566 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 13 01:12:59.955579 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 13 01:12:59.955591 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 13 01:12:59.955604 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 13 01:12:59.955620 kernel: Spectre V2 : Mitigation: Retpolines
Mar 13 01:12:59.955633 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 13 01:12:59.955646 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Mar 13 01:12:59.955658 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 13 01:12:59.955670 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 13 01:12:59.955683 kernel: MDS: Mitigation: Clear CPU buffers
Mar 13 01:12:59.955695 kernel: MMIO Stale Data: Unknown: No mitigations
Mar 13 01:12:59.955707 kernel: SRBDS: Unknown: Dependent on hypervisor status
Mar 13 01:12:59.955719 kernel: active return thunk: its_return_thunk
Mar 13 01:12:59.955731 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 13 01:12:59.955755 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 13 01:12:59.955773 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 13 01:12:59.955786 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 13 01:12:59.955798 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 13 01:12:59.955811 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 13 01:12:59.955823 kernel: Freeing SMP alternatives memory: 32K
Mar 13 01:12:59.955835 kernel: pid_max: default: 32768 minimum: 301
Mar 13 01:12:59.955847 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 13 01:12:59.955860 kernel: landlock: Up and running.
Mar 13 01:12:59.955872 kernel: SELinux: Initializing.
Mar 13 01:12:59.955884 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 13 01:12:59.955897 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 13 01:12:59.955909 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Mar 13 01:12:59.955927 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Mar 13 01:12:59.955939 kernel: signal: max sigframe size: 1776
Mar 13 01:12:59.955952 kernel: rcu: Hierarchical SRCU implementation.
Mar 13 01:12:59.955965 kernel: rcu: Max phase no-delay instances is 400.
Mar 13 01:12:59.955978 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level
Mar 13 01:12:59.955990 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 13 01:12:59.956003 kernel: smp: Bringing up secondary CPUs ...
Mar 13 01:12:59.956015 kernel: smpboot: x86: Booting SMP configuration:
Mar 13 01:12:59.956028 kernel: .... node #0, CPUs: #1
Mar 13 01:12:59.956045 kernel: smp: Brought up 1 node, 2 CPUs
Mar 13 01:12:59.956058 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Mar 13 01:12:59.956071 kernel: Memory: 1887476K/2096616K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 203124K reserved, 0K cma-reserved)
Mar 13 01:12:59.956084 kernel: devtmpfs: initialized
Mar 13 01:12:59.956096 kernel: x86/mm: Memory block size: 128MB
Mar 13 01:12:59.956109 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 13 01:12:59.956122 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Mar 13 01:12:59.956135 kernel: pinctrl core: initialized pinctrl subsystem
Mar 13 01:12:59.956150 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 13 01:12:59.956178 kernel: audit: initializing netlink subsys (disabled)
Mar 13 01:12:59.956193 kernel: audit: type=2000 audit(1773364375.508:1): state=initialized audit_enabled=0 res=1
Mar 13 01:12:59.956205 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 13 01:12:59.956218 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 13 01:12:59.956230 kernel: cpuidle: using governor menu
Mar 13 01:12:59.956243 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 13 01:12:59.956255 kernel: dca service started, version 1.12.1
Mar 13 01:12:59.956280 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Mar 13 01:12:59.956295 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 13 01:12:59.956313 kernel: PCI: Using configuration type 1 for base access
Mar 13 01:12:59.956326 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 13 01:12:59.956339 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 13 01:12:59.956381 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 13 01:12:59.956399 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 13 01:12:59.956412 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 13 01:12:59.956425 kernel: ACPI: Added _OSI(Module Device)
Mar 13 01:12:59.956437 kernel: ACPI: Added _OSI(Processor Device)
Mar 13 01:12:59.956456 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 13 01:12:59.956488 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 13 01:12:59.956501 kernel: ACPI: Interpreter enabled
Mar 13 01:12:59.956514 kernel: ACPI: PM: (supports S0 S5)
Mar 13 01:12:59.956526 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 13 01:12:59.956539 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 13 01:12:59.956552 kernel: PCI: Using E820 reservations for host bridge windows
Mar 13 01:12:59.956564 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 13 01:12:59.956577 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 13 01:12:59.956891 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 13 01:12:59.957073 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 13 01:12:59.957246 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 13 01:12:59.957265 kernel: PCI host bridge to bus 0000:00
Mar 13 01:12:59.957472 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 13 01:12:59.957622 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 13 01:12:59.957784 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 13 01:12:59.957955 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Mar 13 01:12:59.958168 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 13 01:12:59.958331 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Mar 13 01:12:59.958499 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 13 01:12:59.958689 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 13 01:12:59.958898 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint
Mar 13 01:12:59.959070 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref]
Mar 13 01:12:59.959254 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff]
Mar 13 01:12:59.961495 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref]
Mar 13 01:12:59.961660 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 13 01:12:59.961864 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 13 01:12:59.962028 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff]
Mar 13 01:12:59.962188 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 13 01:12:59.963141 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 13 01:12:59.963326 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 13 01:12:59.963510 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 13 01:12:59.963682 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff]
Mar 13 01:12:59.963855 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 13 01:12:59.964013 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 13 01:12:59.964169 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 13 01:12:59.965651 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 13 01:12:59.965836 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff]
Mar 13 01:12:59.965997 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 13 01:12:59.966155 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 13 01:12:59.968653 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 13 01:12:59.968848 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 13 01:12:59.969010 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff]
Mar 13 01:12:59.969177 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 13 01:12:59.969400 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 13 01:12:59.969560 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 13 01:12:59.969728 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 13 01:12:59.969901 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff]
Mar 13 01:12:59.970059 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 13 01:12:59.970215 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 13 01:12:59.971133 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 13 01:12:59.971335 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 13 01:12:59.971499 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff]
Mar 13 01:12:59.971657 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 13 01:12:59.971829 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 13 01:12:59.971987 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 13 01:12:59.972153 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 13 01:12:59.974087 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff]
Mar 13 01:12:59.974252 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 13 01:12:59.974471 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 13 01:12:59.974631 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 13 01:12:59.974821 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Mar 13 01:12:59.974982 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff]
Mar 13 01:12:59.975149 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 13 01:12:59.975338 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 13 01:12:59.975498 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 13 01:12:59.975666 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 13 01:12:59.975840 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df]
Mar 13 01:12:59.975998 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff]
Mar 13 01:12:59.976156 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref]
Mar 13 01:12:59.982743 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref]
Mar 13 01:12:59.982939 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Mar 13 01:12:59.983105 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f]
Mar 13 01:12:59.983282 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfea5a000-0xfea5afff]
Mar 13 01:12:59.983449 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref]
Mar 13 01:12:59.983617 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Mar 13 01:12:59.983791 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 13 01:12:59.983962 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Mar 13 01:12:59.984130 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff]
Mar 13 01:12:59.984406 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff]
Mar 13 01:12:59.984587 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Mar 13 01:12:59.984761 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Mar 13 01:12:59.984946 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Mar 13 01:12:59.985114 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit]
Mar 13 01:12:59.985308 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 13 01:12:59.985477 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 13 01:12:59.985640 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 13 01:12:59.985841 kernel: pci_bus 0000:02: extended config space not accessible
Mar 13 01:12:59.986027 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint
Mar 13 01:12:59.986202 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f]
Mar 13 01:12:59.986399 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 13 01:12:59.986584 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Mar 13 01:12:59.986764 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit]
Mar 13 01:12:59.986931 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 13 01:12:59.987114 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Mar 13 01:12:59.987329 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref]
Mar 13 01:12:59.987495 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 13 01:12:59.987665 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 13 01:12:59.987843 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 13 01:12:59.988011 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 13 01:12:59.988174 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 13 01:12:59.989386 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 13 01:12:59.989410 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 13 01:12:59.989424 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 13 01:12:59.989437 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 13 01:12:59.989458 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 13 01:12:59.989471 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 13 01:12:59.989484 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 13 01:12:59.989497 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 13 01:12:59.989509 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 13 01:12:59.989522 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 13 01:12:59.989535 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 13 01:12:59.989548 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 13 01:12:59.989560 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 13 01:12:59.989578 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 13 01:12:59.989604 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 13 01:12:59.989618 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 13 01:12:59.989631 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 13 01:12:59.989648 kernel: iommu: Default domain type: Translated
Mar 13 01:12:59.989670 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 13 01:12:59.989683 kernel: PCI: Using ACPI for IRQ routing
Mar 13 01:12:59.989696 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 13 01:12:59.989708 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 13 01:12:59.989727 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Mar 13 01:12:59.989902 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 13 01:12:59.990064 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 13 01:12:59.990224 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 13 01:12:59.990244 kernel: vgaarb: loaded
Mar 13 01:12:59.990257 kernel: clocksource: Switched to clocksource kvm-clock
Mar 13 01:12:59.993302 kernel: VFS: Disk quotas dquot_6.6.0
Mar 13 01:12:59.993320 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 13 01:12:59.993341 kernel: pnp: PnP ACPI init
Mar 13 01:12:59.993523 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 13 01:12:59.993544 kernel: pnp: PnP ACPI: found 5 devices
Mar 13 01:12:59.993558 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 13 01:12:59.993571 kernel: NET: Registered PF_INET protocol family
Mar 13 01:12:59.993584 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 13 01:12:59.993597 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 13 01:12:59.993610 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 13 01:12:59.993629 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 13 01:12:59.993643 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 13 01:12:59.993656 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 13 01:12:59.993668 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 13 01:12:59.993681 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 13 01:12:59.993694 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 13 01:12:59.993707 kernel: NET: Registered PF_XDP protocol family
Mar 13 01:12:59.993885 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Mar 13 01:12:59.994052 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Mar 13 01:12:59.994222 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Mar 13 01:12:59.994407 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Mar 13 01:12:59.994570 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Mar 13 01:12:59.994743 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 13 01:12:59.994907 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 13 01:12:59.995070 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 13 01:12:59.995231 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Mar 13 01:12:59.995410 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Mar 13 01:12:59.995581 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Mar 13 01:12:59.995757 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Mar 13 01:12:59.995920 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Mar 13 01:12:59.996081 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Mar 13 01:12:59.996240 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Mar 13 01:12:59.997468 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Mar 13 01:12:59.997667 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 13 01:12:59.998001 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 13 01:12:59.998208 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 13 01:12:59.999472 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Mar 13 01:12:59.999649 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 13 01:12:59.999831 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 13 01:12:59.999993 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 13 01:13:00.000153 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Mar 13 01:13:00.001349 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 13 01:13:00.001519 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 13 01:13:00.002418 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 13 01:13:00.002592 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Mar 13 01:13:00.002882 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 13 01:13:00.003060 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 13 01:13:00.003222 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 13 01:13:00.004439 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Mar 13 01:13:00.004605 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 13 01:13:00.004790 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 13 01:13:00.004962 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 13 01:13:00.005122 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Mar 13 01:13:00.006320 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 13 01:13:00.006497 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 13 01:13:00.006684 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 13 01:13:00.006863 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Mar 13 01:13:00.007024 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 13 01:13:00.007183 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 13 01:13:00.007915 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 13 01:13:00.008082 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Mar 13 01:13:00.008243 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 13 01:13:00.008496 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 13 01:13:00.008682 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 13 01:13:00.008894 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Mar 13 01:13:00.009054 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 13 01:13:00.009213 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 13 01:13:00.009388 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 13 01:13:00.009534 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 13 01:13:00.009678 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 13 01:13:00.009835 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Mar 13 01:13:00.009994 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 13 01:13:00.010139 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Mar 13 01:13:00.010329 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Mar 13 01:13:00.010483 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Mar 13 01:13:00.010631 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 13 01:13:00.010809 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Mar 13 01:13:00.010969 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Mar 13 01:13:00.011118 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Mar 13 01:13:00.011294 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 13 01:13:00.011456 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Mar 13 01:13:00.011614 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Mar 13 01:13:00.011779 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 13 01:13:00.011955 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Mar 13 01:13:00.012105 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Mar 13 01:13:00.012532 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 13 01:13:00.012715 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Mar 13 01:13:00.012879 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Mar 13 01:13:00.013029 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 13 01:13:00.013186 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Mar 13 01:13:00.013925 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Mar 13
01:13:00.014085 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Mar 13 01:13:00.014248 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Mar 13 01:13:00.014432 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Mar 13 01:13:00.014619 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Mar 13 01:13:00.014799 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Mar 13 01:13:00.014953 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Mar 13 01:13:00.015104 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Mar 13 01:13:00.015126 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 13 01:13:00.015140 kernel: PCI: CLS 0 bytes, default 64 Mar 13 01:13:00.015161 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Mar 13 01:13:00.015175 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Mar 13 01:13:00.015193 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 13 01:13:00.015207 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Mar 13 01:13:00.015221 kernel: Initialise system trusted keyrings Mar 13 01:13:00.015234 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Mar 13 01:13:00.015247 kernel: Key type asymmetric registered Mar 13 01:13:00.015261 kernel: Asymmetric key parser 'x509' registered Mar 13 01:13:00.015318 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 13 01:13:00.015333 kernel: io scheduler mq-deadline registered Mar 13 01:13:00.015346 kernel: io scheduler kyber registered Mar 13 01:13:00.015359 kernel: io scheduler bfq registered Mar 13 01:13:00.015528 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Mar 13 01:13:00.015727 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Mar 13 01:13:00.015909 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ 
PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 13 01:13:00.016080 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Mar 13 01:13:00.016250 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Mar 13 01:13:00.016786 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 13 01:13:00.016950 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Mar 13 01:13:00.017110 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Mar 13 01:13:00.017286 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 13 01:13:00.017452 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Mar 13 01:13:00.017619 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Mar 13 01:13:00.017791 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 13 01:13:00.017951 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Mar 13 01:13:00.018109 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Mar 13 01:13:00.018316 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 13 01:13:00.018482 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Mar 13 01:13:00.018648 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Mar 13 01:13:00.018823 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 13 01:13:00.018982 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Mar 13 01:13:00.019139 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Mar 13 01:13:00.019314 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- 
AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 13 01:13:00.019473 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Mar 13 01:13:00.019640 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Mar 13 01:13:00.019812 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 13 01:13:00.019834 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 13 01:13:00.019849 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 13 01:13:00.019862 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 13 01:13:00.019875 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 13 01:13:00.019889 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 13 01:13:00.019909 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 13 01:13:00.019923 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 13 01:13:00.019937 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 13 01:13:00.020106 kernel: rtc_cmos 00:03: RTC can wake from S4 Mar 13 01:13:00.020129 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 13 01:13:00.020293 kernel: rtc_cmos 00:03: registered as rtc0 Mar 13 01:13:00.020447 kernel: rtc_cmos 00:03: setting system clock to 2026-03-13T01:12:59 UTC (1773364379) Mar 13 01:13:00.020610 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Mar 13 01:13:00.020637 kernel: intel_pstate: CPU model not supported Mar 13 01:13:00.020651 kernel: NET: Registered PF_INET6 protocol family Mar 13 01:13:00.020665 kernel: Segment Routing with IPv6 Mar 13 01:13:00.020685 kernel: In-situ OAM (IOAM) with IPv6 Mar 13 01:13:00.020699 kernel: NET: Registered PF_PACKET protocol family Mar 13 01:13:00.020712 kernel: Key type dns_resolver registered Mar 13 01:13:00.020725 kernel: IPI shorthand broadcast: enabled Mar 13 01:13:00.020756 kernel: 
sched_clock: Marking stable (3550005843, 227102378)->(3903276190, -126167969) Mar 13 01:13:00.020770 kernel: registered taskstats version 1 Mar 13 01:13:00.020789 kernel: Loading compiled-in X.509 certificates Mar 13 01:13:00.020803 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 5aff49df330f42445474818d085d5033fee752d8' Mar 13 01:13:00.020816 kernel: Demotion targets for Node 0: null Mar 13 01:13:00.020829 kernel: Key type .fscrypt registered Mar 13 01:13:00.020842 kernel: Key type fscrypt-provisioning registered Mar 13 01:13:00.020855 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 13 01:13:00.020869 kernel: ima: Allocated hash algorithm: sha1 Mar 13 01:13:00.020882 kernel: ima: No architecture policies found Mar 13 01:13:00.020895 kernel: clk: Disabling unused clocks Mar 13 01:13:00.020908 kernel: Warning: unable to open an initial console. Mar 13 01:13:00.020927 kernel: Freeing unused kernel image (initmem) memory: 46200K Mar 13 01:13:00.020940 kernel: Write protecting the kernel read-only data: 40960k Mar 13 01:13:00.020953 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Mar 13 01:13:00.020967 kernel: Run /init as init process Mar 13 01:13:00.020980 kernel: with arguments: Mar 13 01:13:00.020993 kernel: /init Mar 13 01:13:00.021006 kernel: with environment: Mar 13 01:13:00.021019 kernel: HOME=/ Mar 13 01:13:00.021032 kernel: TERM=linux Mar 13 01:13:00.021058 systemd[1]: Successfully made /usr/ read-only. Mar 13 01:13:00.021077 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 13 01:13:00.021092 systemd[1]: Detected virtualization kvm. 
Mar 13 01:13:00.021106 systemd[1]: Detected architecture x86-64. Mar 13 01:13:00.021120 systemd[1]: Running in initrd. Mar 13 01:13:00.021134 systemd[1]: No hostname configured, using default hostname. Mar 13 01:13:00.021149 systemd[1]: Hostname set to . Mar 13 01:13:00.021168 systemd[1]: Initializing machine ID from VM UUID. Mar 13 01:13:00.021183 systemd[1]: Queued start job for default target initrd.target. Mar 13 01:13:00.021197 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 13 01:13:00.021212 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 13 01:13:00.021226 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 13 01:13:00.021241 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 13 01:13:00.021255 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 13 01:13:00.021293 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 13 01:13:00.021309 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 13 01:13:00.021324 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 13 01:13:00.021338 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 13 01:13:00.021353 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 13 01:13:00.021367 systemd[1]: Reached target paths.target - Path Units. Mar 13 01:13:00.021381 systemd[1]: Reached target slices.target - Slice Units. Mar 13 01:13:00.021395 systemd[1]: Reached target swap.target - Swaps. Mar 13 01:13:00.021415 systemd[1]: Reached target timers.target - Timer Units. 
Mar 13 01:13:00.021429 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 13 01:13:00.021444 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 13 01:13:00.021458 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 13 01:13:00.021472 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 13 01:13:00.021487 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 13 01:13:00.021501 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 13 01:13:00.021515 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 13 01:13:00.021534 systemd[1]: Reached target sockets.target - Socket Units. Mar 13 01:13:00.021549 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 13 01:13:00.021564 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 13 01:13:00.021578 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 13 01:13:00.021593 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Mar 13 01:13:00.021607 systemd[1]: Starting systemd-fsck-usr.service... Mar 13 01:13:00.021621 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 13 01:13:00.021635 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 13 01:13:00.021650 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 01:13:00.021715 systemd-journald[210]: Collecting audit messages is disabled. Mar 13 01:13:00.021760 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 13 01:13:00.021783 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 13 01:13:00.021798 systemd[1]: Finished systemd-fsck-usr.service. 
Mar 13 01:13:00.021813 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 13 01:13:00.021827 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 13 01:13:00.021841 kernel: Bridge firewalling registered Mar 13 01:13:00.021855 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 13 01:13:00.021875 systemd-journald[210]: Journal started Mar 13 01:13:00.021906 systemd-journald[210]: Runtime Journal (/run/log/journal/4c77ba597a564220972b155e0f6b9136) is 4.7M, max 37.8M, 33.1M free. Mar 13 01:12:59.956518 systemd-modules-load[212]: Inserted module 'overlay' Mar 13 01:13:00.010918 systemd-modules-load[212]: Inserted module 'br_netfilter' Mar 13 01:13:00.083305 systemd[1]: Started systemd-journald.service - Journal Service. Mar 13 01:13:00.084119 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 01:13:00.085182 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 13 01:13:00.090463 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 13 01:13:00.093440 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 13 01:13:00.096458 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 13 01:13:00.101408 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 13 01:13:00.122460 systemd-tmpfiles[232]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Mar 13 01:13:00.124619 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 13 01:13:00.127377 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 13 01:13:00.131613 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 13 01:13:00.137398 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 13 01:13:00.140045 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 13 01:13:00.144468 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 13 01:13:00.171411 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d Mar 13 01:13:00.194935 systemd-resolved[247]: Positive Trust Anchors: Mar 13 01:13:00.194969 systemd-resolved[247]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 13 01:13:00.195014 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 13 01:13:00.199334 systemd-resolved[247]: Defaulting to hostname 'linux'. Mar 13 01:13:00.201722 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 13 01:13:00.202516 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Mar 13 01:13:00.297335 kernel: SCSI subsystem initialized Mar 13 01:13:00.308296 kernel: Loading iSCSI transport class v2.0-870. Mar 13 01:13:00.322288 kernel: iscsi: registered transport (tcp) Mar 13 01:13:00.348775 kernel: iscsi: registered transport (qla4xxx) Mar 13 01:13:00.348820 kernel: QLogic iSCSI HBA Driver Mar 13 01:13:00.375533 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 13 01:13:00.395458 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 13 01:13:00.397029 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 13 01:13:00.460365 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 13 01:13:00.463435 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 13 01:13:00.526326 kernel: raid6: sse2x4 gen() 13271 MB/s Mar 13 01:13:00.544305 kernel: raid6: sse2x2 gen() 9046 MB/s Mar 13 01:13:00.563103 kernel: raid6: sse2x1 gen() 9638 MB/s Mar 13 01:13:00.563142 kernel: raid6: using algorithm sse2x4 gen() 13271 MB/s Mar 13 01:13:00.581963 kernel: raid6: .... xor() 7401 MB/s, rmw enabled Mar 13 01:13:00.582019 kernel: raid6: using ssse3x2 recovery algorithm Mar 13 01:13:00.607315 kernel: xor: automatically using best checksumming function avx Mar 13 01:13:00.800319 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 13 01:13:00.809301 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 13 01:13:00.812769 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 13 01:13:00.847578 systemd-udevd[459]: Using default interface naming scheme 'v255'. Mar 13 01:13:00.857131 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 13 01:13:00.860931 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Mar 13 01:13:00.891366 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Mar 13 01:13:00.924973 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 13 01:13:00.928764 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 13 01:13:01.048538 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 13 01:13:01.052469 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 13 01:13:01.160330 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Mar 13 01:13:01.172355 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Mar 13 01:13:01.182287 kernel: cryptd: max_cpu_qlen set to 1000 Mar 13 01:13:01.199337 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Mar 13 01:13:01.204536 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 13 01:13:01.204570 kernel: GPT:17805311 != 125829119 Mar 13 01:13:01.204598 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 13 01:13:01.206090 kernel: GPT:17805311 != 125829119 Mar 13 01:13:01.207597 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 13 01:13:01.209917 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 13 01:13:01.212283 kernel: AES CTR mode by8 optimization enabled Mar 13 01:13:01.234762 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 13 01:13:01.236139 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 01:13:01.237829 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 01:13:01.247527 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 01:13:01.249690 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Mar 13 01:13:01.292297 kernel: ACPI: bus type USB registered Mar 13 01:13:01.297291 kernel: usbcore: registered new interface driver usbfs Mar 13 01:13:01.297366 kernel: usbcore: registered new interface driver hub Mar 13 01:13:01.297394 kernel: usbcore: registered new device driver usb Mar 13 01:13:01.341499 kernel: libata version 3.00 loaded. Mar 13 01:13:01.340830 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 13 01:13:01.385624 kernel: ahci 0000:00:1f.2: version 3.0 Mar 13 01:13:01.385902 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 13 01:13:01.385925 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Mar 13 01:13:01.386121 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Mar 13 01:13:01.389504 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 13 01:13:01.389744 kernel: scsi host0: ahci Mar 13 01:13:01.389796 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 13 01:13:01.390016 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Mar 13 01:13:01.390213 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 13 01:13:01.387760 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 13 01:13:01.430591 kernel: scsi host1: ahci Mar 13 01:13:01.430845 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 13 01:13:01.431063 kernel: scsi host2: ahci Mar 13 01:13:01.431260 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Mar 13 01:13:01.431475 kernel: scsi host3: ahci Mar 13 01:13:01.431669 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Mar 13 01:13:01.431879 kernel: scsi host4: ahci Mar 13 01:13:01.432072 kernel: hub 1-0:1.0: USB hub found Mar 13 01:13:01.432308 kernel: scsi host5: ahci Mar 13 01:13:01.432506 kernel: hub 1-0:1.0: 4 ports detected Mar 13 01:13:01.432729 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 lpm-pol 1 Mar 13 01:13:01.432758 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Mar 13 01:13:01.433030 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 lpm-pol 1 Mar 13 01:13:01.433051 kernel: hub 2-0:1.0: USB hub found Mar 13 01:13:01.433290 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 lpm-pol 1 Mar 13 01:13:01.433320 kernel: hub 2-0:1.0: 4 ports detected Mar 13 01:13:01.433531 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 lpm-pol 1 Mar 13 01:13:01.433551 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 lpm-pol 1 Mar 13 01:13:01.433569 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 lpm-pol 1 Mar 13 01:13:01.442370 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 13 01:13:01.480297 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 13 01:13:01.490863 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Mar 13 01:13:01.491755 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 13 01:13:01.495457 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 13 01:13:01.519300 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 13 01:13:01.519494 disk-uuid[609]: Primary Header is updated. Mar 13 01:13:01.519494 disk-uuid[609]: Secondary Entries is updated. Mar 13 01:13:01.519494 disk-uuid[609]: Secondary Header is updated. Mar 13 01:13:01.647297 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 13 01:13:01.732806 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 13 01:13:01.732866 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 13 01:13:01.732899 kernel: ata3: SATA link down (SStatus 0 SControl 300) Mar 13 01:13:01.733294 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 13 01:13:01.736183 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 13 01:13:01.738347 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 13 01:13:01.754353 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 13 01:13:01.757176 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 13 01:13:01.758984 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 13 01:13:01.760712 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 13 01:13:01.763716 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 13 01:13:01.791157 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Mar 13 01:13:01.793392 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 13 01:13:01.799558 kernel: usbcore: registered new interface driver usbhid Mar 13 01:13:01.799606 kernel: usbhid: USB HID core driver Mar 13 01:13:01.806297 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Mar 13 01:13:01.810306 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Mar 13 01:13:02.535333 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 13 01:13:02.535908 disk-uuid[610]: The operation has completed successfully. Mar 13 01:13:02.603104 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 13 01:13:02.604332 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 13 01:13:02.650600 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 13 01:13:02.670856 sh[637]: Success Mar 13 01:13:02.698432 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 13 01:13:02.698549 kernel: device-mapper: uevent: version 1.0.3 Mar 13 01:13:02.702308 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Mar 13 01:13:02.714298 kernel: device-mapper: verity: sha256 using shash "sha256-avx" Mar 13 01:13:02.768547 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 13 01:13:02.770622 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 13 01:13:02.781691 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 13 01:13:02.798385 kernel: BTRFS: device fsid 503642f8-c59c-4168-97a8-9c3603183fa3 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (649) Mar 13 01:13:02.803807 kernel: BTRFS info (device dm-0): first mount of filesystem 503642f8-c59c-4168-97a8-9c3603183fa3 Mar 13 01:13:02.803887 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 13 01:13:02.818208 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Mar 13 01:13:02.818303 kernel: BTRFS info (device dm-0 state E): enabling free space tree Mar 13 01:13:02.821070 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 13 01:13:02.822527 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Mar 13 01:13:02.824410 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 13 01:13:02.826439 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 13 01:13:02.829414 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 13 01:13:02.866308 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (681) Mar 13 01:13:02.869496 kernel: BTRFS info (device vda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52 Mar 13 01:13:02.872284 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 13 01:13:02.879488 kernel: BTRFS info (device vda6): turning on async discard Mar 13 01:13:02.879533 kernel: BTRFS info (device vda6): enabling free space tree Mar 13 01:13:02.887404 kernel: BTRFS info (device vda6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52 Mar 13 01:13:02.889621 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 13 01:13:02.893480 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Mar 13 01:13:02.983361 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 01:13:02.986482 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 01:13:03.050699 systemd-networkd[818]: lo: Link UP
Mar 13 01:13:03.050712 systemd-networkd[818]: lo: Gained carrier
Mar 13 01:13:03.052912 systemd-networkd[818]: Enumeration completed
Mar 13 01:13:03.053066 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 13 01:13:03.054070 systemd-networkd[818]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 01:13:03.054077 systemd-networkd[818]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 13 01:13:03.054763 systemd-networkd[818]: eth0: Link UP
Mar 13 01:13:03.056984 systemd-networkd[818]: eth0: Gained carrier
Mar 13 01:13:03.056999 systemd-networkd[818]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 01:13:03.059018 systemd[1]: Reached target network.target - Network.
Mar 13 01:13:03.084361 systemd-networkd[818]: eth0: DHCPv4 address 10.230.35.114/30, gateway 10.230.35.113 acquired from 10.230.35.113
Mar 13 01:13:03.116160 ignition[737]: Ignition 2.22.0
Mar 13 01:13:03.117383 ignition[737]: Stage: fetch-offline
Mar 13 01:13:03.117471 ignition[737]: no configs at "/usr/lib/ignition/base.d"
Mar 13 01:13:03.117490 ignition[737]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 13 01:13:03.118225 ignition[737]: parsed url from cmdline: ""
Mar 13 01:13:03.120443 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 13 01:13:03.118232 ignition[737]: no config URL provided
Mar 13 01:13:03.118242 ignition[737]: reading system config file "/usr/lib/ignition/user.ign"
Mar 13 01:13:03.118279 ignition[737]: no config at "/usr/lib/ignition/user.ign"
Mar 13 01:13:03.118304 ignition[737]: failed to fetch config: resource requires networking
Mar 13 01:13:03.124476 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 13 01:13:03.118557 ignition[737]: Ignition finished successfully
Mar 13 01:13:03.166736 ignition[828]: Ignition 2.22.0
Mar 13 01:13:03.167377 ignition[828]: Stage: fetch
Mar 13 01:13:03.167626 ignition[828]: no configs at "/usr/lib/ignition/base.d"
Mar 13 01:13:03.167646 ignition[828]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 13 01:13:03.167798 ignition[828]: parsed url from cmdline: ""
Mar 13 01:13:03.167805 ignition[828]: no config URL provided
Mar 13 01:13:03.167815 ignition[828]: reading system config file "/usr/lib/ignition/user.ign"
Mar 13 01:13:03.167834 ignition[828]: no config at "/usr/lib/ignition/user.ign"
Mar 13 01:13:03.168039 ignition[828]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Mar 13 01:13:03.168183 ignition[828]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Mar 13 01:13:03.168223 ignition[828]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Mar 13 01:13:03.199960 ignition[828]: GET result: OK
Mar 13 01:13:03.200243 ignition[828]: parsing config with SHA512: 91c0106e602dd6553fe08a37ab6a3f8661421707fa03dd57a2e2f59e3cf52cddf687c15a56a2217f00ee0c348e3519ccbebd5558b3f2f8673a5435f15c220baf
Mar 13 01:13:03.208094 unknown[828]: fetched base config from "system"
Mar 13 01:13:03.208110 unknown[828]: fetched base config from "system"
Mar 13 01:13:03.208128 unknown[828]: fetched user config from "openstack"
Mar 13 01:13:03.208956 ignition[828]: fetch: fetch complete
Mar 13 01:13:03.208972 ignition[828]: fetch: fetch passed
Mar 13 01:13:03.211289 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 13 01:13:03.209040 ignition[828]: Ignition finished successfully
Mar 13 01:13:03.214457 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 13 01:13:03.255948 ignition[834]: Ignition 2.22.0
Mar 13 01:13:03.257075 ignition[834]: Stage: kargs
Mar 13 01:13:03.257356 ignition[834]: no configs at "/usr/lib/ignition/base.d"
Mar 13 01:13:03.257376 ignition[834]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 13 01:13:03.259913 ignition[834]: kargs: kargs passed
Mar 13 01:13:03.259991 ignition[834]: Ignition finished successfully
Mar 13 01:13:03.261837 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 13 01:13:03.264878 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 13 01:13:03.304901 ignition[841]: Ignition 2.22.0
Mar 13 01:13:03.304924 ignition[841]: Stage: disks
Mar 13 01:13:03.305090 ignition[841]: no configs at "/usr/lib/ignition/base.d"
Mar 13 01:13:03.305107 ignition[841]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 13 01:13:03.307639 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 13 01:13:03.306020 ignition[841]: disks: disks passed
Mar 13 01:13:03.310008 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 13 01:13:03.306087 ignition[841]: Ignition finished successfully
Mar 13 01:13:03.311100 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 13 01:13:03.312573 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 13 01:13:03.314152 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 13 01:13:03.315512 systemd[1]: Reached target basic.target - Basic System.
Mar 13 01:13:03.319333 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 13 01:13:03.351017 systemd-fsck[849]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Mar 13 01:13:03.355030 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 13 01:13:03.357325 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 13 01:13:03.489298 kernel: EXT4-fs (vda9): mounted filesystem 26348f72-0225-4c06-aedc-823e61beebc6 r/w with ordered data mode. Quota mode: none.
Mar 13 01:13:03.490295 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 13 01:13:03.491571 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 13 01:13:03.494003 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 13 01:13:03.496328 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 13 01:13:03.499783 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 13 01:13:03.506123 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Mar 13 01:13:03.507977 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 13 01:13:03.509415 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 13 01:13:03.512757 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 13 01:13:03.517424 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 13 01:13:03.520668 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (857)
Mar 13 01:13:03.523453 kernel: BTRFS info (device vda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 01:13:03.523487 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 01:13:03.532889 kernel: BTRFS info (device vda6): turning on async discard
Mar 13 01:13:03.532927 kernel: BTRFS info (device vda6): enabling free space tree
Mar 13 01:13:03.547787 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 13 01:13:03.612303 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Mar 13 01:13:03.618191 initrd-setup-root[886]: cut: /sysroot/etc/passwd: No such file or directory
Mar 13 01:13:03.627036 initrd-setup-root[893]: cut: /sysroot/etc/group: No such file or directory
Mar 13 01:13:03.635710 initrd-setup-root[900]: cut: /sysroot/etc/shadow: No such file or directory
Mar 13 01:13:03.642959 initrd-setup-root[907]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 13 01:13:03.753993 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 13 01:13:03.757039 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 13 01:13:03.759425 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 13 01:13:03.784456 kernel: BTRFS info (device vda6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 01:13:03.796444 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 13 01:13:03.804931 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 13 01:13:03.825751 ignition[976]: INFO : Ignition 2.22.0
Mar 13 01:13:03.827421 ignition[976]: INFO : Stage: mount
Mar 13 01:13:03.827421 ignition[976]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 01:13:03.827421 ignition[976]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 13 01:13:03.829961 ignition[976]: INFO : mount: mount passed
Mar 13 01:13:03.829961 ignition[976]: INFO : Ignition finished successfully
Mar 13 01:13:03.829719 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 13 01:13:04.377547 systemd-networkd[818]: eth0: Gained IPv6LL
Mar 13 01:13:04.651331 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Mar 13 01:13:05.885340 systemd-networkd[818]: eth0: Ignoring DHCPv6 address 2a02:1348:179:88dc:24:19ff:fee6:2372/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:88dc:24:19ff:fee6:2372/64 assigned by NDisc.
Mar 13 01:13:05.885358 systemd-networkd[818]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Mar 13 01:13:06.660319 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Mar 13 01:13:10.673297 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Mar 13 01:13:10.681786 coreos-metadata[859]: Mar 13 01:13:10.681 WARN failed to locate config-drive, using the metadata service API instead
Mar 13 01:13:10.706735 coreos-metadata[859]: Mar 13 01:13:10.706 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Mar 13 01:13:10.720640 coreos-metadata[859]: Mar 13 01:13:10.720 INFO Fetch successful
Mar 13 01:13:10.721726 coreos-metadata[859]: Mar 13 01:13:10.721 INFO wrote hostname srv-1hh7x.gb1.brightbox.com to /sysroot/etc/hostname
Mar 13 01:13:10.724206 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Mar 13 01:13:10.724469 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Mar 13 01:13:10.730734 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 13 01:13:10.754335 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 13 01:13:10.778340 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (991)
Mar 13 01:13:10.782327 kernel: BTRFS info (device vda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 01:13:10.785297 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 01:13:10.790966 kernel: BTRFS info (device vda6): turning on async discard
Mar 13 01:13:10.791005 kernel: BTRFS info (device vda6): enabling free space tree
Mar 13 01:13:10.794444 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 13 01:13:10.837284 ignition[1009]: INFO : Ignition 2.22.0
Mar 13 01:13:10.837284 ignition[1009]: INFO : Stage: files
Mar 13 01:13:10.839098 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 01:13:10.839098 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 13 01:13:10.839098 ignition[1009]: DEBUG : files: compiled without relabeling support, skipping
Mar 13 01:13:10.841893 ignition[1009]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 13 01:13:10.841893 ignition[1009]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 13 01:13:10.849912 ignition[1009]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 13 01:13:10.849912 ignition[1009]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 13 01:13:10.849912 ignition[1009]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 13 01:13:10.849912 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 13 01:13:10.849912 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 13 01:13:10.846708 unknown[1009]: wrote ssh authorized keys file for user: core
Mar 13 01:13:10.996503 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 13 01:13:11.330671 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 13 01:13:11.330671 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 13 01:13:11.333504 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 13 01:13:11.645550 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 13 01:13:12.136354 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 13 01:13:12.136354 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 13 01:13:12.136354 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 13 01:13:12.136354 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 13 01:13:12.143802 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 13 01:13:12.143802 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 13 01:13:12.143802 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 13 01:13:12.143802 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 13 01:13:12.143802 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 13 01:13:12.149859 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 13 01:13:12.149859 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 13 01:13:12.149859 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 13 01:13:12.149859 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 13 01:13:12.149859 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 13 01:13:12.149859 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 13 01:13:12.693056 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 13 01:13:16.830087 ignition[1009]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 13 01:13:16.830087 ignition[1009]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 13 01:13:16.834024 ignition[1009]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 13 01:13:16.834024 ignition[1009]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 13 01:13:16.834024 ignition[1009]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 13 01:13:16.837706 ignition[1009]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Mar 13 01:13:16.837706 ignition[1009]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Mar 13 01:13:16.837706 ignition[1009]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 13 01:13:16.837706 ignition[1009]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 13 01:13:16.837706 ignition[1009]: INFO : files: files passed
Mar 13 01:13:16.837706 ignition[1009]: INFO : Ignition finished successfully
Mar 13 01:13:16.837959 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 13 01:13:16.845573 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 13 01:13:16.848453 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 13 01:13:16.866122 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 13 01:13:16.866395 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 13 01:13:16.878767 initrd-setup-root-after-ignition[1043]: grep:
Mar 13 01:13:16.878767 initrd-setup-root-after-ignition[1039]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 01:13:16.878767 initrd-setup-root-after-ignition[1039]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 01:13:16.884402 initrd-setup-root-after-ignition[1043]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 01:13:16.883325 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 13 01:13:16.884636 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 13 01:13:16.887431 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 13 01:13:16.948541 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 13 01:13:16.949685 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 13 01:13:16.951873 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 13 01:13:16.953175 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 13 01:13:16.955011 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 13 01:13:16.957472 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 13 01:13:16.987147 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 13 01:13:16.991181 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 13 01:13:17.018986 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 13 01:13:17.020891 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 01:13:17.021780 systemd[1]: Stopped target timers.target - Timer Units.
Mar 13 01:13:17.024392 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 13 01:13:17.024633 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 13 01:13:17.025747 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 13 01:13:17.028539 systemd[1]: Stopped target basic.target - Basic System.
Mar 13 01:13:17.029830 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 13 01:13:17.031359 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 13 01:13:17.033015 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 13 01:13:17.034597 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 13 01:13:17.036272 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 13 01:13:17.037925 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 13 01:13:17.039646 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 13 01:13:17.041203 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 13 01:13:17.042936 systemd[1]: Stopped target swap.target - Swaps.
Mar 13 01:13:17.044139 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 13 01:13:17.044348 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 13 01:13:17.046162 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 13 01:13:17.047236 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 01:13:17.048807 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 13 01:13:17.049008 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 01:13:17.056179 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 13 01:13:17.056473 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 13 01:13:17.058115 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 13 01:13:17.058395 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 13 01:13:17.060362 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 13 01:13:17.060590 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 13 01:13:17.064557 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 13 01:13:17.067492 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 13 01:13:17.069098 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 13 01:13:17.070483 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 01:13:17.071432 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 13 01:13:17.071615 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 13 01:13:17.081040 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 13 01:13:17.084177 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 13 01:13:17.105757 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 13 01:13:17.112023 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 13 01:13:17.112224 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 13 01:13:17.122967 ignition[1063]: INFO : Ignition 2.22.0
Mar 13 01:13:17.122967 ignition[1063]: INFO : Stage: umount
Mar 13 01:13:17.122967 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 01:13:17.122967 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 13 01:13:17.129347 ignition[1063]: INFO : umount: umount passed
Mar 13 01:13:17.129347 ignition[1063]: INFO : Ignition finished successfully
Mar 13 01:13:17.129613 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 13 01:13:17.129857 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 13 01:13:17.131101 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 13 01:13:17.131209 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 13 01:13:17.132429 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 13 01:13:17.132499 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 13 01:13:17.133715 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 13 01:13:17.133780 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 13 01:13:17.135133 systemd[1]: Stopped target network.target - Network.
Mar 13 01:13:17.136428 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 13 01:13:17.136510 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 13 01:13:17.137872 systemd[1]: Stopped target paths.target - Path Units.
Mar 13 01:13:17.139206 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 13 01:13:17.139342 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 01:13:17.140801 systemd[1]: Stopped target slices.target - Slice Units.
Mar 13 01:13:17.142118 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 13 01:13:17.143535 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 13 01:13:17.143609 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 13 01:13:17.144940 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 13 01:13:17.145015 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 13 01:13:17.146663 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 13 01:13:17.146743 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 13 01:13:17.148248 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 13 01:13:17.148360 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 13 01:13:17.149670 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 13 01:13:17.149740 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 13 01:13:17.151525 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 13 01:13:17.153748 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 13 01:13:17.158446 systemd-networkd[818]: eth0: DHCPv6 lease lost
Mar 13 01:13:17.162163 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 13 01:13:17.163497 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 13 01:13:17.168848 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 13 01:13:17.169348 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 13 01:13:17.169549 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 13 01:13:17.172402 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 13 01:13:17.173397 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 13 01:13:17.174435 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 13 01:13:17.174510 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 01:13:17.177195 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 13 01:13:17.179320 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 13 01:13:17.179407 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 01:13:17.180226 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 13 01:13:17.183411 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 13 01:13:17.185419 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 13 01:13:17.185500 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 13 01:13:17.186395 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 13 01:13:17.186460 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 01:13:17.188104 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 01:13:17.191252 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 13 01:13:17.191388 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 13 01:13:17.199819 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 13 01:13:17.200937 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 01:13:17.203036 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 13 01:13:17.203127 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 13 01:13:17.204888 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 13 01:13:17.204945 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 01:13:17.208689 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 13 01:13:17.208765 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 01:13:17.210993 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 13 01:13:17.211075 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 13 01:13:17.212451 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 13 01:13:17.212533 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 01:13:17.215195 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 13 01:13:17.217661 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 13 01:13:17.217742 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 01:13:17.220424 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 13 01:13:17.220497 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 01:13:17.223533 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 13 01:13:17.223607 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 13 01:13:17.232462 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 13 01:13:17.232538 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 01:13:17.234143 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 01:13:17.234228 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 01:13:17.238996 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 13 01:13:17.239076 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Mar 13 01:13:17.239146 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 13 01:13:17.239226 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 13 01:13:17.241036 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 13 01:13:17.241180 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 13 01:13:17.242460 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 13 01:13:17.242597 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 13 01:13:17.245192 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 13 01:13:17.247682 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 13 01:13:17.272833 systemd[1]: Switching root.
Mar 13 01:13:17.309668 systemd-journald[210]: Journal stopped
Mar 13 01:13:19.099362 systemd-journald[210]: Received SIGTERM from PID 1 (systemd).
Mar 13 01:13:19.099495 kernel: SELinux: policy capability network_peer_controls=1
Mar 13 01:13:19.099542 kernel: SELinux: policy capability open_perms=1
Mar 13 01:13:19.099569 kernel: SELinux: policy capability extended_socket_class=1
Mar 13 01:13:19.099589 kernel: SELinux: policy capability always_check_network=0
Mar 13 01:13:19.099615 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 13 01:13:19.099650 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 13 01:13:19.099676 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 13 01:13:19.099701 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 13 01:13:19.099724 kernel: SELinux: policy capability userspace_initial_context=0
Mar 13 01:13:19.099754 kernel: audit: type=1403 audit(1773364397.807:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 13 01:13:19.099800 systemd[1]: Successfully loaded SELinux policy in 75.229ms.
Mar 13 01:13:19.099848 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.194ms.
Mar 13 01:13:19.099884 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 13 01:13:19.099919 systemd[1]: Detected virtualization kvm.
Mar 13 01:13:19.099954 systemd[1]: Detected architecture x86-64.
Mar 13 01:13:19.099982 systemd[1]: Detected first boot.
Mar 13 01:13:19.100002 systemd[1]: Hostname set to .
Mar 13 01:13:19.100033 systemd[1]: Initializing machine ID from VM UUID.
Mar 13 01:13:19.100067 zram_generator::config[1108]: No configuration found.
Mar 13 01:13:19.100088 kernel: Guest personality initialized and is inactive
Mar 13 01:13:19.100114 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 13 01:13:19.100139 kernel: Initialized host personality
Mar 13 01:13:19.100164 kernel: NET: Registered PF_VSOCK protocol family
Mar 13 01:13:19.100189 systemd[1]: Populated /etc with preset unit settings.
Mar 13 01:13:19.100211 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 13 01:13:19.100233 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 13 01:13:19.100299 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 13 01:13:19.100323 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 13 01:13:19.100345 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 13 01:13:19.100386 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 13 01:13:19.100423 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 13 01:13:19.100457 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 13 01:13:19.100481 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 13 01:13:19.100509 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 13 01:13:19.100531 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 13 01:13:19.100553 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 13 01:13:19.100574 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 01:13:19.100594 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 01:13:19.100616 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 13 01:13:19.100648 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 13 01:13:19.100678 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 13 01:13:19.100701 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 13 01:13:19.100736 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 13 01:13:19.100758 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 01:13:19.100782 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 13 01:13:19.100803 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 13 01:13:19.100838 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 13 01:13:19.100862 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 13 01:13:19.100893 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 13 01:13:19.100916 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 01:13:19.100943 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 13 01:13:19.100965 systemd[1]: Reached target slices.target - Slice Units.
Mar 13 01:13:19.100994 systemd[1]: Reached target swap.target - Swaps.
Mar 13 01:13:19.101021 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 13 01:13:19.101043 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 13 01:13:19.101078 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 13 01:13:19.101107 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 01:13:19.101135 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 01:13:19.101157 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 01:13:19.101177 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 13 01:13:19.101207 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 13 01:13:19.101229 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 13 01:13:19.101280 systemd[1]: Mounting media.mount - External Media Directory...
Mar 13 01:13:19.101306 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 01:13:19.101341 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 13 01:13:19.101364 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 13 01:13:19.101385 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 13 01:13:19.101406 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 13 01:13:19.101427 systemd[1]: Reached target machines.target - Containers.
Mar 13 01:13:19.101447 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 13 01:13:19.101467 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 01:13:19.101488 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 01:13:19.101518 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 13 01:13:19.101559 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 01:13:19.101582 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 13 01:13:19.101602 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 01:13:19.101623 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 13 01:13:19.101656 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 01:13:19.101684 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 13 01:13:19.101719 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 13 01:13:19.101741 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 13 01:13:19.101775 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 13 01:13:19.101797 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 13 01:13:19.101819 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 01:13:19.101847 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 01:13:19.101887 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 01:13:19.101915 kernel: loop: module loaded
Mar 13 01:13:19.101942 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 01:13:19.101970 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 13 01:13:19.101993 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 13 01:13:19.102029 kernel: fuse: init (API version 7.41)
Mar 13 01:13:19.102057 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 01:13:19.102083 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 13 01:13:19.102104 systemd[1]: Stopped verity-setup.service.
Mar 13 01:13:19.102130 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 01:13:19.102151 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 13 01:13:19.102171 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 13 01:13:19.102192 systemd[1]: Mounted media.mount - External Media Directory.
Mar 13 01:13:19.102212 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 13 01:13:19.102253 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 13 01:13:19.102290 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 13 01:13:19.102321 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 01:13:19.102344 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 13 01:13:19.102365 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 13 01:13:19.102387 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 01:13:19.102407 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 01:13:19.102428 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 01:13:19.102450 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 01:13:19.102492 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 13 01:13:19.102516 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 13 01:13:19.102537 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 01:13:19.102558 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 01:13:19.102590 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 01:13:19.102611 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 01:13:19.102637 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 13 01:13:19.102658 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 13 01:13:19.102679 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 13 01:13:19.102726 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 13 01:13:19.102761 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 13 01:13:19.102797 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 13 01:13:19.102854 systemd-journald[1195]: Collecting audit messages is disabled.
Mar 13 01:13:19.102903 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 13 01:13:19.102933 systemd-journald[1195]: Journal started
Mar 13 01:13:19.102980 systemd-journald[1195]: Runtime Journal (/run/log/journal/4c77ba597a564220972b155e0f6b9136) is 4.7M, max 37.8M, 33.1M free.
Mar 13 01:13:18.650655 systemd[1]: Queued start job for default target multi-user.target.
Mar 13 01:13:18.674676 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 13 01:13:18.675441 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 13 01:13:19.110290 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 13 01:13:19.110341 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 01:13:19.130314 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 13 01:13:19.143313 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 01:13:19.149357 kernel: ACPI: bus type drm_connector registered
Mar 13 01:13:19.149407 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 13 01:13:19.156598 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 01:13:19.164292 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 01:13:19.173591 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 13 01:13:19.190327 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 13 01:13:19.194292 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 01:13:19.201606 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 13 01:13:19.203797 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 13 01:13:19.204500 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 13 01:13:19.209688 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 13 01:13:19.225840 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 13 01:13:19.227508 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 13 01:13:19.229463 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 13 01:13:19.238296 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 01:13:19.275287 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 13 01:13:19.279370 kernel: loop0: detected capacity change from 0 to 110984
Mar 13 01:13:19.279726 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 13 01:13:19.283176 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 13 01:13:19.306536 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Mar 13 01:13:19.307512 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Mar 13 01:13:19.325318 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 13 01:13:19.337660 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 13 01:13:19.346403 systemd-journald[1195]: Time spent on flushing to /var/log/journal/4c77ba597a564220972b155e0f6b9136 is 90.828ms for 1179 entries.
Mar 13 01:13:19.346403 systemd-journald[1195]: System Journal (/var/log/journal/4c77ba597a564220972b155e0f6b9136) is 8M, max 584.8M, 576.8M free.
Mar 13 01:13:19.485394 systemd-journald[1195]: Received client request to flush runtime journal.
Mar 13 01:13:19.485456 kernel: loop1: detected capacity change from 0 to 128560
Mar 13 01:13:19.485484 kernel: loop2: detected capacity change from 0 to 8
Mar 13 01:13:19.485523 kernel: loop3: detected capacity change from 0 to 228704
Mar 13 01:13:19.342666 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 13 01:13:19.394884 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 13 01:13:19.470686 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 13 01:13:19.483658 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 01:13:19.490384 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 13 01:13:19.529336 kernel: loop4: detected capacity change from 0 to 110984
Mar 13 01:13:19.559291 kernel: loop5: detected capacity change from 0 to 128560
Mar 13 01:13:19.581295 kernel: loop6: detected capacity change from 0 to 8
Mar 13 01:13:19.589293 kernel: loop7: detected capacity change from 0 to 228704
Mar 13 01:13:19.600846 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Mar 13 01:13:19.600874 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Mar 13 01:13:19.609737 (sd-merge)[1274]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Mar 13 01:13:19.609802 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 01:13:19.610568 (sd-merge)[1274]: Merged extensions into '/usr'.
Mar 13 01:13:19.621822 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 01:13:19.627474 systemd[1]: Reload requested from client PID 1228 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 13 01:13:19.627508 systemd[1]: Reloading...
Mar 13 01:13:19.805330 zram_generator::config[1299]: No configuration found.
Mar 13 01:13:19.960383 ldconfig[1221]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 13 01:13:20.214255 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 13 01:13:20.215523 systemd[1]: Reloading finished in 587 ms.
Mar 13 01:13:20.237764 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 13 01:13:20.239243 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 13 01:13:20.251518 systemd[1]: Starting ensure-sysext.service...
Mar 13 01:13:20.259440 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 01:13:20.281407 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 13 01:13:20.289813 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 01:13:20.295397 systemd[1]: Reload requested from client PID 1359 ('systemctl') (unit ensure-sysext.service)...
Mar 13 01:13:20.295431 systemd[1]: Reloading...
Mar 13 01:13:20.321867 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 13 01:13:20.321920 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 13 01:13:20.322491 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 13 01:13:20.322996 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 13 01:13:20.324506 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 13 01:13:20.324939 systemd-tmpfiles[1360]: ACLs are not supported, ignoring.
Mar 13 01:13:20.325035 systemd-tmpfiles[1360]: ACLs are not supported, ignoring.
Mar 13 01:13:20.339447 systemd-tmpfiles[1360]: Detected autofs mount point /boot during canonicalization of boot.
Mar 13 01:13:20.339464 systemd-tmpfiles[1360]: Skipping /boot
Mar 13 01:13:20.360380 systemd-tmpfiles[1360]: Detected autofs mount point /boot during canonicalization of boot.
Mar 13 01:13:20.360399 systemd-tmpfiles[1360]: Skipping /boot
Mar 13 01:13:20.367087 systemd-udevd[1363]: Using default interface naming scheme 'v255'.
Mar 13 01:13:20.400303 zram_generator::config[1384]: No configuration found.
Mar 13 01:13:20.778303 kernel: mousedev: PS/2 mouse device common for all mice
Mar 13 01:13:20.842307 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Mar 13 01:13:20.882873 kernel: ACPI: button: Power Button [PWRF]
Mar 13 01:13:20.913740 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 13 01:13:20.913996 systemd[1]: Reloading finished in 617 ms.
Mar 13 01:13:20.927693 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 01:13:20.938701 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 01:13:20.983296 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 13 01:13:20.989293 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 13 01:13:20.994870 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 13 01:13:20.999315 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 01:13:21.002726 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 13 01:13:21.007656 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 13 01:13:21.008636 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 01:13:21.016321 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 13 01:13:21.019627 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 13 01:13:21.028633 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 13 01:13:21.030528 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 01:13:21.033093 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 13 01:13:21.035111 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 01:13:21.038651 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 13 01:13:21.046707 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 01:13:21.060622 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 13 01:13:21.067643 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 13 01:13:21.068439 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 01:13:21.077449 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 13 01:13:21.097047 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 01:13:21.099733 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 13 01:13:21.106781 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 13 01:13:21.109722 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 13 01:13:21.110473 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 13 01:13:21.114687 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 13 01:13:21.115490 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 01:13:21.132729 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 01:13:21.147348 systemd[1]: Finished ensure-sysext.service.
Mar 13 01:13:21.156545 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 13 01:13:21.184464 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 13 01:13:21.194816 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 13 01:13:21.206523 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 13 01:13:21.229463 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 13 01:13:21.230766 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 13 01:13:21.236442 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 01:13:21.236776 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 01:13:21.240999 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 01:13:21.242406 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 01:13:21.243712 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 01:13:21.245356 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 01:13:21.246688 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 13 01:13:21.248444 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 13 01:13:21.253416 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 01:13:21.253500 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 01:13:21.284906 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 13 01:13:21.313639 augenrules[1534]: No rules
Mar 13 01:13:21.316709 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 13 01:13:21.317043 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 13 01:13:21.336753 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 13 01:13:21.468364 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 01:13:21.532750 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 13 01:13:21.535582 systemd[1]: Reached target time-set.target - System Time Set.
Mar 13 01:13:21.564216 systemd-networkd[1493]: lo: Link UP
Mar 13 01:13:21.565159 systemd-networkd[1493]: lo: Gained carrier
Mar 13 01:13:21.567679 systemd-networkd[1493]: Enumeration completed
Mar 13 01:13:21.567903 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 13 01:13:21.568470 systemd-networkd[1493]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 01:13:21.568584 systemd-networkd[1493]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 13 01:13:21.570076 systemd-networkd[1493]: eth0: Link UP
Mar 13 01:13:21.570480 systemd-networkd[1493]: eth0: Gained carrier
Mar 13 01:13:21.570606 systemd-networkd[1493]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 01:13:21.573542 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 13 01:13:21.576552 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 13 01:13:21.586335 systemd-networkd[1493]: eth0: DHCPv4 address 10.230.35.114/30, gateway 10.230.35.113 acquired from 10.230.35.113
Mar 13 01:13:21.587868 systemd-resolved[1494]: Positive Trust Anchors:
Mar 13 01:13:21.587885 systemd-resolved[1494]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 13 01:13:21.587967 systemd-resolved[1494]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 13 01:13:21.588377 systemd-timesyncd[1508]: Network configuration changed, trying to establish connection.
Mar 13 01:13:21.597047 systemd-resolved[1494]: Using system hostname 'srv-1hh7x.gb1.brightbox.com'.
Mar 13 01:13:21.599725 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 13 01:13:21.600671 systemd[1]: Reached target network.target - Network.
Mar 13 01:13:21.601444 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 13 01:13:21.602236 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 13 01:13:21.603073 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 13 01:13:21.603928 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 13 01:13:21.604746 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Mar 13 01:13:21.605779 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 13 01:13:21.606644 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 13 01:13:21.607448 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 13 01:13:21.608212 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 13 01:13:21.608281 systemd[1]: Reached target paths.target - Path Units.
Mar 13 01:13:21.608907 systemd[1]: Reached target timers.target - Timer Units.
Mar 13 01:13:21.611043 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 13 01:13:21.614138 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 13 01:13:21.619282 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 13 01:13:21.620996 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 13 01:13:21.621822 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 13 01:13:21.629214 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 13 01:13:21.630798 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 13 01:13:21.633884 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 13 01:13:21.635069 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 13 01:13:21.637709 systemd[1]: Reached target sockets.target - Socket Units.
Mar 13 01:13:21.638464 systemd[1]: Reached target basic.target - Basic System.
Mar 13 01:13:21.639243 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 13 01:13:21.639370 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 13 01:13:21.642388 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 13 01:13:21.645551 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 13 01:13:21.651546 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 13 01:13:21.658545 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 13 01:13:21.664491 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 13 01:13:21.672745 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 13 01:13:21.674120 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 13 01:13:21.681547 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Mar 13 01:13:21.691928 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 13 01:13:21.697343 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Mar 13 01:13:21.702923 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 13 01:13:21.708566 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 13 01:13:21.714735 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 13 01:13:21.732231 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 13 01:13:21.736089 jq[1560]: false
Mar 13 01:13:21.736335 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 13 01:13:21.737076 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 13 01:13:21.742609 systemd[1]: Starting update-engine.service - Update Engine...
Mar 13 01:13:21.748877 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 13 01:13:21.754022 extend-filesystems[1561]: Found /dev/vda6 Mar 13 01:13:21.762040 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Refreshing passwd entry cache Mar 13 01:13:21.762048 oslogin_cache_refresh[1562]: Refreshing passwd entry cache Mar 13 01:13:21.763377 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 13 01:13:21.765418 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 13 01:13:21.765870 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 13 01:13:21.787334 extend-filesystems[1561]: Found /dev/vda9 Mar 13 01:13:21.787334 extend-filesystems[1561]: Checking size of /dev/vda9 Mar 13 01:13:21.809635 systemd[1]: motdgen.service: Deactivated successfully. Mar 13 01:13:21.811408 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 13 01:13:21.813212 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 13 01:13:21.814164 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 13 01:13:21.820225 jq[1576]: true Mar 13 01:13:21.844293 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Failure getting users, quitting Mar 13 01:13:21.844293 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 13 01:13:21.844293 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Refreshing group entry cache Mar 13 01:13:21.840889 oslogin_cache_refresh[1562]: Failure getting users, quitting Mar 13 01:13:21.840914 oslogin_cache_refresh[1562]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Mar 13 01:13:21.840985 oslogin_cache_refresh[1562]: Refreshing group entry cache Mar 13 01:13:21.845047 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Failure getting groups, quitting Mar 13 01:13:21.845047 google_oslogin_nss_cache[1562]: oslogin_cache_refresh[1562]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 13 01:13:21.845040 oslogin_cache_refresh[1562]: Failure getting groups, quitting Mar 13 01:13:21.845056 oslogin_cache_refresh[1562]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 13 01:13:21.849754 (ntainerd)[1592]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 13 01:13:21.857138 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Mar 13 01:13:21.857531 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Mar 13 01:13:21.879510 extend-filesystems[1561]: Resized partition /dev/vda9 Mar 13 01:13:21.883291 tar[1588]: linux-amd64/LICENSE Mar 13 01:13:21.883291 tar[1588]: linux-amd64/helm Mar 13 01:13:21.890842 extend-filesystems[1606]: resize2fs 1.47.3 (8-Jul-2025) Mar 13 01:13:21.893944 dbus-daemon[1558]: [system] SELinux support is enabled Mar 13 01:13:21.894196 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 13 01:13:21.900846 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 13 01:13:21.900920 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 13 01:13:21.902574 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Mar 13 01:13:21.902633 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 13 01:13:21.910307 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Mar 13 01:13:21.910697 update_engine[1574]: I20260313 01:13:21.910562 1574 main.cc:92] Flatcar Update Engine starting Mar 13 01:13:21.917345 jq[1599]: true Mar 13 01:13:21.926530 dbus-daemon[1558]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1493 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 13 01:13:21.940607 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 13 01:13:21.947704 systemd[1]: Started update-engine.service - Update Engine. Mar 13 01:13:21.950550 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 13 01:13:21.953400 update_engine[1574]: I20260313 01:13:21.953326 1574 update_check_scheduler.cc:74] Next update check in 2m20s Mar 13 01:13:22.005923 systemd-logind[1572]: Watching system buttons on /dev/input/event3 (Power Button) Mar 13 01:13:22.006390 systemd-logind[1572]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 13 01:13:22.006818 systemd-logind[1572]: New seat seat0. Mar 13 01:13:22.008232 systemd[1]: Started systemd-logind.service - User Login Management. Mar 13 01:13:22.249535 bash[1627]: Updated "/home/core/.ssh/authorized_keys" Mar 13 01:13:22.258681 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 13 01:13:22.268746 systemd[1]: Starting sshkeys.service... 
Mar 13 01:13:22.281292 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Mar 13 01:13:22.300006 extend-filesystems[1606]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 13 01:13:22.300006 extend-filesystems[1606]: old_desc_blocks = 1, new_desc_blocks = 8 Mar 13 01:13:22.300006 extend-filesystems[1606]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Mar 13 01:13:22.307193 extend-filesystems[1561]: Resized filesystem in /dev/vda9 Mar 13 01:13:22.301308 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 13 01:13:22.302456 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 13 01:13:22.323416 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 13 01:13:22.328736 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 13 01:13:22.352302 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Mar 13 01:13:22.436429 locksmithd[1610]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 13 01:13:22.463610 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 13 01:13:22.465649 dbus-daemon[1558]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 13 01:13:22.467200 dbus-daemon[1558]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1609 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 13 01:13:22.477256 systemd[1]: Starting polkit.service - Authorization Manager... 
Mar 13 01:13:22.481859 containerd[1592]: time="2026-03-13T01:13:22Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 13 01:13:22.483784 containerd[1592]: time="2026-03-13T01:13:22.483742939Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Mar 13 01:13:22.543300 containerd[1592]: time="2026-03-13T01:13:22.539909228Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="27.275µs" Mar 13 01:13:22.544287 containerd[1592]: time="2026-03-13T01:13:22.543430166Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 13 01:13:22.544287 containerd[1592]: time="2026-03-13T01:13:22.543478800Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 13 01:13:22.544287 containerd[1592]: time="2026-03-13T01:13:22.543788475Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 13 01:13:22.544287 containerd[1592]: time="2026-03-13T01:13:22.543819383Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 13 01:13:22.544287 containerd[1592]: time="2026-03-13T01:13:22.543877203Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 13 01:13:22.544287 containerd[1592]: time="2026-03-13T01:13:22.544003087Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 13 01:13:22.544287 containerd[1592]: time="2026-03-13T01:13:22.544036551Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 13 
01:13:22.549938 containerd[1592]: time="2026-03-13T01:13:22.549132107Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 13 01:13:22.549938 containerd[1592]: time="2026-03-13T01:13:22.549190612Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 13 01:13:22.549938 containerd[1592]: time="2026-03-13T01:13:22.549214360Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 13 01:13:22.549938 containerd[1592]: time="2026-03-13T01:13:22.549228762Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 13 01:13:22.549938 containerd[1592]: time="2026-03-13T01:13:22.549384513Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 13 01:13:22.549938 containerd[1592]: time="2026-03-13T01:13:22.549820751Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 13 01:13:22.549938 containerd[1592]: time="2026-03-13T01:13:22.549878129Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 13 01:13:22.549938 containerd[1592]: time="2026-03-13T01:13:22.549897690Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 13 01:13:22.555160 containerd[1592]: time="2026-03-13T01:13:22.554306604Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 13 01:13:22.555160 
containerd[1592]: time="2026-03-13T01:13:22.554789150Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 13 01:13:22.555160 containerd[1592]: time="2026-03-13T01:13:22.554886679Z" level=info msg="metadata content store policy set" policy=shared Mar 13 01:13:22.561036 containerd[1592]: time="2026-03-13T01:13:22.561005001Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 13 01:13:22.561193 containerd[1592]: time="2026-03-13T01:13:22.561165279Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 13 01:13:22.561350 containerd[1592]: time="2026-03-13T01:13:22.561321098Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 13 01:13:22.561469 containerd[1592]: time="2026-03-13T01:13:22.561436310Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 13 01:13:22.565291 containerd[1592]: time="2026-03-13T01:13:22.564058811Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 13 01:13:22.565291 containerd[1592]: time="2026-03-13T01:13:22.564095084Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 13 01:13:22.565291 containerd[1592]: time="2026-03-13T01:13:22.564119032Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 13 01:13:22.565291 containerd[1592]: time="2026-03-13T01:13:22.564163914Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 13 01:13:22.565291 containerd[1592]: time="2026-03-13T01:13:22.564189201Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 13 01:13:22.565291 containerd[1592]: 
time="2026-03-13T01:13:22.564206560Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 13 01:13:22.565291 containerd[1592]: time="2026-03-13T01:13:22.564222811Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 13 01:13:22.565291 containerd[1592]: time="2026-03-13T01:13:22.564243060Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 13 01:13:22.565291 containerd[1592]: time="2026-03-13T01:13:22.564453073Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 13 01:13:22.565291 containerd[1592]: time="2026-03-13T01:13:22.564493178Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 13 01:13:22.565291 containerd[1592]: time="2026-03-13T01:13:22.564520998Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 13 01:13:22.565291 containerd[1592]: time="2026-03-13T01:13:22.564541716Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 13 01:13:22.565291 containerd[1592]: time="2026-03-13T01:13:22.564560121Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 13 01:13:22.565291 containerd[1592]: time="2026-03-13T01:13:22.564578736Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 13 01:13:22.565746 containerd[1592]: time="2026-03-13T01:13:22.564610582Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 13 01:13:22.565746 containerd[1592]: time="2026-03-13T01:13:22.564629650Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 13 01:13:22.565746 containerd[1592]: time="2026-03-13T01:13:22.564648540Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 13 01:13:22.565746 containerd[1592]: time="2026-03-13T01:13:22.564667124Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 13 01:13:22.565746 containerd[1592]: time="2026-03-13T01:13:22.564684392Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 13 01:13:22.565746 containerd[1592]: time="2026-03-13T01:13:22.564780603Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 13 01:13:22.565746 containerd[1592]: time="2026-03-13T01:13:22.564804074Z" level=info msg="Start snapshots syncer" Mar 13 01:13:22.565746 containerd[1592]: time="2026-03-13T01:13:22.564843926Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 13 01:13:22.571293 containerd[1592]: time="2026-03-13T01:13:22.569312177Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 13 01:13:22.571293 containerd[1592]: time="2026-03-13T01:13:22.569421776Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 13 01:13:22.571590 containerd[1592]: time="2026-03-13T01:13:22.569519686Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 13 01:13:22.571590 containerd[1592]: time="2026-03-13T01:13:22.569708287Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 13 01:13:22.571590 containerd[1592]: time="2026-03-13T01:13:22.569740207Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 13 01:13:22.571590 containerd[1592]: time="2026-03-13T01:13:22.569758299Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 13 01:13:22.571590 containerd[1592]: time="2026-03-13T01:13:22.569788707Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 13 01:13:22.571590 containerd[1592]: time="2026-03-13T01:13:22.569829099Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 13 01:13:22.571590 containerd[1592]: time="2026-03-13T01:13:22.569850555Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 13 01:13:22.571590 containerd[1592]: time="2026-03-13T01:13:22.569881007Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 13 01:13:22.571590 containerd[1592]: time="2026-03-13T01:13:22.569922428Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 13 01:13:22.571590 containerd[1592]: time="2026-03-13T01:13:22.569944859Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 13 01:13:22.571590 containerd[1592]: time="2026-03-13T01:13:22.569963085Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 13 01:13:22.571590 containerd[1592]: time="2026-03-13T01:13:22.570034752Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 13 01:13:22.571590 containerd[1592]: time="2026-03-13T01:13:22.570064988Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 13 01:13:22.571590 containerd[1592]: time="2026-03-13T01:13:22.570081049Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 13 01:13:22.572030 containerd[1592]: time="2026-03-13T01:13:22.570096589Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 13 01:13:22.572030 containerd[1592]: time="2026-03-13T01:13:22.570110588Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 13 01:13:22.572030 containerd[1592]: time="2026-03-13T01:13:22.570132022Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 13 01:13:22.572030 containerd[1592]: time="2026-03-13T01:13:22.570188817Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 13 01:13:22.572030 containerd[1592]: time="2026-03-13T01:13:22.570230580Z" level=info msg="runtime interface created" Mar 13 01:13:22.572030 containerd[1592]: time="2026-03-13T01:13:22.570242568Z" level=info msg="created NRI interface" Mar 13 01:13:22.572030 containerd[1592]: time="2026-03-13T01:13:22.570256029Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 13 01:13:22.572030 containerd[1592]: time="2026-03-13T01:13:22.570296577Z" level=info msg="Connect containerd service" Mar 13 01:13:22.572030 containerd[1592]: time="2026-03-13T01:13:22.570334095Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 13 01:13:22.576073 
containerd[1592]: time="2026-03-13T01:13:22.575674647Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 13 01:13:22.719888 polkitd[1643]: Started polkitd version 126 Mar 13 01:13:22.731764 polkitd[1643]: Loading rules from directory /etc/polkit-1/rules.d Mar 13 01:13:22.738552 systemd[1]: Started polkit.service - Authorization Manager. Mar 13 01:13:22.735136 polkitd[1643]: Loading rules from directory /run/polkit-1/rules.d Mar 13 01:13:22.735233 polkitd[1643]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Mar 13 01:13:22.735627 polkitd[1643]: Loading rules from directory /usr/local/share/polkit-1/rules.d Mar 13 01:13:22.735667 polkitd[1643]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Mar 13 01:13:22.735730 polkitd[1643]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 13 01:13:22.738199 polkitd[1643]: Finished loading, compiling and executing 2 rules Mar 13 01:13:22.741707 dbus-daemon[1558]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 13 01:13:22.742643 polkitd[1643]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 13 01:13:22.775798 systemd-hostnamed[1609]: Hostname set to (static) Mar 13 01:13:22.836209 containerd[1592]: time="2026-03-13T01:13:22.835937155Z" level=info msg="Start subscribing containerd event" Mar 13 01:13:22.836474 containerd[1592]: time="2026-03-13T01:13:22.836389944Z" level=info msg="Start recovering state" Mar 13 01:13:22.836843 containerd[1592]: time="2026-03-13T01:13:22.836811020Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Mar 13 01:13:22.837407 containerd[1592]: time="2026-03-13T01:13:22.837380238Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 13 01:13:22.837538 containerd[1592]: time="2026-03-13T01:13:22.837090302Z" level=info msg="Start event monitor" Mar 13 01:13:22.837921 containerd[1592]: time="2026-03-13T01:13:22.837895475Z" level=info msg="Start cni network conf syncer for default" Mar 13 01:13:22.838048 containerd[1592]: time="2026-03-13T01:13:22.838025493Z" level=info msg="Start streaming server" Mar 13 01:13:22.838170 containerd[1592]: time="2026-03-13T01:13:22.838136612Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 13 01:13:22.838262 containerd[1592]: time="2026-03-13T01:13:22.838240876Z" level=info msg="runtime interface starting up..." Mar 13 01:13:22.839403 containerd[1592]: time="2026-03-13T01:13:22.839095149Z" level=info msg="starting plugins..." Mar 13 01:13:22.839403 containerd[1592]: time="2026-03-13T01:13:22.839153713Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 13 01:13:22.839403 containerd[1592]: time="2026-03-13T01:13:22.839365170Z" level=info msg="containerd successfully booted in 0.359680s" Mar 13 01:13:22.839514 systemd[1]: Started containerd.service - containerd container runtime. Mar 13 01:13:22.876818 sshd_keygen[1605]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 13 01:13:22.907875 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 13 01:13:22.914673 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 13 01:13:22.932928 systemd[1]: issuegen.service: Deactivated successfully. Mar 13 01:13:22.933294 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 13 01:13:22.937580 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 13 01:13:22.962346 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Mar 13 01:13:22.967458 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 13 01:13:22.971915 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 13 01:13:22.980131 systemd[1]: Reached target getty.target - Login Prompts. Mar 13 01:13:23.004667 tar[1588]: linux-amd64/README.md Mar 13 01:13:23.029470 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 13 01:13:23.193638 systemd-networkd[1493]: eth0: Gained IPv6LL Mar 13 01:13:23.194754 systemd-timesyncd[1508]: Network configuration changed, trying to establish connection. Mar 13 01:13:23.198003 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 13 01:13:23.200854 systemd[1]: Reached target network-online.target - Network is Online. Mar 13 01:13:23.205604 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 01:13:23.207964 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 13 01:13:23.246318 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Mar 13 01:13:23.250177 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 13 01:13:23.409524 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Mar 13 01:13:24.231807 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 01:13:24.243934 (kubelet)[1704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 01:13:24.311064 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 13 01:13:24.314171 systemd[1]: Started sshd@0-10.230.35.114:22-20.161.92.111:56492.service - OpenSSH per-connection server daemon (20.161.92.111:56492). Mar 13 01:13:24.699289 systemd-timesyncd[1508]: Network configuration changed, trying to establish connection. 
Mar 13 01:13:24.699946 systemd-networkd[1493]: eth0: Ignoring DHCPv6 address 2a02:1348:179:88dc:24:19ff:fee6:2372/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:88dc:24:19ff:fee6:2372/64 assigned by NDisc. Mar 13 01:13:24.699959 systemd-networkd[1493]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Mar 13 01:13:24.867730 sshd[1706]: Accepted publickey for core from 20.161.92.111 port 56492 ssh2: RSA SHA256:hm429P+TX+Ex43UNU1B3y8+MqFp75xtA6UPwehKFPZY Mar 13 01:13:24.871003 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 01:13:24.898798 systemd-logind[1572]: New session 1 of user core. Mar 13 01:13:24.901652 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 13 01:13:24.907793 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 13 01:13:24.919133 kubelet[1704]: E0313 01:13:24.919031 1704 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 01:13:24.921593 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 01:13:24.921853 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 01:13:24.923065 systemd[1]: kubelet.service: Consumed 1.101s CPU time, 267.5M memory peak. Mar 13 01:13:24.941555 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 13 01:13:24.946174 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Mar 13 01:13:24.964473 (systemd)[1719]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 13 01:13:24.969228 systemd-logind[1572]: New session c1 of user core. Mar 13 01:13:25.171049 systemd[1719]: Queued start job for default target default.target. Mar 13 01:13:25.191232 systemd[1719]: Created slice app.slice - User Application Slice. Mar 13 01:13:25.191445 systemd[1719]: Reached target paths.target - Paths. Mar 13 01:13:25.191658 systemd[1719]: Reached target timers.target - Timers. Mar 13 01:13:25.193778 systemd[1719]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 13 01:13:25.210159 systemd[1719]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 13 01:13:25.210538 systemd[1719]: Reached target sockets.target - Sockets. Mar 13 01:13:25.210787 systemd[1719]: Reached target basic.target - Basic System. Mar 13 01:13:25.211027 systemd[1719]: Reached target default.target - Main User Target. Mar 13 01:13:25.211099 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 13 01:13:25.211333 systemd[1719]: Startup finished in 231ms. Mar 13 01:13:25.219616 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 13 01:13:25.263304 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Mar 13 01:13:25.421292 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Mar 13 01:13:25.511405 systemd[1]: Started sshd@1-10.230.35.114:22-20.161.92.111:56502.service - OpenSSH per-connection server daemon (20.161.92.111:56502). Mar 13 01:13:25.946055 systemd-timesyncd[1508]: Network configuration changed, trying to establish connection. Mar 13 01:13:26.016311 sshd[1732]: Accepted publickey for core from 20.161.92.111 port 56502 ssh2: RSA SHA256:hm429P+TX+Ex43UNU1B3y8+MqFp75xtA6UPwehKFPZY Mar 13 01:13:26.017802 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 01:13:26.027135 systemd-logind[1572]: New session 2 of user core. 
Mar 13 01:13:26.036692 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 13 01:13:26.289133 sshd[1735]: Connection closed by 20.161.92.111 port 56502 Mar 13 01:13:26.290533 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Mar 13 01:13:26.297755 systemd[1]: sshd@1-10.230.35.114:22-20.161.92.111:56502.service: Deactivated successfully. Mar 13 01:13:26.301752 systemd[1]: session-2.scope: Deactivated successfully. Mar 13 01:13:26.303679 systemd-logind[1572]: Session 2 logged out. Waiting for processes to exit. Mar 13 01:13:26.306751 systemd-logind[1572]: Removed session 2. Mar 13 01:13:26.390782 systemd[1]: Started sshd@2-10.230.35.114:22-20.161.92.111:56514.service - OpenSSH per-connection server daemon (20.161.92.111:56514). Mar 13 01:13:26.889317 sshd[1741]: Accepted publickey for core from 20.161.92.111 port 56514 ssh2: RSA SHA256:hm429P+TX+Ex43UNU1B3y8+MqFp75xtA6UPwehKFPZY Mar 13 01:13:26.890844 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 01:13:26.898794 systemd-logind[1572]: New session 3 of user core. Mar 13 01:13:26.914102 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 13 01:13:27.161942 sshd[1744]: Connection closed by 20.161.92.111 port 56514 Mar 13 01:13:27.162925 sshd-session[1741]: pam_unix(sshd:session): session closed for user core Mar 13 01:13:27.169672 systemd[1]: sshd@2-10.230.35.114:22-20.161.92.111:56514.service: Deactivated successfully. Mar 13 01:13:27.169687 systemd-logind[1572]: Session 3 logged out. Waiting for processes to exit. Mar 13 01:13:27.172643 systemd[1]: session-3.scope: Deactivated successfully. Mar 13 01:13:27.175920 systemd-logind[1572]: Removed session 3. Mar 13 01:13:28.051483 login[1681]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 13 01:13:28.068208 systemd-logind[1572]: New session 4 of user core. 
Mar 13 01:13:28.075177 login[1680]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Mar 13 01:13:28.076062 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 13 01:13:28.086957 systemd-logind[1572]: New session 5 of user core.
Mar 13 01:13:28.095553 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 13 01:13:29.276440 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Mar 13 01:13:29.286086 coreos-metadata[1557]: Mar 13 01:13:29.286 WARN failed to locate config-drive, using the metadata service API instead
Mar 13 01:13:29.312152 coreos-metadata[1557]: Mar 13 01:13:29.312 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Mar 13 01:13:29.318214 coreos-metadata[1557]: Mar 13 01:13:29.318 INFO Fetch failed with 404: resource not found
Mar 13 01:13:29.318214 coreos-metadata[1557]: Mar 13 01:13:29.318 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Mar 13 01:13:29.319044 coreos-metadata[1557]: Mar 13 01:13:29.319 INFO Fetch successful
Mar 13 01:13:29.319250 coreos-metadata[1557]: Mar 13 01:13:29.319 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Mar 13 01:13:29.332536 coreos-metadata[1557]: Mar 13 01:13:29.332 INFO Fetch successful
Mar 13 01:13:29.332536 coreos-metadata[1557]: Mar 13 01:13:29.332 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Mar 13 01:13:29.348887 coreos-metadata[1557]: Mar 13 01:13:29.348 INFO Fetch successful
Mar 13 01:13:29.348887 coreos-metadata[1557]: Mar 13 01:13:29.348 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Mar 13 01:13:29.361223 coreos-metadata[1557]: Mar 13 01:13:29.361 INFO Fetch successful
Mar 13 01:13:29.361223 coreos-metadata[1557]: Mar 13 01:13:29.361 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Mar 13 01:13:29.378505 coreos-metadata[1557]: Mar 13 01:13:29.378 INFO Fetch successful
Mar 13 01:13:29.417310 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 13 01:13:29.418201 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 13 01:13:29.433305 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Mar 13 01:13:29.445622 coreos-metadata[1639]: Mar 13 01:13:29.445 WARN failed to locate config-drive, using the metadata service API instead
Mar 13 01:13:29.469259 coreos-metadata[1639]: Mar 13 01:13:29.469 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Mar 13 01:13:29.493345 coreos-metadata[1639]: Mar 13 01:13:29.493 INFO Fetch successful
Mar 13 01:13:29.493543 coreos-metadata[1639]: Mar 13 01:13:29.493 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 13 01:13:29.519710 coreos-metadata[1639]: Mar 13 01:13:29.519 INFO Fetch successful
Mar 13 01:13:29.521985 unknown[1639]: wrote ssh authorized keys file for user: core
Mar 13 01:13:29.546943 update-ssh-keys[1784]: Updated "/home/core/.ssh/authorized_keys"
Mar 13 01:13:29.549033 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 13 01:13:29.551445 systemd[1]: Finished sshkeys.service.
Mar 13 01:13:29.555786 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 13 01:13:29.556372 systemd[1]: Startup finished in 3.628s (kernel) + 18.140s (initrd) + 11.822s (userspace) = 33.591s.
Mar 13 01:13:35.172544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 13 01:13:35.176561 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 01:13:35.383698 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 01:13:35.395045 (kubelet)[1795]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 13 01:13:35.462180 kubelet[1795]: E0313 01:13:35.462008 1795 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 13 01:13:35.466630 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 13 01:13:35.466898 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 13 01:13:35.467742 systemd[1]: kubelet.service: Consumed 237ms CPU time, 108.4M memory peak.
Mar 13 01:13:37.264398 systemd[1]: Started sshd@3-10.230.35.114:22-20.161.92.111:48028.service - OpenSSH per-connection server daemon (20.161.92.111:48028).
Mar 13 01:13:37.757141 sshd[1803]: Accepted publickey for core from 20.161.92.111 port 48028 ssh2: RSA SHA256:hm429P+TX+Ex43UNU1B3y8+MqFp75xtA6UPwehKFPZY
Mar 13 01:13:37.758747 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 01:13:37.765681 systemd-logind[1572]: New session 6 of user core.
Mar 13 01:13:37.775494 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 13 01:13:38.033446 sshd[1806]: Connection closed by 20.161.92.111 port 48028
Mar 13 01:13:38.034170 sshd-session[1803]: pam_unix(sshd:session): session closed for user core
Mar 13 01:13:38.039763 systemd-logind[1572]: Session 6 logged out. Waiting for processes to exit.
Mar 13 01:13:38.040179 systemd[1]: sshd@3-10.230.35.114:22-20.161.92.111:48028.service: Deactivated successfully.
Mar 13 01:13:38.042582 systemd[1]: session-6.scope: Deactivated successfully.
Mar 13 01:13:38.044790 systemd-logind[1572]: Removed session 6.
Mar 13 01:13:38.135374 systemd[1]: Started sshd@4-10.230.35.114:22-20.161.92.111:48042.service - OpenSSH per-connection server daemon (20.161.92.111:48042).
Mar 13 01:13:38.638598 sshd[1812]: Accepted publickey for core from 20.161.92.111 port 48042 ssh2: RSA SHA256:hm429P+TX+Ex43UNU1B3y8+MqFp75xtA6UPwehKFPZY
Mar 13 01:13:38.640223 sshd-session[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 01:13:38.647965 systemd-logind[1572]: New session 7 of user core.
Mar 13 01:13:38.657599 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 13 01:13:38.905292 sshd[1815]: Connection closed by 20.161.92.111 port 48042
Mar 13 01:13:38.904225 sshd-session[1812]: pam_unix(sshd:session): session closed for user core
Mar 13 01:13:38.909617 systemd[1]: sshd@4-10.230.35.114:22-20.161.92.111:48042.service: Deactivated successfully.
Mar 13 01:13:38.912035 systemd[1]: session-7.scope: Deactivated successfully.
Mar 13 01:13:38.913340 systemd-logind[1572]: Session 7 logged out. Waiting for processes to exit.
Mar 13 01:13:38.915535 systemd-logind[1572]: Removed session 7.
Mar 13 01:13:39.007435 systemd[1]: Started sshd@5-10.230.35.114:22-20.161.92.111:48048.service - OpenSSH per-connection server daemon (20.161.92.111:48048).
Mar 13 01:13:39.511303 sshd[1821]: Accepted publickey for core from 20.161.92.111 port 48048 ssh2: RSA SHA256:hm429P+TX+Ex43UNU1B3y8+MqFp75xtA6UPwehKFPZY
Mar 13 01:13:39.512914 sshd-session[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 01:13:39.520944 systemd-logind[1572]: New session 8 of user core.
Mar 13 01:13:39.532528 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 13 01:13:39.781756 sshd[1824]: Connection closed by 20.161.92.111 port 48048
Mar 13 01:13:39.782576 sshd-session[1821]: pam_unix(sshd:session): session closed for user core
Mar 13 01:13:39.788106 systemd[1]: sshd@5-10.230.35.114:22-20.161.92.111:48048.service: Deactivated successfully.
Mar 13 01:13:39.790642 systemd[1]: session-8.scope: Deactivated successfully.
Mar 13 01:13:39.791901 systemd-logind[1572]: Session 8 logged out. Waiting for processes to exit.
Mar 13 01:13:39.794020 systemd-logind[1572]: Removed session 8.
Mar 13 01:13:39.884567 systemd[1]: Started sshd@6-10.230.35.114:22-20.161.92.111:48056.service - OpenSSH per-connection server daemon (20.161.92.111:48056).
Mar 13 01:13:40.376584 sshd[1830]: Accepted publickey for core from 20.161.92.111 port 48056 ssh2: RSA SHA256:hm429P+TX+Ex43UNU1B3y8+MqFp75xtA6UPwehKFPZY
Mar 13 01:13:40.378103 sshd-session[1830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 01:13:40.384922 systemd-logind[1572]: New session 9 of user core.
Mar 13 01:13:40.389463 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 13 01:13:40.572842 sudo[1834]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 13 01:13:40.573383 sudo[1834]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 13 01:13:40.587955 sudo[1834]: pam_unix(sudo:session): session closed for user root
Mar 13 01:13:40.676481 sshd[1833]: Connection closed by 20.161.92.111 port 48056
Mar 13 01:13:40.676863 sshd-session[1830]: pam_unix(sshd:session): session closed for user core
Mar 13 01:13:40.683798 systemd-logind[1572]: Session 9 logged out. Waiting for processes to exit.
Mar 13 01:13:40.685131 systemd[1]: sshd@6-10.230.35.114:22-20.161.92.111:48056.service: Deactivated successfully.
Mar 13 01:13:40.688127 systemd[1]: session-9.scope: Deactivated successfully.
Mar 13 01:13:40.691037 systemd-logind[1572]: Removed session 9.
Mar 13 01:13:40.780832 systemd[1]: Started sshd@7-10.230.35.114:22-20.161.92.111:54248.service - OpenSSH per-connection server daemon (20.161.92.111:54248).
Mar 13 01:13:41.304171 sshd[1840]: Accepted publickey for core from 20.161.92.111 port 54248 ssh2: RSA SHA256:hm429P+TX+Ex43UNU1B3y8+MqFp75xtA6UPwehKFPZY
Mar 13 01:13:41.306032 sshd-session[1840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 01:13:41.312654 systemd-logind[1572]: New session 10 of user core.
Mar 13 01:13:41.336537 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 13 01:13:41.493116 sudo[1845]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 13 01:13:41.493785 sudo[1845]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 13 01:13:41.502399 sudo[1845]: pam_unix(sudo:session): session closed for user root
Mar 13 01:13:41.510392 sudo[1844]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 13 01:13:41.510823 sudo[1844]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 13 01:13:41.524726 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 13 01:13:41.573313 augenrules[1867]: No rules
Mar 13 01:13:41.574648 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 13 01:13:41.575552 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 13 01:13:41.577289 sudo[1844]: pam_unix(sudo:session): session closed for user root
Mar 13 01:13:41.667637 sshd[1843]: Connection closed by 20.161.92.111 port 54248
Mar 13 01:13:41.668583 sshd-session[1840]: pam_unix(sshd:session): session closed for user core
Mar 13 01:13:41.674437 systemd[1]: sshd@7-10.230.35.114:22-20.161.92.111:54248.service: Deactivated successfully.
Mar 13 01:13:41.676866 systemd[1]: session-10.scope: Deactivated successfully.
Mar 13 01:13:41.678110 systemd-logind[1572]: Session 10 logged out. Waiting for processes to exit.
Mar 13 01:13:41.680133 systemd-logind[1572]: Removed session 10.
Mar 13 01:13:41.771934 systemd[1]: Started sshd@8-10.230.35.114:22-20.161.92.111:54254.service - OpenSSH per-connection server daemon (20.161.92.111:54254).
Mar 13 01:13:42.289452 sshd[1876]: Accepted publickey for core from 20.161.92.111 port 54254 ssh2: RSA SHA256:hm429P+TX+Ex43UNU1B3y8+MqFp75xtA6UPwehKFPZY
Mar 13 01:13:42.291415 sshd-session[1876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 01:13:42.300968 systemd-logind[1572]: New session 11 of user core.
Mar 13 01:13:42.307505 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 13 01:13:42.479786 sudo[1880]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 13 01:13:42.480952 sudo[1880]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 13 01:13:43.003566 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 13 01:13:43.023884 (dockerd)[1898]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 13 01:13:43.400028 dockerd[1898]: time="2026-03-13T01:13:43.399856684Z" level=info msg="Starting up"
Mar 13 01:13:43.403012 dockerd[1898]: time="2026-03-13T01:13:43.402980784Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 13 01:13:43.421869 dockerd[1898]: time="2026-03-13T01:13:43.421779013Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Mar 13 01:13:43.444048 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2810675324-merged.mount: Deactivated successfully.
Mar 13 01:13:43.474299 dockerd[1898]: time="2026-03-13T01:13:43.474010161Z" level=info msg="Loading containers: start."
Mar 13 01:13:43.489346 kernel: Initializing XFRM netlink socket
Mar 13 01:13:43.792624 systemd-timesyncd[1508]: Network configuration changed, trying to establish connection.
Mar 13 01:13:43.852031 systemd-networkd[1493]: docker0: Link UP
Mar 13 01:13:43.856330 dockerd[1898]: time="2026-03-13T01:13:43.856225521Z" level=info msg="Loading containers: done."
Mar 13 01:13:43.879303 dockerd[1898]: time="2026-03-13T01:13:43.875074880Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 13 01:13:43.879303 dockerd[1898]: time="2026-03-13T01:13:43.875169360Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Mar 13 01:13:43.879303 dockerd[1898]: time="2026-03-13T01:13:43.877348668Z" level=info msg="Initializing buildkit"
Mar 13 01:13:43.878029 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3539014238-merged.mount: Deactivated successfully.
Mar 13 01:13:43.907351 dockerd[1898]: time="2026-03-13T01:13:43.907313031Z" level=info msg="Completed buildkit initialization"
Mar 13 01:13:43.919369 dockerd[1898]: time="2026-03-13T01:13:43.918475995Z" level=info msg="Daemon has completed initialization"
Mar 13 01:13:43.919369 dockerd[1898]: time="2026-03-13T01:13:43.918606663Z" level=info msg="API listen on /run/docker.sock"
Mar 13 01:13:43.919841 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 13 01:13:44.582380 containerd[1592]: time="2026-03-13T01:13:44.582224094Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 13 01:13:45.316594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1800581667.mount: Deactivated successfully.
Mar 13 01:13:45.717420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 13 01:13:45.721487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 01:13:45.809044 systemd-timesyncd[1508]: Contacted time server [2a03:b0c0:1:d0::b1d:6001]:123 (2.flatcar.pool.ntp.org).
Mar 13 01:13:45.809172 systemd-timesyncd[1508]: Initial clock synchronization to Fri 2026-03-13 01:13:45.618943 UTC.
Mar 13 01:13:45.964476 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 01:13:45.976864 (kubelet)[2166]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 13 01:13:46.065697 kubelet[2166]: E0313 01:13:46.065578 2166 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 13 01:13:46.070072 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 13 01:13:46.071243 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 13 01:13:46.072018 systemd[1]: kubelet.service: Consumed 218ms CPU time, 108.7M memory peak.
Mar 13 01:13:47.235167 containerd[1592]: time="2026-03-13T01:13:47.235060531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 01:13:47.237025 containerd[1592]: time="2026-03-13T01:13:47.236687493Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116194"
Mar 13 01:13:47.237817 containerd[1592]: time="2026-03-13T01:13:47.237775438Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 01:13:47.242277 containerd[1592]: time="2026-03-13T01:13:47.242224218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 01:13:47.243656 containerd[1592]: time="2026-03-13T01:13:47.243605379Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 2.66120499s"
Mar 13 01:13:47.243727 containerd[1592]: time="2026-03-13T01:13:47.243700291Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\""
Mar 13 01:13:47.244636 containerd[1592]: time="2026-03-13T01:13:47.244586010Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 13 01:13:49.385598 containerd[1592]: time="2026-03-13T01:13:49.384372635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 01:13:49.385598 containerd[1592]: time="2026-03-13T01:13:49.385556790Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021818"
Mar 13 01:13:49.386284 containerd[1592]: time="2026-03-13T01:13:49.386215918Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 01:13:49.389627 containerd[1592]: time="2026-03-13T01:13:49.389593363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 01:13:49.391191 containerd[1592]: time="2026-03-13T01:13:49.391127531Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 2.14650284s"
Mar 13 01:13:49.391318 containerd[1592]: time="2026-03-13T01:13:49.391190231Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\""
Mar 13 01:13:49.391983 containerd[1592]: time="2026-03-13T01:13:49.391951068Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 13 01:13:51.018300 containerd[1592]: time="2026-03-13T01:13:51.018091750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 01:13:51.019737 containerd[1592]: time="2026-03-13T01:13:51.019673195Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162754"
Mar 13 01:13:51.020877 containerd[1592]: time="2026-03-13T01:13:51.020841104Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 01:13:51.025474 containerd[1592]: time="2026-03-13T01:13:51.025368516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 01:13:51.026977 containerd[1592]: time="2026-03-13T01:13:51.026846042Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 1.6348488s"
Mar 13 01:13:51.026977 containerd[1592]: time="2026-03-13T01:13:51.026933697Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\""
Mar 13 01:13:51.028575 containerd[1592]: time="2026-03-13T01:13:51.028502851Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 13 01:13:52.895582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount95549957.mount: Deactivated successfully.
Mar 13 01:13:53.686442 containerd[1592]: time="2026-03-13T01:13:53.686258463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 01:13:53.688648 containerd[1592]: time="2026-03-13T01:13:53.688605158Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828655"
Mar 13 01:13:53.689560 containerd[1592]: time="2026-03-13T01:13:53.689522182Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 01:13:53.693307 containerd[1592]: time="2026-03-13T01:13:53.693227786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 01:13:53.695188 containerd[1592]: time="2026-03-13T01:13:53.695103162Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 2.666558837s"
Mar 13 01:13:53.695188 containerd[1592]: time="2026-03-13T01:13:53.695150369Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\""
Mar 13 01:13:53.695953 containerd[1592]: time="2026-03-13T01:13:53.695923335Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 13 01:13:54.257514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2761691192.mount: Deactivated successfully.
Mar 13 01:13:54.722831 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 13 01:13:55.922301 containerd[1592]: time="2026-03-13T01:13:55.922198705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 01:13:55.923633 containerd[1592]: time="2026-03-13T01:13:55.923573647Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246"
Mar 13 01:13:55.924783 containerd[1592]: time="2026-03-13T01:13:55.924693107Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 01:13:55.929057 containerd[1592]: time="2026-03-13T01:13:55.928632319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 01:13:55.930480 containerd[1592]: time="2026-03-13T01:13:55.930007301Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.234044652s"
Mar 13 01:13:55.930480 containerd[1592]: time="2026-03-13T01:13:55.930049139Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Mar 13 01:13:55.931104 containerd[1592]: time="2026-03-13T01:13:55.931063472Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 13 01:13:56.259574 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 13 01:13:56.262597 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 01:13:56.506480 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 01:13:56.519715 (kubelet)[2265]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 13 01:13:56.656856 kubelet[2265]: E0313 01:13:56.656766 2265 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 13 01:13:56.659914 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 13 01:13:56.660174 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 13 01:13:56.660957 systemd[1]: kubelet.service: Consumed 242ms CPU time, 109.5M memory peak.
Mar 13 01:13:56.797090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3963204935.mount: Deactivated successfully.
Mar 13 01:13:56.806582 containerd[1592]: time="2026-03-13T01:13:56.806316989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 13 01:13:56.807246 containerd[1592]: time="2026-03-13T01:13:56.807215956Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Mar 13 01:13:56.808050 containerd[1592]: time="2026-03-13T01:13:56.807984568Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 13 01:13:56.811924 containerd[1592]: time="2026-03-13T01:13:56.811859744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 13 01:13:56.812937 containerd[1592]: time="2026-03-13T01:13:56.812780275Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 881.582243ms"
Mar 13 01:13:56.812937 containerd[1592]: time="2026-03-13T01:13:56.812821400Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Mar 13 01:13:56.813387 containerd[1592]: time="2026-03-13T01:13:56.813337705Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 13 01:13:57.425909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3536524112.mount: Deactivated successfully.
Mar 13 01:14:01.122942 containerd[1592]: time="2026-03-13T01:14:01.122818327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 01:14:01.124154 containerd[1592]: time="2026-03-13T01:14:01.124002793Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718848"
Mar 13 01:14:01.126413 containerd[1592]: time="2026-03-13T01:14:01.125494335Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 01:14:01.129242 containerd[1592]: time="2026-03-13T01:14:01.129171408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 01:14:01.130881 containerd[1592]: time="2026-03-13T01:14:01.130684232Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 4.317295272s"
Mar 13 01:14:01.130881 containerd[1592]: time="2026-03-13T01:14:01.130734880Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Mar 13 01:14:05.374235 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 01:14:05.375048 systemd[1]: kubelet.service: Consumed 242ms CPU time, 109.5M memory peak.
Mar 13 01:14:05.378259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 01:14:05.413468 systemd[1]: Reload requested from client PID 2367 ('systemctl') (unit session-11.scope)...
Mar 13 01:14:05.413511 systemd[1]: Reloading...
Mar 13 01:14:05.669301 zram_generator::config[2412]: No configuration found.
Mar 13 01:14:06.025583 systemd[1]: Reloading finished in 611 ms.
Mar 13 01:14:06.104949 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 13 01:14:06.105348 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 13 01:14:06.105974 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 01:14:06.106152 systemd[1]: kubelet.service: Consumed 148ms CPU time, 97.9M memory peak.
Mar 13 01:14:06.108524 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 01:14:06.303225 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 01:14:06.317860 (kubelet)[2479]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 13 01:14:06.424568 kubelet[2479]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 01:14:06.424568 kubelet[2479]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 13 01:14:06.424568 kubelet[2479]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 01:14:06.425133 kubelet[2479]: I0313 01:14:06.424637 2479 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 13 01:14:06.977544 update_engine[1574]: I20260313 01:14:06.977416 1574 update_attempter.cc:509] Updating boot flags...
Mar 13 01:14:06.987290 kubelet[2479]: I0313 01:14:06.986898 2479 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 13 01:14:06.987290 kubelet[2479]: I0313 01:14:06.986942 2479 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 13 01:14:06.987290 kubelet[2479]: I0313 01:14:06.987254 2479 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 13 01:14:07.047833 kubelet[2479]: E0313 01:14:07.047247 2479 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.35.114:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.35.114:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 13 01:14:07.055379 kubelet[2479]: I0313 01:14:07.053526 2479 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 13 01:14:07.082384 kubelet[2479]: I0313 01:14:07.082353 2479 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 13 01:14:07.110416 kubelet[2479]: I0313 01:14:07.109549 2479 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 13 01:14:07.121296 kubelet[2479]: I0313 01:14:07.120427 2479 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 13 01:14:07.126955 kubelet[2479]: I0313 01:14:07.120482 2479 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-1hh7x.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 13 01:14:07.127898 kubelet[2479]: I0313 01:14:07.127872 2479 topology_manager.go:138] "Creating topology manager with none policy"
Mar 13 01:14:07.128010 kubelet[2479]: I0313 01:14:07.127992 2479 container_manager_linux.go:303] "Creating device plugin manager"
Mar 13 01:14:07.128346 kubelet[2479]: I0313 01:14:07.128321 2479 state_mem.go:36] "Initialized new in-memory state store"
Mar 13 01:14:07.135292 kubelet[2479]: I0313 01:14:07.135170 2479 kubelet.go:480] "Attempting to sync node with API server"
Mar 13 01:14:07.135292 kubelet[2479]: I0313 01:14:07.135204 2479 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 13 01:14:07.135622 kubelet[2479]: I0313 01:14:07.135564 2479 kubelet.go:386] "Adding apiserver pod source"
Mar 13 01:14:07.146772 kubelet[2479]: I0313 01:14:07.145664 2479 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 13 01:14:07.151432 kubelet[2479]: E0313 01:14:07.151388 2479 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.35.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.35.114:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 13 01:14:07.162126 kubelet[2479]: E0313 01:14:07.162071 2479 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.35.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-1hh7x.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.35.114:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 13 01:14:07.176594 kubelet[2479]: I0313 01:14:07.176559 2479 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 13 01:14:07.183120 kubelet[2479]: I0313 01:14:07.182721 2479 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 13 01:14:07.184072 kubelet[2479]: W0313 01:14:07.184038 2479 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 13 01:14:07.266595 kubelet[2479]: I0313 01:14:07.266566 2479 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 13 01:14:07.270288 kubelet[2479]: I0313 01:14:07.268808 2479 server.go:1289] "Started kubelet"
Mar 13 01:14:07.275761 kubelet[2479]: I0313 01:14:07.275517 2479 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 13 01:14:07.279442 kubelet[2479]: I0313 01:14:07.278973 2479 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 13 01:14:07.289050 kubelet[2479]: I0313 01:14:07.289025 2479 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 13 01:14:07.294112 kubelet[2479]: I0313 01:14:07.294080 2479 server.go:317] "Adding debug handlers to kubelet server"
Mar 13 01:14:07.295295 kubelet[2479]: I0313 01:14:07.294257 2479 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 13 01:14:07.295986 kubelet[2479]: I0313 01:14:07.284170 2479 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 13 01:14:07.296435 kubelet[2479]: E0313 01:14:07.284551 2479 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.35.114:6443/api/v1/namespaces/default/events\": dial tcp 10.230.35.114:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-1hh7x.gb1.brightbox.com.189c4191b0ad08e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-1hh7x.gb1.brightbox.com,UID:srv-1hh7x.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-1hh7x.gb1.brightbox.com,},FirstTimestamp:2026-03-13 01:14:07.267326179 +0000 UTC m=+0.944235684,LastTimestamp:2026-03-13 01:14:07.267326179 +0000 UTC m=+0.944235684,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-1hh7x.gb1.brightbox.com,}"
Mar 13 01:14:07.296572 kubelet[2479]: I0313 01:14:07.296481 2479 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 13 01:14:07.297599 kubelet[2479]: I0313 01:14:07.296625 2479 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 13 01:14:07.297599 kubelet[2479]: I0313 01:14:07.296728 2479 reconciler.go:26] "Reconciler: start to sync state"
Mar 13 01:14:07.297599 kubelet[2479]: E0313 01:14:07.297250 2479 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.35.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.35.114:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 13 01:14:07.299547 kubelet[2479]: E0313 01:14:07.299361 2479 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 13 01:14:07.299768 kubelet[2479]: I0313 01:14:07.299739 2479 factory.go:223] Registration of the systemd container factory successfully
Mar 13 01:14:07.299902 kubelet[2479]: I0313 01:14:07.299864 2479 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 13 01:14:07.301880 kubelet[2479]: I0313 01:14:07.301851 2479 factory.go:223] Registration of the containerd container factory successfully
Mar 13 01:14:07.306569 kubelet[2479]: E0313 01:14:07.306528 2479 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-1hh7x.gb1.brightbox.com\" not found"
Mar 13 01:14:07.310613 kubelet[2479]: E0313 01:14:07.307612 2479 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.35.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-1hh7x.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.35.114:6443: connect: connection refused" interval="200ms"
Mar 13 01:14:07.340585 kubelet[2479]: I0313 01:14:07.340555 2479 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 13 01:14:07.340803 kubelet[2479]: I0313 01:14:07.340781 2479 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 13 01:14:07.340970 kubelet[2479]: I0313 01:14:07.340940 2479 state_mem.go:36] "Initialized new in-memory state store"
Mar 13 01:14:07.343610 kubelet[2479]: I0313 01:14:07.343591 2479 policy_none.go:49] "None policy: Start"
Mar 13 01:14:07.343757 kubelet[2479]: I0313 01:14:07.343737 2479 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 13 01:14:07.343892 kubelet[2479]: I0313 01:14:07.343874 2479 state_mem.go:35] "Initializing new in-memory state store"
Mar 13 01:14:07.353452 kubelet[2479]: I0313 01:14:07.353416 2479 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 13 01:14:07.355459 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 13 01:14:07.358377 kubelet[2479]: I0313 01:14:07.358349 2479 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 13 01:14:07.358472 kubelet[2479]: I0313 01:14:07.358385 2479 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 13 01:14:07.358472 kubelet[2479]: I0313 01:14:07.358450 2479 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 13 01:14:07.358472 kubelet[2479]: I0313 01:14:07.358471 2479 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 13 01:14:07.358609 kubelet[2479]: E0313 01:14:07.358535 2479 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 13 01:14:07.360224 kubelet[2479]: E0313 01:14:07.360189 2479 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.35.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.35.114:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 13 01:14:07.374507 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 13 01:14:07.397657 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 13 01:14:07.400518 kubelet[2479]: E0313 01:14:07.400450 2479 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 13 01:14:07.400787 kubelet[2479]: I0313 01:14:07.400752 2479 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 13 01:14:07.400871 kubelet[2479]: I0313 01:14:07.400788 2479 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 13 01:14:07.401786 kubelet[2479]: I0313 01:14:07.401729 2479 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 13 01:14:07.404820 kubelet[2479]: E0313 01:14:07.404703 2479 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 13 01:14:07.404820 kubelet[2479]: E0313 01:14:07.404794 2479 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-1hh7x.gb1.brightbox.com\" not found"
Mar 13 01:14:07.479498 systemd[1]: Created slice kubepods-burstable-pod383f780bf4179df2c1132b37b234030e.slice - libcontainer container kubepods-burstable-pod383f780bf4179df2c1132b37b234030e.slice.
Mar 13 01:14:07.489514 kubelet[2479]: E0313 01:14:07.489472 2479 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1hh7x.gb1.brightbox.com\" not found" node="srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:07.494971 systemd[1]: Created slice kubepods-burstable-pod9463063047ab9d84214a96d89e49a4d0.slice - libcontainer container kubepods-burstable-pod9463063047ab9d84214a96d89e49a4d0.slice.
Mar 13 01:14:07.500961 kubelet[2479]: E0313 01:14:07.500852 2479 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1hh7x.gb1.brightbox.com\" not found" node="srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:07.504823 kubelet[2479]: I0313 01:14:07.504748 2479 kubelet_node_status.go:75] "Attempting to register node" node="srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:07.506129 systemd[1]: Created slice kubepods-burstable-pod972cd96f068b937fa0698400e896d35b.slice - libcontainer container kubepods-burstable-pod972cd96f068b937fa0698400e896d35b.slice.
Mar 13 01:14:07.507790 kubelet[2479]: E0313 01:14:07.507748 2479 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.35.114:6443/api/v1/nodes\": dial tcp 10.230.35.114:6443: connect: connection refused" node="srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:07.509353 kubelet[2479]: E0313 01:14:07.508433 2479 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.35.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-1hh7x.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.35.114:6443: connect: connection refused" interval="400ms"
Mar 13 01:14:07.511204 kubelet[2479]: E0313 01:14:07.511177 2479 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1hh7x.gb1.brightbox.com\" not found" node="srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:07.598307 kubelet[2479]: I0313 01:14:07.598138 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/972cd96f068b937fa0698400e896d35b-kubeconfig\") pod \"kube-scheduler-srv-1hh7x.gb1.brightbox.com\" (UID: \"972cd96f068b937fa0698400e896d35b\") " pod="kube-system/kube-scheduler-srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:07.598551 kubelet[2479]: I0313 01:14:07.598511 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/383f780bf4179df2c1132b37b234030e-k8s-certs\") pod \"kube-apiserver-srv-1hh7x.gb1.brightbox.com\" (UID: \"383f780bf4179df2c1132b37b234030e\") " pod="kube-system/kube-apiserver-srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:07.598929 kubelet[2479]: I0313 01:14:07.598898 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/383f780bf4179df2c1132b37b234030e-usr-share-ca-certificates\") pod \"kube-apiserver-srv-1hh7x.gb1.brightbox.com\" (UID: \"383f780bf4179df2c1132b37b234030e\") " pod="kube-system/kube-apiserver-srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:07.599448 kubelet[2479]: I0313 01:14:07.599281 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9463063047ab9d84214a96d89e49a4d0-flexvolume-dir\") pod \"kube-controller-manager-srv-1hh7x.gb1.brightbox.com\" (UID: \"9463063047ab9d84214a96d89e49a4d0\") " pod="kube-system/kube-controller-manager-srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:07.599743 kubelet[2479]: I0313 01:14:07.599716 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9463063047ab9d84214a96d89e49a4d0-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-1hh7x.gb1.brightbox.com\" (UID: \"9463063047ab9d84214a96d89e49a4d0\") " pod="kube-system/kube-controller-manager-srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:07.600075 kubelet[2479]: I0313 01:14:07.600043 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/383f780bf4179df2c1132b37b234030e-ca-certs\") pod \"kube-apiserver-srv-1hh7x.gb1.brightbox.com\" (UID: \"383f780bf4179df2c1132b37b234030e\") " pod="kube-system/kube-apiserver-srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:07.600319 kubelet[2479]: I0313 01:14:07.600284 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9463063047ab9d84214a96d89e49a4d0-ca-certs\") pod \"kube-controller-manager-srv-1hh7x.gb1.brightbox.com\" (UID: \"9463063047ab9d84214a96d89e49a4d0\") " pod="kube-system/kube-controller-manager-srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:07.600616 kubelet[2479]: I0313 01:14:07.600549 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9463063047ab9d84214a96d89e49a4d0-k8s-certs\") pod \"kube-controller-manager-srv-1hh7x.gb1.brightbox.com\" (UID: \"9463063047ab9d84214a96d89e49a4d0\") " pod="kube-system/kube-controller-manager-srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:07.600750 kubelet[2479]: I0313 01:14:07.600716 2479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9463063047ab9d84214a96d89e49a4d0-kubeconfig\") pod \"kube-controller-manager-srv-1hh7x.gb1.brightbox.com\" (UID: \"9463063047ab9d84214a96d89e49a4d0\") " pod="kube-system/kube-controller-manager-srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:07.710677 kubelet[2479]: I0313 01:14:07.710362 2479 kubelet_node_status.go:75] "Attempting to register node" node="srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:07.710982 kubelet[2479]: E0313 01:14:07.710929 2479 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.35.114:6443/api/v1/nodes\": dial tcp 10.230.35.114:6443: connect: connection refused" node="srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:07.792257 containerd[1592]: time="2026-03-13T01:14:07.792139835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-1hh7x.gb1.brightbox.com,Uid:383f780bf4179df2c1132b37b234030e,Namespace:kube-system,Attempt:0,}"
Mar 13 01:14:07.807700 containerd[1592]: time="2026-03-13T01:14:07.807614263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-1hh7x.gb1.brightbox.com,Uid:9463063047ab9d84214a96d89e49a4d0,Namespace:kube-system,Attempt:0,}"
Mar 13 01:14:07.813714 containerd[1592]: time="2026-03-13T01:14:07.813478653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-1hh7x.gb1.brightbox.com,Uid:972cd96f068b937fa0698400e896d35b,Namespace:kube-system,Attempt:0,}"
Mar 13 01:14:07.910627 kubelet[2479]: E0313 01:14:07.910068 2479 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.35.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-1hh7x.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.35.114:6443: connect: connection refused" interval="800ms"
Mar 13 01:14:07.927284 containerd[1592]: time="2026-03-13T01:14:07.926912991Z" level=info msg="connecting to shim 86043118780cefbf2781fa81fc8a622acaa04f153218804bd066b4a3a37a653a" address="unix:///run/containerd/s/27e9291b5358b23986222badb2e1e5bca0ceab07be25893a425c9ee805db895c" namespace=k8s.io protocol=ttrpc version=3
Mar 13 01:14:07.930438 containerd[1592]: time="2026-03-13T01:14:07.930404524Z" level=info msg="connecting to shim 659e59529188db5ce83aa7bf28ef57e70c5d324f4598f53c4c98883386d73f5f" address="unix:///run/containerd/s/33d18a79a572a29a490a96a8f386d55e9c7382ec7e6a9912ca329d41a6392c56" namespace=k8s.io protocol=ttrpc version=3
Mar 13 01:14:07.931733 containerd[1592]: time="2026-03-13T01:14:07.931700193Z" level=info msg="connecting to shim 2c62c29e33ce26e0fa066b233635499c02a8b707474ad48b9964535ff3a97729" address="unix:///run/containerd/s/382a9bfd81a963c837858ee8b581febe9904df77a91dbae4fd2fdc9b2a3a4b89" namespace=k8s.io protocol=ttrpc version=3
Mar 13 01:14:08.092544 systemd[1]: Started cri-containerd-2c62c29e33ce26e0fa066b233635499c02a8b707474ad48b9964535ff3a97729.scope - libcontainer container 2c62c29e33ce26e0fa066b233635499c02a8b707474ad48b9964535ff3a97729.
Mar 13 01:14:08.095637 systemd[1]: Started cri-containerd-659e59529188db5ce83aa7bf28ef57e70c5d324f4598f53c4c98883386d73f5f.scope - libcontainer container 659e59529188db5ce83aa7bf28ef57e70c5d324f4598f53c4c98883386d73f5f.
Mar 13 01:14:08.099611 systemd[1]: Started cri-containerd-86043118780cefbf2781fa81fc8a622acaa04f153218804bd066b4a3a37a653a.scope - libcontainer container 86043118780cefbf2781fa81fc8a622acaa04f153218804bd066b4a3a37a653a.
Mar 13 01:14:08.116154 kubelet[2479]: E0313 01:14:08.116103 2479 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.35.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-1hh7x.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.35.114:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 13 01:14:08.117992 kubelet[2479]: I0313 01:14:08.117966 2479 kubelet_node_status.go:75] "Attempting to register node" node="srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:08.118641 kubelet[2479]: E0313 01:14:08.118511 2479 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.35.114:6443/api/v1/nodes\": dial tcp 10.230.35.114:6443: connect: connection refused" node="srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:08.207073 containerd[1592]: time="2026-03-13T01:14:08.206868146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-1hh7x.gb1.brightbox.com,Uid:383f780bf4179df2c1132b37b234030e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c62c29e33ce26e0fa066b233635499c02a8b707474ad48b9964535ff3a97729\""
Mar 13 01:14:08.214728 kubelet[2479]: E0313 01:14:08.214672 2479 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.35.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.35.114:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 13 01:14:08.223787 containerd[1592]: time="2026-03-13T01:14:08.223714066Z" level=info msg="CreateContainer within sandbox \"2c62c29e33ce26e0fa066b233635499c02a8b707474ad48b9964535ff3a97729\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 13 01:14:08.245828 containerd[1592]: time="2026-03-13T01:14:08.245095806Z" level=info msg="Container 4a78702fb6668ac8131553dffe559a54a74f8952cf3cfd6ccc39e6b09eef343e: CDI devices from CRI Config.CDIDevices: []"
Mar 13 01:14:08.249807 containerd[1592]: time="2026-03-13T01:14:08.249772777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-1hh7x.gb1.brightbox.com,Uid:972cd96f068b937fa0698400e896d35b,Namespace:kube-system,Attempt:0,} returns sandbox id \"86043118780cefbf2781fa81fc8a622acaa04f153218804bd066b4a3a37a653a\""
Mar 13 01:14:08.257137 containerd[1592]: time="2026-03-13T01:14:08.256955810Z" level=info msg="CreateContainer within sandbox \"86043118780cefbf2781fa81fc8a622acaa04f153218804bd066b4a3a37a653a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 13 01:14:08.268327 containerd[1592]: time="2026-03-13T01:14:08.268243385Z" level=info msg="Container 5799c389e57d39877098b48e7f6a380fb40450eeec60e528faf3ea70da4bac27: CDI devices from CRI Config.CDIDevices: []"
Mar 13 01:14:08.270437 containerd[1592]: time="2026-03-13T01:14:08.270359218Z" level=info msg="CreateContainer within sandbox \"2c62c29e33ce26e0fa066b233635499c02a8b707474ad48b9964535ff3a97729\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4a78702fb6668ac8131553dffe559a54a74f8952cf3cfd6ccc39e6b09eef343e\""
Mar 13 01:14:08.271608 containerd[1592]: time="2026-03-13T01:14:08.271561705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-1hh7x.gb1.brightbox.com,Uid:9463063047ab9d84214a96d89e49a4d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"659e59529188db5ce83aa7bf28ef57e70c5d324f4598f53c4c98883386d73f5f\""
Mar 13 01:14:08.277012 containerd[1592]: time="2026-03-13T01:14:08.276914039Z" level=info msg="StartContainer for \"4a78702fb6668ac8131553dffe559a54a74f8952cf3cfd6ccc39e6b09eef343e\""
Mar 13 01:14:08.287612 containerd[1592]: time="2026-03-13T01:14:08.287558558Z" level=info msg="connecting to shim 4a78702fb6668ac8131553dffe559a54a74f8952cf3cfd6ccc39e6b09eef343e" address="unix:///run/containerd/s/382a9bfd81a963c837858ee8b581febe9904df77a91dbae4fd2fdc9b2a3a4b89" protocol=ttrpc version=3
Mar 13 01:14:08.289538 containerd[1592]: time="2026-03-13T01:14:08.289488738Z" level=info msg="CreateContainer within sandbox \"86043118780cefbf2781fa81fc8a622acaa04f153218804bd066b4a3a37a653a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5799c389e57d39877098b48e7f6a380fb40450eeec60e528faf3ea70da4bac27\""
Mar 13 01:14:08.292092 containerd[1592]: time="2026-03-13T01:14:08.292016224Z" level=info msg="StartContainer for \"5799c389e57d39877098b48e7f6a380fb40450eeec60e528faf3ea70da4bac27\""
Mar 13 01:14:08.294094 containerd[1592]: time="2026-03-13T01:14:08.293991802Z" level=info msg="connecting to shim 5799c389e57d39877098b48e7f6a380fb40450eeec60e528faf3ea70da4bac27" address="unix:///run/containerd/s/27e9291b5358b23986222badb2e1e5bca0ceab07be25893a425c9ee805db895c" protocol=ttrpc version=3
Mar 13 01:14:08.300975 containerd[1592]: time="2026-03-13T01:14:08.300909365Z" level=info msg="CreateContainer within sandbox \"659e59529188db5ce83aa7bf28ef57e70c5d324f4598f53c4c98883386d73f5f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 13 01:14:08.309078 containerd[1592]: time="2026-03-13T01:14:08.308189003Z" level=info msg="Container 8835b68fcddb420dda8fe59cdd6db841772eece05c7748e974ed4def73a352ed: CDI devices from CRI Config.CDIDevices: []"
Mar 13 01:14:08.320221 containerd[1592]: time="2026-03-13T01:14:08.320171739Z" level=info msg="CreateContainer within sandbox \"659e59529188db5ce83aa7bf28ef57e70c5d324f4598f53c4c98883386d73f5f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8835b68fcddb420dda8fe59cdd6db841772eece05c7748e974ed4def73a352ed\""
Mar 13 01:14:08.321583 containerd[1592]: time="2026-03-13T01:14:08.321335808Z" level=info msg="StartContainer for \"8835b68fcddb420dda8fe59cdd6db841772eece05c7748e974ed4def73a352ed\""
Mar 13 01:14:08.326510 systemd[1]: Started cri-containerd-4a78702fb6668ac8131553dffe559a54a74f8952cf3cfd6ccc39e6b09eef343e.scope - libcontainer container 4a78702fb6668ac8131553dffe559a54a74f8952cf3cfd6ccc39e6b09eef343e.
Mar 13 01:14:08.333602 containerd[1592]: time="2026-03-13T01:14:08.333536090Z" level=info msg="connecting to shim 8835b68fcddb420dda8fe59cdd6db841772eece05c7748e974ed4def73a352ed" address="unix:///run/containerd/s/33d18a79a572a29a490a96a8f386d55e9c7382ec7e6a9912ca329d41a6392c56" protocol=ttrpc version=3
Mar 13 01:14:08.344963 systemd[1]: Started cri-containerd-5799c389e57d39877098b48e7f6a380fb40450eeec60e528faf3ea70da4bac27.scope - libcontainer container 5799c389e57d39877098b48e7f6a380fb40450eeec60e528faf3ea70da4bac27.
Mar 13 01:14:08.366690 systemd[1]: Started cri-containerd-8835b68fcddb420dda8fe59cdd6db841772eece05c7748e974ed4def73a352ed.scope - libcontainer container 8835b68fcddb420dda8fe59cdd6db841772eece05c7748e974ed4def73a352ed.
Mar 13 01:14:08.483361 containerd[1592]: time="2026-03-13T01:14:08.483191893Z" level=info msg="StartContainer for \"4a78702fb6668ac8131553dffe559a54a74f8952cf3cfd6ccc39e6b09eef343e\" returns successfully"
Mar 13 01:14:08.494113 containerd[1592]: time="2026-03-13T01:14:08.494065324Z" level=info msg="StartContainer for \"5799c389e57d39877098b48e7f6a380fb40450eeec60e528faf3ea70da4bac27\" returns successfully"
Mar 13 01:14:08.520906 containerd[1592]: time="2026-03-13T01:14:08.518692536Z" level=info msg="StartContainer for \"8835b68fcddb420dda8fe59cdd6db841772eece05c7748e974ed4def73a352ed\" returns successfully"
Mar 13 01:14:08.711727 kubelet[2479]: E0313 01:14:08.711669 2479 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.35.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-1hh7x.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.35.114:6443: connect: connection refused" interval="1.6s"
Mar 13 01:14:08.715026 kubelet[2479]: E0313 01:14:08.714982 2479 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.35.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.35.114:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 13 01:14:08.923304 kubelet[2479]: I0313 01:14:08.922100 2479 kubelet_node_status.go:75] "Attempting to register node" node="srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:09.389505 kubelet[2479]: E0313 01:14:09.389263 2479 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1hh7x.gb1.brightbox.com\" not found" node="srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:09.394893 kubelet[2479]: E0313 01:14:09.394602 2479 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1hh7x.gb1.brightbox.com\" not found" node="srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:09.398285 kubelet[2479]: E0313 01:14:09.397465 2479 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1hh7x.gb1.brightbox.com\" not found" node="srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:10.401541 kubelet[2479]: E0313 01:14:10.400784 2479 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1hh7x.gb1.brightbox.com\" not found" node="srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:10.401541 kubelet[2479]: E0313 01:14:10.401317 2479 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1hh7x.gb1.brightbox.com\" not found" node="srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:10.402360 kubelet[2479]: E0313 01:14:10.402337 2479 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1hh7x.gb1.brightbox.com\" not found" node="srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:11.404403 kubelet[2479]: E0313 01:14:11.403992 2479 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1hh7x.gb1.brightbox.com\" not found" node="srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:11.404403 kubelet[2479]: E0313 01:14:11.404003 2479 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1hh7x.gb1.brightbox.com\" not found" node="srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:11.426530 kubelet[2479]: E0313 01:14:11.426313 2479 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1hh7x.gb1.brightbox.com\" not found" node="srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:12.154927 kubelet[2479]: I0313 01:14:12.154869 2479 apiserver.go:52] "Watching apiserver"
Mar 13 01:14:12.257295 kubelet[2479]: E0313 01:14:12.256588 2479 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-1hh7x.gb1.brightbox.com\" not found" node="srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:12.297352 kubelet[2479]: I0313 01:14:12.297305 2479 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 13 01:14:12.389679 kubelet[2479]: I0313 01:14:12.389461 2479 kubelet_node_status.go:78] "Successfully registered node" node="srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:12.389679 kubelet[2479]: E0313 01:14:12.389511 2479 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-1hh7x.gb1.brightbox.com\": node \"srv-1hh7x.gb1.brightbox.com\" not found"
Mar 13 01:14:12.409000 kubelet[2479]: I0313 01:14:12.408389 2479 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:12.416148 kubelet[2479]: I0313 01:14:12.415372 2479 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:12.438306 kubelet[2479]: E0313 01:14:12.436136 2479 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-1hh7x.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:12.438844 kubelet[2479]: E0313 01:14:12.438577 2479 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-1hh7x.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:12.438844 kubelet[2479]: I0313 01:14:12.438659 2479 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:12.441501 kubelet[2479]: E0313 01:14:12.441472 2479 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-1hh7x.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:12.441824 kubelet[2479]: I0313 01:14:12.441668 2479 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:12.444889 kubelet[2479]: E0313 01:14:12.444860 2479 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-1hh7x.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:13.510232 kubelet[2479]: I0313 01:14:13.510004 2479 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-1hh7x.gb1.brightbox.com"
Mar 13 01:14:13.546970 kubelet[2479]: I0313 01:14:13.546818 2479 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 13 01:14:15.362736 systemd[1]: Reload requested from client PID 2775 ('systemctl') (unit session-11.scope)...
Mar 13 01:14:15.363260 systemd[1]: Reloading...
Mar 13 01:14:15.488349 zram_generator::config[2819]: No configuration found.
Mar 13 01:14:15.872352 systemd[1]: Reloading finished in 508 ms.
Mar 13 01:14:15.904406 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 01:14:15.923822 systemd[1]: kubelet.service: Deactivated successfully.
Mar 13 01:14:15.924330 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 01:14:15.924410 systemd[1]: kubelet.service: Consumed 1.288s CPU time, 127.5M memory peak.
Mar 13 01:14:15.928785 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 01:14:16.289349 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 01:14:16.300759 (kubelet)[2884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 13 01:14:16.388225 kubelet[2884]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 01:14:16.390307 kubelet[2884]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 13 01:14:16.390307 kubelet[2884]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 01:14:16.390307 kubelet[2884]: I0313 01:14:16.389612 2884 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 13 01:14:16.404396 kubelet[2884]: I0313 01:14:16.404347 2884 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 13 01:14:16.404583 kubelet[2884]: I0313 01:14:16.404564 2884 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 01:14:16.405039 kubelet[2884]: I0313 01:14:16.405016 2884 server.go:956] "Client rotation is on, will bootstrap in background" Mar 13 01:14:16.407909 kubelet[2884]: I0313 01:14:16.407884 2884 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 13 01:14:16.443091 kubelet[2884]: I0313 01:14:16.442776 2884 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 13 01:14:16.453590 kubelet[2884]: I0313 01:14:16.453568 2884 server.go:1446] "Using cgroup driver setting received from the CRI runtime" 
cgroupDriver="systemd" Mar 13 01:14:16.461356 kubelet[2884]: I0313 01:14:16.459661 2884 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 13 01:14:16.461675 kubelet[2884]: I0313 01:14:16.461635 2884 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 01:14:16.462977 kubelet[2884]: I0313 01:14:16.462702 2884 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-1hh7x.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions
":null,"CgroupVersion":2} Mar 13 01:14:16.462977 kubelet[2884]: I0313 01:14:16.462969 2884 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 01:14:16.462977 kubelet[2884]: I0313 01:14:16.462986 2884 container_manager_linux.go:303] "Creating device plugin manager" Mar 13 01:14:16.463341 kubelet[2884]: I0313 01:14:16.463048 2884 state_mem.go:36] "Initialized new in-memory state store" Mar 13 01:14:16.464194 kubelet[2884]: I0313 01:14:16.463518 2884 kubelet.go:480] "Attempting to sync node with API server" Mar 13 01:14:16.464194 kubelet[2884]: I0313 01:14:16.463544 2884 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 01:14:16.464194 kubelet[2884]: I0313 01:14:16.463575 2884 kubelet.go:386] "Adding apiserver pod source" Mar 13 01:14:16.464194 kubelet[2884]: I0313 01:14:16.463590 2884 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 01:14:16.468051 kubelet[2884]: I0313 01:14:16.467937 2884 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 13 01:14:16.470197 kubelet[2884]: I0313 01:14:16.469075 2884 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 13 01:14:16.489446 sudo[2898]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 13 01:14:16.489945 sudo[2898]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 13 01:14:16.492581 kubelet[2884]: I0313 01:14:16.491608 2884 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 13 01:14:16.492581 kubelet[2884]: I0313 01:14:16.491753 2884 server.go:1289] "Started kubelet" Mar 13 01:14:16.495303 kubelet[2884]: I0313 01:14:16.494490 2884 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 01:14:16.513020 kubelet[2884]: I0313 01:14:16.512933 2884 server.go:180] 
"Starting to listen" address="0.0.0.0" port=10250 Mar 13 01:14:16.514136 kubelet[2884]: I0313 01:14:16.514109 2884 server.go:317] "Adding debug handlers to kubelet server" Mar 13 01:14:16.517143 kubelet[2884]: I0313 01:14:16.517062 2884 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 01:14:16.517897 kubelet[2884]: I0313 01:14:16.517872 2884 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 01:14:16.518236 kubelet[2884]: I0313 01:14:16.518210 2884 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 13 01:14:16.527624 kubelet[2884]: I0313 01:14:16.527566 2884 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 13 01:14:16.532357 kubelet[2884]: I0313 01:14:16.532240 2884 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 13 01:14:16.533380 kubelet[2884]: I0313 01:14:16.533252 2884 reconciler.go:26] "Reconciler: start to sync state" Mar 13 01:14:16.539113 kubelet[2884]: I0313 01:14:16.538835 2884 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 13 01:14:16.541861 kubelet[2884]: E0313 01:14:16.541762 2884 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 13 01:14:16.548938 kubelet[2884]: I0313 01:14:16.548905 2884 factory.go:223] Registration of the containerd container factory successfully Mar 13 01:14:16.549112 kubelet[2884]: I0313 01:14:16.549046 2884 factory.go:223] Registration of the systemd container factory successfully Mar 13 01:14:16.576904 kubelet[2884]: I0313 01:14:16.576851 2884 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Mar 13 01:14:16.593424 kubelet[2884]: I0313 01:14:16.593221 2884 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 13 01:14:16.595378 kubelet[2884]: I0313 01:14:16.593261 2884 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 13 01:14:16.595378 kubelet[2884]: I0313 01:14:16.594937 2884 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 13 01:14:16.595378 kubelet[2884]: I0313 01:14:16.594951 2884 kubelet.go:2436] "Starting kubelet main sync loop" Mar 13 01:14:16.595378 kubelet[2884]: E0313 01:14:16.595009 2884 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 01:14:16.679335 kubelet[2884]: I0313 01:14:16.678997 2884 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 13 01:14:16.679335 kubelet[2884]: I0313 01:14:16.679024 2884 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 13 01:14:16.679335 kubelet[2884]: I0313 01:14:16.679049 2884 state_mem.go:36] "Initialized new in-memory state store" Mar 13 01:14:16.680113 kubelet[2884]: I0313 01:14:16.679257 2884 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 13 01:14:16.680113 kubelet[2884]: I0313 01:14:16.679806 2884 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 13 01:14:16.680113 kubelet[2884]: I0313 01:14:16.679851 2884 policy_none.go:49] "None policy: Start" Mar 13 01:14:16.680113 kubelet[2884]: I0313 01:14:16.679866 2884 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 13 01:14:16.680113 kubelet[2884]: I0313 01:14:16.679883 2884 state_mem.go:35] "Initializing new in-memory state store" Mar 13 01:14:16.680113 kubelet[2884]: I0313 01:14:16.680023 2884 state_mem.go:75] "Updated machine memory state" Mar 13 01:14:16.690123 kubelet[2884]: E0313 01:14:16.690079 2884 manager.go:517] "Failed to read 
data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 01:14:16.692237 kubelet[2884]: I0313 01:14:16.692197 2884 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 01:14:16.693447 kubelet[2884]: I0313 01:14:16.693134 2884 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 01:14:16.695245 kubelet[2884]: I0313 01:14:16.695223 2884 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 01:14:16.697940 kubelet[2884]: E0313 01:14:16.697676 2884 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 13 01:14:16.698193 kubelet[2884]: I0313 01:14:16.697378 2884 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-1hh7x.gb1.brightbox.com" Mar 13 01:14:16.700685 kubelet[2884]: I0313 01:14:16.700647 2884 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-1hh7x.gb1.brightbox.com" Mar 13 01:14:16.706538 kubelet[2884]: I0313 01:14:16.704492 2884 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-1hh7x.gb1.brightbox.com" Mar 13 01:14:16.726182 kubelet[2884]: I0313 01:14:16.726043 2884 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 13 01:14:16.726933 kubelet[2884]: E0313 01:14:16.726617 2884 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-1hh7x.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-1hh7x.gb1.brightbox.com" Mar 13 01:14:16.731790 kubelet[2884]: I0313 01:14:16.731565 2884 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must 
not contain dots]" Mar 13 01:14:16.733760 kubelet[2884]: I0313 01:14:16.733712 2884 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 13 01:14:16.734200 kubelet[2884]: I0313 01:14:16.734174 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/383f780bf4179df2c1132b37b234030e-k8s-certs\") pod \"kube-apiserver-srv-1hh7x.gb1.brightbox.com\" (UID: \"383f780bf4179df2c1132b37b234030e\") " pod="kube-system/kube-apiserver-srv-1hh7x.gb1.brightbox.com" Mar 13 01:14:16.736305 kubelet[2884]: I0313 01:14:16.736089 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/383f780bf4179df2c1132b37b234030e-usr-share-ca-certificates\") pod \"kube-apiserver-srv-1hh7x.gb1.brightbox.com\" (UID: \"383f780bf4179df2c1132b37b234030e\") " pod="kube-system/kube-apiserver-srv-1hh7x.gb1.brightbox.com" Mar 13 01:14:16.736305 kubelet[2884]: I0313 01:14:16.736136 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9463063047ab9d84214a96d89e49a4d0-ca-certs\") pod \"kube-controller-manager-srv-1hh7x.gb1.brightbox.com\" (UID: \"9463063047ab9d84214a96d89e49a4d0\") " pod="kube-system/kube-controller-manager-srv-1hh7x.gb1.brightbox.com" Mar 13 01:14:16.736305 kubelet[2884]: I0313 01:14:16.736179 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9463063047ab9d84214a96d89e49a4d0-k8s-certs\") pod \"kube-controller-manager-srv-1hh7x.gb1.brightbox.com\" (UID: \"9463063047ab9d84214a96d89e49a4d0\") " pod="kube-system/kube-controller-manager-srv-1hh7x.gb1.brightbox.com" Mar 13 
01:14:16.736305 kubelet[2884]: I0313 01:14:16.736224 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9463063047ab9d84214a96d89e49a4d0-kubeconfig\") pod \"kube-controller-manager-srv-1hh7x.gb1.brightbox.com\" (UID: \"9463063047ab9d84214a96d89e49a4d0\") " pod="kube-system/kube-controller-manager-srv-1hh7x.gb1.brightbox.com" Mar 13 01:14:16.736305 kubelet[2884]: I0313 01:14:16.736253 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/383f780bf4179df2c1132b37b234030e-ca-certs\") pod \"kube-apiserver-srv-1hh7x.gb1.brightbox.com\" (UID: \"383f780bf4179df2c1132b37b234030e\") " pod="kube-system/kube-apiserver-srv-1hh7x.gb1.brightbox.com" Mar 13 01:14:16.736959 kubelet[2884]: I0313 01:14:16.736827 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9463063047ab9d84214a96d89e49a4d0-flexvolume-dir\") pod \"kube-controller-manager-srv-1hh7x.gb1.brightbox.com\" (UID: \"9463063047ab9d84214a96d89e49a4d0\") " pod="kube-system/kube-controller-manager-srv-1hh7x.gb1.brightbox.com" Mar 13 01:14:16.737230 kubelet[2884]: I0313 01:14:16.737080 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9463063047ab9d84214a96d89e49a4d0-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-1hh7x.gb1.brightbox.com\" (UID: \"9463063047ab9d84214a96d89e49a4d0\") " pod="kube-system/kube-controller-manager-srv-1hh7x.gb1.brightbox.com" Mar 13 01:14:16.737230 kubelet[2884]: I0313 01:14:16.737127 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/972cd96f068b937fa0698400e896d35b-kubeconfig\") pod \"kube-scheduler-srv-1hh7x.gb1.brightbox.com\" (UID: \"972cd96f068b937fa0698400e896d35b\") " pod="kube-system/kube-scheduler-srv-1hh7x.gb1.brightbox.com" Mar 13 01:14:16.830903 kubelet[2884]: I0313 01:14:16.828829 2884 kubelet_node_status.go:75] "Attempting to register node" node="srv-1hh7x.gb1.brightbox.com" Mar 13 01:14:16.868241 kubelet[2884]: I0313 01:14:16.868189 2884 kubelet_node_status.go:124] "Node was previously registered" node="srv-1hh7x.gb1.brightbox.com" Mar 13 01:14:16.868800 kubelet[2884]: I0313 01:14:16.868331 2884 kubelet_node_status.go:78] "Successfully registered node" node="srv-1hh7x.gb1.brightbox.com" Mar 13 01:14:17.146368 sudo[2898]: pam_unix(sudo:session): session closed for user root Mar 13 01:14:17.465883 kubelet[2884]: I0313 01:14:17.465669 2884 apiserver.go:52] "Watching apiserver" Mar 13 01:14:17.533643 kubelet[2884]: I0313 01:14:17.533583 2884 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 13 01:14:17.642885 kubelet[2884]: I0313 01:14:17.641927 2884 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-1hh7x.gb1.brightbox.com" Mar 13 01:14:17.643233 kubelet[2884]: I0313 01:14:17.643211 2884 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-1hh7x.gb1.brightbox.com" Mar 13 01:14:17.657890 kubelet[2884]: I0313 01:14:17.657315 2884 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 13 01:14:17.657890 kubelet[2884]: E0313 01:14:17.657388 2884 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-1hh7x.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-1hh7x.gb1.brightbox.com" Mar 13 01:14:17.657890 kubelet[2884]: I0313 01:14:17.657395 2884 warnings.go:110] "Warning: 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 13 01:14:17.657890 kubelet[2884]: E0313 01:14:17.657522 2884 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-1hh7x.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-1hh7x.gb1.brightbox.com" Mar 13 01:14:17.679808 kubelet[2884]: I0313 01:14:17.679430 2884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-1hh7x.gb1.brightbox.com" podStartSLOduration=4.679406284 podStartE2EDuration="4.679406284s" podCreationTimestamp="2026-03-13 01:14:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:14:17.677549861 +0000 UTC m=+1.368281248" watchObservedRunningTime="2026-03-13 01:14:17.679406284 +0000 UTC m=+1.370137655" Mar 13 01:14:17.693277 kubelet[2884]: I0313 01:14:17.692985 2884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-1hh7x.gb1.brightbox.com" podStartSLOduration=1.692973493 podStartE2EDuration="1.692973493s" podCreationTimestamp="2026-03-13 01:14:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:14:17.692562988 +0000 UTC m=+1.383294376" watchObservedRunningTime="2026-03-13 01:14:17.692973493 +0000 UTC m=+1.383704865" Mar 13 01:14:19.313320 sudo[1880]: pam_unix(sudo:session): session closed for user root Mar 13 01:14:19.402272 sshd[1879]: Connection closed by 20.161.92.111 port 54254 Mar 13 01:14:19.404904 sshd-session[1876]: pam_unix(sshd:session): session closed for user core Mar 13 01:14:19.412062 systemd[1]: sshd@8-10.230.35.114:22-20.161.92.111:54254.service: Deactivated successfully. 
Mar 13 01:14:19.417233 systemd[1]: session-11.scope: Deactivated successfully. Mar 13 01:14:19.417734 systemd[1]: session-11.scope: Consumed 7.183s CPU time, 217.2M memory peak. Mar 13 01:14:19.420564 systemd-logind[1572]: Session 11 logged out. Waiting for processes to exit. Mar 13 01:14:19.423229 systemd-logind[1572]: Removed session 11. Mar 13 01:14:21.079167 kubelet[2884]: I0313 01:14:21.079113 2884 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 13 01:14:21.081099 containerd[1592]: time="2026-03-13T01:14:21.080201048Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 13 01:14:21.081492 kubelet[2884]: I0313 01:14:21.080771 2884 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 13 01:14:21.487349 kubelet[2884]: I0313 01:14:21.487046 2884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-1hh7x.gb1.brightbox.com" podStartSLOduration=5.487009816 podStartE2EDuration="5.487009816s" podCreationTimestamp="2026-03-13 01:14:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:14:17.711000073 +0000 UTC m=+1.401731455" watchObservedRunningTime="2026-03-13 01:14:21.487009816 +0000 UTC m=+5.177741168" Mar 13 01:14:22.008911 systemd[1]: Created slice kubepods-besteffort-podb524c06c_75cb_4b22_b853_56cddd1b63c7.slice - libcontainer container kubepods-besteffort-podb524c06c_75cb_4b22_b853_56cddd1b63c7.slice. Mar 13 01:14:22.032785 systemd[1]: Created slice kubepods-burstable-podb5c3832e_bd72_469f_b5dc_0001a17cf6b0.slice - libcontainer container kubepods-burstable-podb5c3832e_bd72_469f_b5dc_0001a17cf6b0.slice. 
Mar 13 01:14:22.071029 kubelet[2884]: I0313 01:14:22.070977 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b524c06c-75cb-4b22-b853-56cddd1b63c7-lib-modules\") pod \"kube-proxy-gmv87\" (UID: \"b524c06c-75cb-4b22-b853-56cddd1b63c7\") " pod="kube-system/kube-proxy-gmv87" Mar 13 01:14:22.071029 kubelet[2884]: I0313 01:14:22.071039 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-etc-cni-netd\") pod \"cilium-8jwns\" (UID: \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\") " pod="kube-system/cilium-8jwns" Mar 13 01:14:22.071263 kubelet[2884]: I0313 01:14:22.071068 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-cilium-config-path\") pod \"cilium-8jwns\" (UID: \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\") " pod="kube-system/cilium-8jwns" Mar 13 01:14:22.071263 kubelet[2884]: I0313 01:14:22.071102 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-host-proc-sys-net\") pod \"cilium-8jwns\" (UID: \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\") " pod="kube-system/cilium-8jwns" Mar 13 01:14:22.071263 kubelet[2884]: I0313 01:14:22.071130 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-host-proc-sys-kernel\") pod \"cilium-8jwns\" (UID: \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\") " pod="kube-system/cilium-8jwns" Mar 13 01:14:22.071263 kubelet[2884]: I0313 01:14:22.071179 2884 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-hubble-tls\") pod \"cilium-8jwns\" (UID: \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\") " pod="kube-system/cilium-8jwns" Mar 13 01:14:22.071263 kubelet[2884]: I0313 01:14:22.071206 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q85rr\" (UniqueName: \"kubernetes.io/projected/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-kube-api-access-q85rr\") pod \"cilium-8jwns\" (UID: \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\") " pod="kube-system/cilium-8jwns" Mar 13 01:14:22.072593 kubelet[2884]: I0313 01:14:22.071236 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b524c06c-75cb-4b22-b853-56cddd1b63c7-kube-proxy\") pod \"kube-proxy-gmv87\" (UID: \"b524c06c-75cb-4b22-b853-56cddd1b63c7\") " pod="kube-system/kube-proxy-gmv87" Mar 13 01:14:22.072593 kubelet[2884]: I0313 01:14:22.071260 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b524c06c-75cb-4b22-b853-56cddd1b63c7-xtables-lock\") pod \"kube-proxy-gmv87\" (UID: \"b524c06c-75cb-4b22-b853-56cddd1b63c7\") " pod="kube-system/kube-proxy-gmv87" Mar 13 01:14:22.072593 kubelet[2884]: I0313 01:14:22.071315 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb22l\" (UniqueName: \"kubernetes.io/projected/b524c06c-75cb-4b22-b853-56cddd1b63c7-kube-api-access-kb22l\") pod \"kube-proxy-gmv87\" (UID: \"b524c06c-75cb-4b22-b853-56cddd1b63c7\") " pod="kube-system/kube-proxy-gmv87" Mar 13 01:14:22.072593 kubelet[2884]: I0313 01:14:22.071341 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" 
(UniqueName: \"kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-hostproc\") pod \"cilium-8jwns\" (UID: \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\") " pod="kube-system/cilium-8jwns" Mar 13 01:14:22.072593 kubelet[2884]: I0313 01:14:22.071369 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-cilium-cgroup\") pod \"cilium-8jwns\" (UID: \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\") " pod="kube-system/cilium-8jwns" Mar 13 01:14:22.072593 kubelet[2884]: I0313 01:14:22.071392 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-cni-path\") pod \"cilium-8jwns\" (UID: \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\") " pod="kube-system/cilium-8jwns" Mar 13 01:14:22.072843 kubelet[2884]: I0313 01:14:22.071415 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-lib-modules\") pod \"cilium-8jwns\" (UID: \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\") " pod="kube-system/cilium-8jwns" Mar 13 01:14:22.072843 kubelet[2884]: I0313 01:14:22.071440 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-xtables-lock\") pod \"cilium-8jwns\" (UID: \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\") " pod="kube-system/cilium-8jwns" Mar 13 01:14:22.072843 kubelet[2884]: I0313 01:14:22.071465 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-clustermesh-secrets\") pod \"cilium-8jwns\" (UID: 
\"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\") " pod="kube-system/cilium-8jwns" Mar 13 01:14:22.072843 kubelet[2884]: I0313 01:14:22.071528 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-cilium-run\") pod \"cilium-8jwns\" (UID: \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\") " pod="kube-system/cilium-8jwns" Mar 13 01:14:22.072843 kubelet[2884]: I0313 01:14:22.071562 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-bpf-maps\") pod \"cilium-8jwns\" (UID: \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\") " pod="kube-system/cilium-8jwns" Mar 13 01:14:22.311696 systemd[1]: Created slice kubepods-besteffort-podf6912676_8faa_4d91_ba8a_9fd858e089d2.slice - libcontainer container kubepods-besteffort-podf6912676_8faa_4d91_ba8a_9fd858e089d2.slice. 
Mar 13 01:14:22.325594 containerd[1592]: time="2026-03-13T01:14:22.325527191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gmv87,Uid:b524c06c-75cb-4b22-b853-56cddd1b63c7,Namespace:kube-system,Attempt:0,}" Mar 13 01:14:22.339309 containerd[1592]: time="2026-03-13T01:14:22.339213313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8jwns,Uid:b5c3832e-bd72-469f-b5dc-0001a17cf6b0,Namespace:kube-system,Attempt:0,}" Mar 13 01:14:22.367986 containerd[1592]: time="2026-03-13T01:14:22.367749174Z" level=info msg="connecting to shim 662fc635781999ee8f08fda47fb12d90bad8b7c02d59e6fa3617badb760e0eb2" address="unix:///run/containerd/s/3cfdcad163a69fd1e88e17c5066b71ac38a23ed23c58a3eb581d1e42cb719829" namespace=k8s.io protocol=ttrpc version=3 Mar 13 01:14:22.375856 kubelet[2884]: I0313 01:14:22.375803 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6912676-8faa-4d91-ba8a-9fd858e089d2-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-m4lqn\" (UID: \"f6912676-8faa-4d91-ba8a-9fd858e089d2\") " pod="kube-system/cilium-operator-6c4d7847fc-m4lqn" Mar 13 01:14:22.378454 kubelet[2884]: I0313 01:14:22.375873 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4l2m\" (UniqueName: \"kubernetes.io/projected/f6912676-8faa-4d91-ba8a-9fd858e089d2-kube-api-access-w4l2m\") pod \"cilium-operator-6c4d7847fc-m4lqn\" (UID: \"f6912676-8faa-4d91-ba8a-9fd858e089d2\") " pod="kube-system/cilium-operator-6c4d7847fc-m4lqn" Mar 13 01:14:22.390541 containerd[1592]: time="2026-03-13T01:14:22.390469852Z" level=info msg="connecting to shim 4bbe5ea986a76e1f7415e391aed80f80d29e8cc41ccdb56b3e252343eb5e7e26" address="unix:///run/containerd/s/9184da5852b73359147110baa2b4b58cab510e9664c13f1a1e619ab5c03dda3c" namespace=k8s.io protocol=ttrpc version=3 Mar 13 01:14:22.453503 systemd[1]: Started 
cri-containerd-4bbe5ea986a76e1f7415e391aed80f80d29e8cc41ccdb56b3e252343eb5e7e26.scope - libcontainer container 4bbe5ea986a76e1f7415e391aed80f80d29e8cc41ccdb56b3e252343eb5e7e26. Mar 13 01:14:22.456144 systemd[1]: Started cri-containerd-662fc635781999ee8f08fda47fb12d90bad8b7c02d59e6fa3617badb760e0eb2.scope - libcontainer container 662fc635781999ee8f08fda47fb12d90bad8b7c02d59e6fa3617badb760e0eb2. Mar 13 01:14:22.535727 containerd[1592]: time="2026-03-13T01:14:22.535679381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8jwns,Uid:b5c3832e-bd72-469f-b5dc-0001a17cf6b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"4bbe5ea986a76e1f7415e391aed80f80d29e8cc41ccdb56b3e252343eb5e7e26\"" Mar 13 01:14:22.540393 containerd[1592]: time="2026-03-13T01:14:22.540336143Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 13 01:14:22.552250 containerd[1592]: time="2026-03-13T01:14:22.552203638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gmv87,Uid:b524c06c-75cb-4b22-b853-56cddd1b63c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"662fc635781999ee8f08fda47fb12d90bad8b7c02d59e6fa3617badb760e0eb2\"" Mar 13 01:14:22.558777 containerd[1592]: time="2026-03-13T01:14:22.558736363Z" level=info msg="CreateContainer within sandbox \"662fc635781999ee8f08fda47fb12d90bad8b7c02d59e6fa3617badb760e0eb2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 13 01:14:22.588430 containerd[1592]: time="2026-03-13T01:14:22.587528466Z" level=info msg="Container 880a4d74115d633281ddbcaeefcd2ba5466cab8871d151c6f631bccb42292948: CDI devices from CRI Config.CDIDevices: []" Mar 13 01:14:22.596849 containerd[1592]: time="2026-03-13T01:14:22.596260733Z" level=info msg="CreateContainer within sandbox \"662fc635781999ee8f08fda47fb12d90bad8b7c02d59e6fa3617badb760e0eb2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"880a4d74115d633281ddbcaeefcd2ba5466cab8871d151c6f631bccb42292948\"" Mar 13 01:14:22.598302 containerd[1592]: time="2026-03-13T01:14:22.598039030Z" level=info msg="StartContainer for \"880a4d74115d633281ddbcaeefcd2ba5466cab8871d151c6f631bccb42292948\"" Mar 13 01:14:22.600081 containerd[1592]: time="2026-03-13T01:14:22.600033419Z" level=info msg="connecting to shim 880a4d74115d633281ddbcaeefcd2ba5466cab8871d151c6f631bccb42292948" address="unix:///run/containerd/s/3cfdcad163a69fd1e88e17c5066b71ac38a23ed23c58a3eb581d1e42cb719829" protocol=ttrpc version=3 Mar 13 01:14:22.619193 containerd[1592]: time="2026-03-13T01:14:22.619142756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-m4lqn,Uid:f6912676-8faa-4d91-ba8a-9fd858e089d2,Namespace:kube-system,Attempt:0,}" Mar 13 01:14:22.631507 systemd[1]: Started cri-containerd-880a4d74115d633281ddbcaeefcd2ba5466cab8871d151c6f631bccb42292948.scope - libcontainer container 880a4d74115d633281ddbcaeefcd2ba5466cab8871d151c6f631bccb42292948. Mar 13 01:14:22.672250 containerd[1592]: time="2026-03-13T01:14:22.672181036Z" level=info msg="connecting to shim cef92ebf2aacc4bde1369039a5e641ddc83c4aac8c704e957fe38cdee40ced9c" address="unix:///run/containerd/s/4427e5fdc4a5ea5079d4eef9f9f750eacb7464fc126171e6538c82cd1d0df1cb" namespace=k8s.io protocol=ttrpc version=3 Mar 13 01:14:22.708492 systemd[1]: Started cri-containerd-cef92ebf2aacc4bde1369039a5e641ddc83c4aac8c704e957fe38cdee40ced9c.scope - libcontainer container cef92ebf2aacc4bde1369039a5e641ddc83c4aac8c704e957fe38cdee40ced9c. 
Mar 13 01:14:22.742070 containerd[1592]: time="2026-03-13T01:14:22.742005319Z" level=info msg="StartContainer for \"880a4d74115d633281ddbcaeefcd2ba5466cab8871d151c6f631bccb42292948\" returns successfully" Mar 13 01:14:22.821413 containerd[1592]: time="2026-03-13T01:14:22.821298227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-m4lqn,Uid:f6912676-8faa-4d91-ba8a-9fd858e089d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"cef92ebf2aacc4bde1369039a5e641ddc83c4aac8c704e957fe38cdee40ced9c\"" Mar 13 01:14:23.705180 kubelet[2884]: I0313 01:14:23.705063 2884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gmv87" podStartSLOduration=2.705032326 podStartE2EDuration="2.705032326s" podCreationTimestamp="2026-03-13 01:14:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:14:23.684566277 +0000 UTC m=+7.375297685" watchObservedRunningTime="2026-03-13 01:14:23.705032326 +0000 UTC m=+7.395763690" Mar 13 01:14:34.601163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2458656307.mount: Deactivated successfully. 
Mar 13 01:14:37.903038 containerd[1592]: time="2026-03-13T01:14:37.902944028Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 01:14:37.905831 containerd[1592]: time="2026-03-13T01:14:37.905791257Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 13 01:14:37.907288 containerd[1592]: time="2026-03-13T01:14:37.906895937Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 01:14:37.913645 containerd[1592]: time="2026-03-13T01:14:37.913590801Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 15.373086782s" Mar 13 01:14:37.913779 containerd[1592]: time="2026-03-13T01:14:37.913750678Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 13 01:14:37.915588 containerd[1592]: time="2026-03-13T01:14:37.915545655Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 13 01:14:37.935843 containerd[1592]: time="2026-03-13T01:14:37.935431071Z" level=info msg="CreateContainer within sandbox \"4bbe5ea986a76e1f7415e391aed80f80d29e8cc41ccdb56b3e252343eb5e7e26\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 13 01:14:37.993258 containerd[1592]: time="2026-03-13T01:14:37.993213421Z" level=info msg="Container aacd8481c886e009dbf3809901ef4cf8ce083da45975c4ec027e0ce7e6869702: CDI devices from CRI Config.CDIDevices: []" Mar 13 01:14:37.995159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4159118664.mount: Deactivated successfully. Mar 13 01:14:38.005284 containerd[1592]: time="2026-03-13T01:14:38.005232133Z" level=info msg="CreateContainer within sandbox \"4bbe5ea986a76e1f7415e391aed80f80d29e8cc41ccdb56b3e252343eb5e7e26\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"aacd8481c886e009dbf3809901ef4cf8ce083da45975c4ec027e0ce7e6869702\"" Mar 13 01:14:38.006387 containerd[1592]: time="2026-03-13T01:14:38.006121758Z" level=info msg="StartContainer for \"aacd8481c886e009dbf3809901ef4cf8ce083da45975c4ec027e0ce7e6869702\"" Mar 13 01:14:38.007865 containerd[1592]: time="2026-03-13T01:14:38.007742442Z" level=info msg="connecting to shim aacd8481c886e009dbf3809901ef4cf8ce083da45975c4ec027e0ce7e6869702" address="unix:///run/containerd/s/9184da5852b73359147110baa2b4b58cab510e9664c13f1a1e619ab5c03dda3c" protocol=ttrpc version=3 Mar 13 01:14:38.041521 systemd[1]: Started cri-containerd-aacd8481c886e009dbf3809901ef4cf8ce083da45975c4ec027e0ce7e6869702.scope - libcontainer container aacd8481c886e009dbf3809901ef4cf8ce083da45975c4ec027e0ce7e6869702. Mar 13 01:14:38.090039 containerd[1592]: time="2026-03-13T01:14:38.089918626Z" level=info msg="StartContainer for \"aacd8481c886e009dbf3809901ef4cf8ce083da45975c4ec027e0ce7e6869702\" returns successfully" Mar 13 01:14:38.106452 systemd[1]: cri-containerd-aacd8481c886e009dbf3809901ef4cf8ce083da45975c4ec027e0ce7e6869702.scope: Deactivated successfully. 
Mar 13 01:14:38.158134 containerd[1592]: time="2026-03-13T01:14:38.157984781Z" level=info msg="received container exit event container_id:\"aacd8481c886e009dbf3809901ef4cf8ce083da45975c4ec027e0ce7e6869702\" id:\"aacd8481c886e009dbf3809901ef4cf8ce083da45975c4ec027e0ce7e6869702\" pid:3306 exited_at:{seconds:1773364478 nanos:111766558}" Mar 13 01:14:38.198370 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aacd8481c886e009dbf3809901ef4cf8ce083da45975c4ec027e0ce7e6869702-rootfs.mount: Deactivated successfully. Mar 13 01:14:38.740468 containerd[1592]: time="2026-03-13T01:14:38.740380956Z" level=info msg="CreateContainer within sandbox \"4bbe5ea986a76e1f7415e391aed80f80d29e8cc41ccdb56b3e252343eb5e7e26\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 13 01:14:38.751924 containerd[1592]: time="2026-03-13T01:14:38.751262407Z" level=info msg="Container 8cb437919cee42afd4d4d4b9832db7af541887cd93be6c01ee4e872783665e81: CDI devices from CRI Config.CDIDevices: []" Mar 13 01:14:38.759958 containerd[1592]: time="2026-03-13T01:14:38.759924569Z" level=info msg="CreateContainer within sandbox \"4bbe5ea986a76e1f7415e391aed80f80d29e8cc41ccdb56b3e252343eb5e7e26\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8cb437919cee42afd4d4d4b9832db7af541887cd93be6c01ee4e872783665e81\"" Mar 13 01:14:38.761316 containerd[1592]: time="2026-03-13T01:14:38.761246570Z" level=info msg="StartContainer for \"8cb437919cee42afd4d4d4b9832db7af541887cd93be6c01ee4e872783665e81\"" Mar 13 01:14:38.763412 containerd[1592]: time="2026-03-13T01:14:38.763329578Z" level=info msg="connecting to shim 8cb437919cee42afd4d4d4b9832db7af541887cd93be6c01ee4e872783665e81" address="unix:///run/containerd/s/9184da5852b73359147110baa2b4b58cab510e9664c13f1a1e619ab5c03dda3c" protocol=ttrpc version=3 Mar 13 01:14:38.792933 systemd[1]: Started cri-containerd-8cb437919cee42afd4d4d4b9832db7af541887cd93be6c01ee4e872783665e81.scope - libcontainer 
container 8cb437919cee42afd4d4d4b9832db7af541887cd93be6c01ee4e872783665e81. Mar 13 01:14:38.841254 containerd[1592]: time="2026-03-13T01:14:38.841204184Z" level=info msg="StartContainer for \"8cb437919cee42afd4d4d4b9832db7af541887cd93be6c01ee4e872783665e81\" returns successfully" Mar 13 01:14:38.871550 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 13 01:14:38.872885 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 13 01:14:38.873444 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 13 01:14:38.876899 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 13 01:14:38.881042 systemd[1]: cri-containerd-8cb437919cee42afd4d4d4b9832db7af541887cd93be6c01ee4e872783665e81.scope: Deactivated successfully. Mar 13 01:14:38.883441 containerd[1592]: time="2026-03-13T01:14:38.883302222Z" level=info msg="received container exit event container_id:\"8cb437919cee42afd4d4d4b9832db7af541887cd93be6c01ee4e872783665e81\" id:\"8cb437919cee42afd4d4d4b9832db7af541887cd93be6c01ee4e872783665e81\" pid:3350 exited_at:{seconds:1773364478 nanos:880961489}" Mar 13 01:14:38.916247 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 13 01:14:39.759784 containerd[1592]: time="2026-03-13T01:14:39.759702555Z" level=info msg="CreateContainer within sandbox \"4bbe5ea986a76e1f7415e391aed80f80d29e8cc41ccdb56b3e252343eb5e7e26\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 13 01:14:39.768335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2056320074.mount: Deactivated successfully. 
Mar 13 01:14:39.827201 containerd[1592]: time="2026-03-13T01:14:39.826914948Z" level=info msg="Container b39d3a17b1b91700a6d9e58b3dd96241e986b35bca1773f70615db79c956e771: CDI devices from CRI Config.CDIDevices: []" Mar 13 01:14:39.837526 containerd[1592]: time="2026-03-13T01:14:39.837463194Z" level=info msg="CreateContainer within sandbox \"4bbe5ea986a76e1f7415e391aed80f80d29e8cc41ccdb56b3e252343eb5e7e26\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b39d3a17b1b91700a6d9e58b3dd96241e986b35bca1773f70615db79c956e771\"" Mar 13 01:14:39.838363 containerd[1592]: time="2026-03-13T01:14:39.838315532Z" level=info msg="StartContainer for \"b39d3a17b1b91700a6d9e58b3dd96241e986b35bca1773f70615db79c956e771\"" Mar 13 01:14:39.840836 containerd[1592]: time="2026-03-13T01:14:39.840764193Z" level=info msg="connecting to shim b39d3a17b1b91700a6d9e58b3dd96241e986b35bca1773f70615db79c956e771" address="unix:///run/containerd/s/9184da5852b73359147110baa2b4b58cab510e9664c13f1a1e619ab5c03dda3c" protocol=ttrpc version=3 Mar 13 01:14:39.869496 systemd[1]: Started cri-containerd-b39d3a17b1b91700a6d9e58b3dd96241e986b35bca1773f70615db79c956e771.scope - libcontainer container b39d3a17b1b91700a6d9e58b3dd96241e986b35bca1773f70615db79c956e771. Mar 13 01:14:39.992306 containerd[1592]: time="2026-03-13T01:14:39.990392247Z" level=info msg="StartContainer for \"b39d3a17b1b91700a6d9e58b3dd96241e986b35bca1773f70615db79c956e771\" returns successfully" Mar 13 01:14:39.996254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2923232741.mount: Deactivated successfully. 
Mar 13 01:14:40.000360 containerd[1592]: time="2026-03-13T01:14:40.000125148Z" level=info msg="received container exit event container_id:\"b39d3a17b1b91700a6d9e58b3dd96241e986b35bca1773f70615db79c956e771\" id:\"b39d3a17b1b91700a6d9e58b3dd96241e986b35bca1773f70615db79c956e771\" pid:3401 exited_at:{seconds:1773364479 nanos:999837794}" Mar 13 01:14:40.000913 systemd[1]: cri-containerd-b39d3a17b1b91700a6d9e58b3dd96241e986b35bca1773f70615db79c956e771.scope: Deactivated successfully. Mar 13 01:14:40.032893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b39d3a17b1b91700a6d9e58b3dd96241e986b35bca1773f70615db79c956e771-rootfs.mount: Deactivated successfully. Mar 13 01:14:40.787804 containerd[1592]: time="2026-03-13T01:14:40.787674603Z" level=info msg="CreateContainer within sandbox \"4bbe5ea986a76e1f7415e391aed80f80d29e8cc41ccdb56b3e252343eb5e7e26\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 13 01:14:40.810189 containerd[1592]: time="2026-03-13T01:14:40.809866352Z" level=info msg="Container d2bd82e3b61c712fc256a2baddab175975d1c03f55d68e3a7f4a3494e010457a: CDI devices from CRI Config.CDIDevices: []" Mar 13 01:14:40.834427 containerd[1592]: time="2026-03-13T01:14:40.834336286Z" level=info msg="CreateContainer within sandbox \"4bbe5ea986a76e1f7415e391aed80f80d29e8cc41ccdb56b3e252343eb5e7e26\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d2bd82e3b61c712fc256a2baddab175975d1c03f55d68e3a7f4a3494e010457a\"" Mar 13 01:14:40.837686 containerd[1592]: time="2026-03-13T01:14:40.836595593Z" level=info msg="StartContainer for \"d2bd82e3b61c712fc256a2baddab175975d1c03f55d68e3a7f4a3494e010457a\"" Mar 13 01:14:40.838016 containerd[1592]: time="2026-03-13T01:14:40.837966478Z" level=info msg="connecting to shim d2bd82e3b61c712fc256a2baddab175975d1c03f55d68e3a7f4a3494e010457a" address="unix:///run/containerd/s/9184da5852b73359147110baa2b4b58cab510e9664c13f1a1e619ab5c03dda3c" protocol=ttrpc version=3 Mar 13 
01:14:40.868494 systemd[1]: Started cri-containerd-d2bd82e3b61c712fc256a2baddab175975d1c03f55d68e3a7f4a3494e010457a.scope - libcontainer container d2bd82e3b61c712fc256a2baddab175975d1c03f55d68e3a7f4a3494e010457a. Mar 13 01:14:40.911226 systemd[1]: cri-containerd-d2bd82e3b61c712fc256a2baddab175975d1c03f55d68e3a7f4a3494e010457a.scope: Deactivated successfully. Mar 13 01:14:40.919641 containerd[1592]: time="2026-03-13T01:14:40.919574225Z" level=info msg="received container exit event container_id:\"d2bd82e3b61c712fc256a2baddab175975d1c03f55d68e3a7f4a3494e010457a\" id:\"d2bd82e3b61c712fc256a2baddab175975d1c03f55d68e3a7f4a3494e010457a\" pid:3444 exited_at:{seconds:1773364480 nanos:912615376}" Mar 13 01:14:40.933358 containerd[1592]: time="2026-03-13T01:14:40.933319828Z" level=info msg="StartContainer for \"d2bd82e3b61c712fc256a2baddab175975d1c03f55d68e3a7f4a3494e010457a\" returns successfully" Mar 13 01:14:40.991729 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2bd82e3b61c712fc256a2baddab175975d1c03f55d68e3a7f4a3494e010457a-rootfs.mount: Deactivated successfully. Mar 13 01:14:41.802164 containerd[1592]: time="2026-03-13T01:14:41.801711883Z" level=info msg="CreateContainer within sandbox \"4bbe5ea986a76e1f7415e391aed80f80d29e8cc41ccdb56b3e252343eb5e7e26\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 13 01:14:41.841497 containerd[1592]: time="2026-03-13T01:14:41.841444000Z" level=info msg="Container c1afde5e76f7fd7f743b78c5186189711fc1b6774b2928a73c0299f89a8cb313: CDI devices from CRI Config.CDIDevices: []" Mar 13 01:14:41.843833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2395950303.mount: Deactivated successfully. 
Mar 13 01:14:41.865846 containerd[1592]: time="2026-03-13T01:14:41.865600109Z" level=info msg="CreateContainer within sandbox \"4bbe5ea986a76e1f7415e391aed80f80d29e8cc41ccdb56b3e252343eb5e7e26\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c1afde5e76f7fd7f743b78c5186189711fc1b6774b2928a73c0299f89a8cb313\"" Mar 13 01:14:41.867680 containerd[1592]: time="2026-03-13T01:14:41.867428406Z" level=info msg="StartContainer for \"c1afde5e76f7fd7f743b78c5186189711fc1b6774b2928a73c0299f89a8cb313\"" Mar 13 01:14:41.872036 containerd[1592]: time="2026-03-13T01:14:41.871957018Z" level=info msg="connecting to shim c1afde5e76f7fd7f743b78c5186189711fc1b6774b2928a73c0299f89a8cb313" address="unix:///run/containerd/s/9184da5852b73359147110baa2b4b58cab510e9664c13f1a1e619ab5c03dda3c" protocol=ttrpc version=3 Mar 13 01:14:41.916500 systemd[1]: Started cri-containerd-c1afde5e76f7fd7f743b78c5186189711fc1b6774b2928a73c0299f89a8cb313.scope - libcontainer container c1afde5e76f7fd7f743b78c5186189711fc1b6774b2928a73c0299f89a8cb313. 
Mar 13 01:14:42.040402 containerd[1592]: time="2026-03-13T01:14:42.040349727Z" level=info msg="StartContainer for \"c1afde5e76f7fd7f743b78c5186189711fc1b6774b2928a73c0299f89a8cb313\" returns successfully" Mar 13 01:14:42.327062 containerd[1592]: time="2026-03-13T01:14:42.325708652Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 01:14:42.340075 containerd[1592]: time="2026-03-13T01:14:42.339384544Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 13 01:14:42.348407 containerd[1592]: time="2026-03-13T01:14:42.348335729Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 01:14:42.361212 containerd[1592]: time="2026-03-13T01:14:42.360950672Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.443951009s" Mar 13 01:14:42.361212 containerd[1592]: time="2026-03-13T01:14:42.361000699Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 13 01:14:42.368621 containerd[1592]: time="2026-03-13T01:14:42.368552705Z" level=info msg="CreateContainer within sandbox 
\"cef92ebf2aacc4bde1369039a5e641ddc83c4aac8c704e957fe38cdee40ced9c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 13 01:14:42.390441 containerd[1592]: time="2026-03-13T01:14:42.390400624Z" level=info msg="Container a371bb7f4a8e669ecc047f7546e69d724aa668cb16010d33f8f839398dc7728e: CDI devices from CRI Config.CDIDevices: []" Mar 13 01:14:42.401770 kubelet[2884]: I0313 01:14:42.401729 2884 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 13 01:14:42.405761 containerd[1592]: time="2026-03-13T01:14:42.405665937Z" level=info msg="CreateContainer within sandbox \"cef92ebf2aacc4bde1369039a5e641ddc83c4aac8c704e957fe38cdee40ced9c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a371bb7f4a8e669ecc047f7546e69d724aa668cb16010d33f8f839398dc7728e\"" Mar 13 01:14:42.407143 containerd[1592]: time="2026-03-13T01:14:42.406286441Z" level=info msg="StartContainer for \"a371bb7f4a8e669ecc047f7546e69d724aa668cb16010d33f8f839398dc7728e\"" Mar 13 01:14:42.408063 containerd[1592]: time="2026-03-13T01:14:42.408013003Z" level=info msg="connecting to shim a371bb7f4a8e669ecc047f7546e69d724aa668cb16010d33f8f839398dc7728e" address="unix:///run/containerd/s/4427e5fdc4a5ea5079d4eef9f9f750eacb7464fc126171e6538c82cd1d0df1cb" protocol=ttrpc version=3 Mar 13 01:14:42.450711 systemd[1]: Started cri-containerd-a371bb7f4a8e669ecc047f7546e69d724aa668cb16010d33f8f839398dc7728e.scope - libcontainer container a371bb7f4a8e669ecc047f7546e69d724aa668cb16010d33f8f839398dc7728e. Mar 13 01:14:42.486848 systemd[1]: Created slice kubepods-burstable-pod377d17d4_f6fe_4e5f_94dc_b46855d0fd74.slice - libcontainer container kubepods-burstable-pod377d17d4_f6fe_4e5f_94dc_b46855d0fd74.slice. Mar 13 01:14:42.495880 systemd[1]: Created slice kubepods-burstable-podebda3901_f74b_4345_a63f_eb8f006128e2.slice - libcontainer container kubepods-burstable-podebda3901_f74b_4345_a63f_eb8f006128e2.slice. 
Mar 13 01:14:42.526740 kubelet[2884]: I0313 01:14:42.526694 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ckfm\" (UniqueName: \"kubernetes.io/projected/377d17d4-f6fe-4e5f-94dc-b46855d0fd74-kube-api-access-8ckfm\") pod \"coredns-674b8bbfcf-9fwtr\" (UID: \"377d17d4-f6fe-4e5f-94dc-b46855d0fd74\") " pod="kube-system/coredns-674b8bbfcf-9fwtr" Mar 13 01:14:42.527133 kubelet[2884]: I0313 01:14:42.526956 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebda3901-f74b-4345-a63f-eb8f006128e2-config-volume\") pod \"coredns-674b8bbfcf-shhjp\" (UID: \"ebda3901-f74b-4345-a63f-eb8f006128e2\") " pod="kube-system/coredns-674b8bbfcf-shhjp" Mar 13 01:14:42.527133 kubelet[2884]: I0313 01:14:42.527026 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zlf8\" (UniqueName: \"kubernetes.io/projected/ebda3901-f74b-4345-a63f-eb8f006128e2-kube-api-access-8zlf8\") pod \"coredns-674b8bbfcf-shhjp\" (UID: \"ebda3901-f74b-4345-a63f-eb8f006128e2\") " pod="kube-system/coredns-674b8bbfcf-shhjp" Mar 13 01:14:42.527133 kubelet[2884]: I0313 01:14:42.527073 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/377d17d4-f6fe-4e5f-94dc-b46855d0fd74-config-volume\") pod \"coredns-674b8bbfcf-9fwtr\" (UID: \"377d17d4-f6fe-4e5f-94dc-b46855d0fd74\") " pod="kube-system/coredns-674b8bbfcf-9fwtr" Mar 13 01:14:42.562930 containerd[1592]: time="2026-03-13T01:14:42.562879246Z" level=info msg="StartContainer for \"a371bb7f4a8e669ecc047f7546e69d724aa668cb16010d33f8f839398dc7728e\" returns successfully" Mar 13 01:14:42.798688 containerd[1592]: time="2026-03-13T01:14:42.798184037Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-9fwtr,Uid:377d17d4-f6fe-4e5f-94dc-b46855d0fd74,Namespace:kube-system,Attempt:0,}" Mar 13 01:14:42.807235 containerd[1592]: time="2026-03-13T01:14:42.807195164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-shhjp,Uid:ebda3901-f74b-4345-a63f-eb8f006128e2,Namespace:kube-system,Attempt:0,}" Mar 13 01:14:42.892902 kubelet[2884]: I0313 01:14:42.892503 2884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8jwns" podStartSLOduration=6.517188468 podStartE2EDuration="21.892473856s" podCreationTimestamp="2026-03-13 01:14:21 +0000 UTC" firstStartedPulling="2026-03-13 01:14:22.539919572 +0000 UTC m=+6.230650930" lastFinishedPulling="2026-03-13 01:14:37.915204967 +0000 UTC m=+21.605936318" observedRunningTime="2026-03-13 01:14:42.886698551 +0000 UTC m=+26.577429952" watchObservedRunningTime="2026-03-13 01:14:42.892473856 +0000 UTC m=+26.583205233" Mar 13 01:14:45.465541 systemd-networkd[1493]: cilium_host: Link UP Mar 13 01:14:45.465850 systemd-networkd[1493]: cilium_net: Link UP Mar 13 01:14:45.466187 systemd-networkd[1493]: cilium_net: Gained carrier Mar 13 01:14:45.467699 systemd-networkd[1493]: cilium_host: Gained carrier Mar 13 01:14:45.639174 systemd-networkd[1493]: cilium_vxlan: Link UP Mar 13 01:14:45.639186 systemd-networkd[1493]: cilium_vxlan: Gained carrier Mar 13 01:14:46.073505 systemd-networkd[1493]: cilium_net: Gained IPv6LL Mar 13 01:14:46.328565 kernel: NET: Registered PF_ALG protocol family Mar 13 01:14:46.330011 systemd-networkd[1493]: cilium_host: Gained IPv6LL Mar 13 01:14:46.777476 systemd-networkd[1493]: cilium_vxlan: Gained IPv6LL Mar 13 01:14:47.494726 systemd-networkd[1493]: lxc_health: Link UP Mar 13 01:14:47.503380 systemd-networkd[1493]: lxc_health: Gained carrier Mar 13 01:14:47.997421 systemd-networkd[1493]: lxcbc649b1968c7: Link UP Mar 13 01:14:48.005330 kernel: eth0: renamed from tmp6096d Mar 13 01:14:48.010584 
systemd-networkd[1493]: lxcbc649b1968c7: Gained carrier Mar 13 01:14:48.032293 kernel: eth0: renamed from tmp7d8c9 Mar 13 01:14:48.036056 systemd-networkd[1493]: lxc3598278e656b: Link UP Mar 13 01:14:48.044754 systemd-networkd[1493]: lxc3598278e656b: Gained carrier Mar 13 01:14:48.429015 kubelet[2884]: I0313 01:14:48.427015 2884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-m4lqn" podStartSLOduration=6.885509435 podStartE2EDuration="26.425723957s" podCreationTimestamp="2026-03-13 01:14:22 +0000 UTC" firstStartedPulling="2026-03-13 01:14:22.822731675 +0000 UTC m=+6.513463028" lastFinishedPulling="2026-03-13 01:14:42.362946192 +0000 UTC m=+26.053677550" observedRunningTime="2026-03-13 01:14:42.954478237 +0000 UTC m=+26.645209608" watchObservedRunningTime="2026-03-13 01:14:48.425723957 +0000 UTC m=+32.116455333" Mar 13 01:14:49.209671 systemd-networkd[1493]: lxc_health: Gained IPv6LL Mar 13 01:14:49.785525 systemd-networkd[1493]: lxc3598278e656b: Gained IPv6LL Mar 13 01:14:49.913520 systemd-networkd[1493]: lxcbc649b1968c7: Gained IPv6LL Mar 13 01:14:54.017024 containerd[1592]: time="2026-03-13T01:14:54.016860282Z" level=info msg="connecting to shim 7d8c976cbf9986690215279965aafa8d018cc3020e55c278116892002a2da253" address="unix:///run/containerd/s/d4224a5bdbeeaaacb07172a2a602d9e2f7ce35638cff8625238639ba667b4782" namespace=k8s.io protocol=ttrpc version=3 Mar 13 01:14:54.022924 containerd[1592]: time="2026-03-13T01:14:54.022400871Z" level=info msg="connecting to shim 6096d3afe72867420b5f7b92b2c94facf9686cd57040114f33d52df642e157e6" address="unix:///run/containerd/s/2cb363e75e46d0dca216e83cf7d8d315ee52c88dcf08da6bb23c08bbb8f44970" namespace=k8s.io protocol=ttrpc version=3 Mar 13 01:14:54.116647 systemd[1]: Started cri-containerd-7d8c976cbf9986690215279965aafa8d018cc3020e55c278116892002a2da253.scope - libcontainer container 7d8c976cbf9986690215279965aafa8d018cc3020e55c278116892002a2da253. 
Mar 13 01:14:54.129677 systemd[1]: Started cri-containerd-6096d3afe72867420b5f7b92b2c94facf9686cd57040114f33d52df642e157e6.scope - libcontainer container 6096d3afe72867420b5f7b92b2c94facf9686cd57040114f33d52df642e157e6.
Mar 13 01:14:54.230695 containerd[1592]: time="2026-03-13T01:14:54.230647708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-shhjp,Uid:ebda3901-f74b-4345-a63f-eb8f006128e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d8c976cbf9986690215279965aafa8d018cc3020e55c278116892002a2da253\""
Mar 13 01:14:54.237235 containerd[1592]: time="2026-03-13T01:14:54.237111898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9fwtr,Uid:377d17d4-f6fe-4e5f-94dc-b46855d0fd74,Namespace:kube-system,Attempt:0,} returns sandbox id \"6096d3afe72867420b5f7b92b2c94facf9686cd57040114f33d52df642e157e6\""
Mar 13 01:14:54.244039 containerd[1592]: time="2026-03-13T01:14:54.243736716Z" level=info msg="CreateContainer within sandbox \"7d8c976cbf9986690215279965aafa8d018cc3020e55c278116892002a2da253\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 13 01:14:54.247299 containerd[1592]: time="2026-03-13T01:14:54.247060740Z" level=info msg="CreateContainer within sandbox \"6096d3afe72867420b5f7b92b2c94facf9686cd57040114f33d52df642e157e6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 13 01:14:54.268986 containerd[1592]: time="2026-03-13T01:14:54.268440631Z" level=info msg="Container 084fb276640ac4d5e500190246daf61acf5b1c9bd19254163e09a710706b2497: CDI devices from CRI Config.CDIDevices: []"
Mar 13 01:14:54.272314 containerd[1592]: time="2026-03-13T01:14:54.272211160Z" level=info msg="Container 775477c6100e45b1561b98151324e426c9cc32aaa878a5e8ca8ed41366448fd7: CDI devices from CRI Config.CDIDevices: []"
Mar 13 01:14:54.315833 containerd[1592]: time="2026-03-13T01:14:54.315613605Z" level=info msg="CreateContainer within sandbox \"7d8c976cbf9986690215279965aafa8d018cc3020e55c278116892002a2da253\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"084fb276640ac4d5e500190246daf61acf5b1c9bd19254163e09a710706b2497\""
Mar 13 01:14:54.316208 containerd[1592]: time="2026-03-13T01:14:54.316145641Z" level=info msg="CreateContainer within sandbox \"6096d3afe72867420b5f7b92b2c94facf9686cd57040114f33d52df642e157e6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"775477c6100e45b1561b98151324e426c9cc32aaa878a5e8ca8ed41366448fd7\""
Mar 13 01:14:54.316593 containerd[1592]: time="2026-03-13T01:14:54.316553048Z" level=info msg="StartContainer for \"084fb276640ac4d5e500190246daf61acf5b1c9bd19254163e09a710706b2497\""
Mar 13 01:14:54.317224 containerd[1592]: time="2026-03-13T01:14:54.317189022Z" level=info msg="StartContainer for \"775477c6100e45b1561b98151324e426c9cc32aaa878a5e8ca8ed41366448fd7\""
Mar 13 01:14:54.319241 containerd[1592]: time="2026-03-13T01:14:54.319202237Z" level=info msg="connecting to shim 775477c6100e45b1561b98151324e426c9cc32aaa878a5e8ca8ed41366448fd7" address="unix:///run/containerd/s/2cb363e75e46d0dca216e83cf7d8d315ee52c88dcf08da6bb23c08bbb8f44970" protocol=ttrpc version=3
Mar 13 01:14:54.320080 containerd[1592]: time="2026-03-13T01:14:54.319992786Z" level=info msg="connecting to shim 084fb276640ac4d5e500190246daf61acf5b1c9bd19254163e09a710706b2497" address="unix:///run/containerd/s/d4224a5bdbeeaaacb07172a2a602d9e2f7ce35638cff8625238639ba667b4782" protocol=ttrpc version=3
Mar 13 01:14:54.355686 systemd[1]: Started cri-containerd-775477c6100e45b1561b98151324e426c9cc32aaa878a5e8ca8ed41366448fd7.scope - libcontainer container 775477c6100e45b1561b98151324e426c9cc32aaa878a5e8ca8ed41366448fd7.
Mar 13 01:14:54.372611 systemd[1]: Started cri-containerd-084fb276640ac4d5e500190246daf61acf5b1c9bd19254163e09a710706b2497.scope - libcontainer container 084fb276640ac4d5e500190246daf61acf5b1c9bd19254163e09a710706b2497.
Mar 13 01:14:54.449003 containerd[1592]: time="2026-03-13T01:14:54.448845087Z" level=info msg="StartContainer for \"775477c6100e45b1561b98151324e426c9cc32aaa878a5e8ca8ed41366448fd7\" returns successfully"
Mar 13 01:14:54.468045 containerd[1592]: time="2026-03-13T01:14:54.467962411Z" level=info msg="StartContainer for \"084fb276640ac4d5e500190246daf61acf5b1c9bd19254163e09a710706b2497\" returns successfully"
Mar 13 01:14:54.874604 kubelet[2884]: I0313 01:14:54.874321 2884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-9fwtr" podStartSLOduration=32.874085371 podStartE2EDuration="32.874085371s" podCreationTimestamp="2026-03-13 01:14:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:14:54.871304625 +0000 UTC m=+38.562036027" watchObservedRunningTime="2026-03-13 01:14:54.874085371 +0000 UTC m=+38.564816738"
Mar 13 01:14:54.895418 kubelet[2884]: I0313 01:14:54.895321 2884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-shhjp" podStartSLOduration=32.895300471 podStartE2EDuration="32.895300471s" podCreationTimestamp="2026-03-13 01:14:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:14:54.891011546 +0000 UTC m=+38.581742937" watchObservedRunningTime="2026-03-13 01:14:54.895300471 +0000 UTC m=+38.586031841"
Mar 13 01:14:54.988765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3890025331.mount: Deactivated successfully.
Mar 13 01:15:27.762678 systemd[1]: Started sshd@9-10.230.35.114:22-20.161.92.111:51982.service - OpenSSH per-connection server daemon (20.161.92.111:51982).
Mar 13 01:15:28.339371 sshd[4206]: Accepted publickey for core from 20.161.92.111 port 51982 ssh2: RSA SHA256:hm429P+TX+Ex43UNU1B3y8+MqFp75xtA6UPwehKFPZY
Mar 13 01:15:28.341784 sshd-session[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 01:15:28.360404 systemd-logind[1572]: New session 12 of user core.
Mar 13 01:15:28.364544 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 13 01:15:29.197397 sshd[4209]: Connection closed by 20.161.92.111 port 51982
Mar 13 01:15:29.197838 sshd-session[4206]: pam_unix(sshd:session): session closed for user core
Mar 13 01:15:29.203561 systemd[1]: sshd@9-10.230.35.114:22-20.161.92.111:51982.service: Deactivated successfully.
Mar 13 01:15:29.206648 systemd[1]: session-12.scope: Deactivated successfully.
Mar 13 01:15:29.212509 systemd-logind[1572]: Session 12 logged out. Waiting for processes to exit.
Mar 13 01:15:29.214213 systemd-logind[1572]: Removed session 12.
Mar 13 01:15:34.307126 systemd[1]: Started sshd@10-10.230.35.114:22-20.161.92.111:41100.service - OpenSSH per-connection server daemon (20.161.92.111:41100).
Mar 13 01:15:34.847635 sshd[4222]: Accepted publickey for core from 20.161.92.111 port 41100 ssh2: RSA SHA256:hm429P+TX+Ex43UNU1B3y8+MqFp75xtA6UPwehKFPZY
Mar 13 01:15:34.849525 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 01:15:34.859348 systemd-logind[1572]: New session 13 of user core.
Mar 13 01:15:34.866492 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 13 01:15:35.251080 sshd[4225]: Connection closed by 20.161.92.111 port 41100
Mar 13 01:15:35.251954 sshd-session[4222]: pam_unix(sshd:session): session closed for user core
Mar 13 01:15:35.257762 systemd[1]: sshd@10-10.230.35.114:22-20.161.92.111:41100.service: Deactivated successfully.
Mar 13 01:15:35.260924 systemd[1]: session-13.scope: Deactivated successfully.
Mar 13 01:15:35.262676 systemd-logind[1572]: Session 13 logged out. Waiting for processes to exit.
Mar 13 01:15:35.264859 systemd-logind[1572]: Removed session 13.
Mar 13 01:15:40.355710 systemd[1]: Started sshd@11-10.230.35.114:22-20.161.92.111:53756.service - OpenSSH per-connection server daemon (20.161.92.111:53756).
Mar 13 01:15:40.872311 sshd[4238]: Accepted publickey for core from 20.161.92.111 port 53756 ssh2: RSA SHA256:hm429P+TX+Ex43UNU1B3y8+MqFp75xtA6UPwehKFPZY
Mar 13 01:15:40.873723 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 01:15:40.881809 systemd-logind[1572]: New session 14 of user core.
Mar 13 01:15:40.894546 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 13 01:15:41.249676 sshd[4241]: Connection closed by 20.161.92.111 port 53756
Mar 13 01:15:41.251128 sshd-session[4238]: pam_unix(sshd:session): session closed for user core
Mar 13 01:15:41.262441 systemd[1]: sshd@11-10.230.35.114:22-20.161.92.111:53756.service: Deactivated successfully.
Mar 13 01:15:41.266898 systemd[1]: session-14.scope: Deactivated successfully.
Mar 13 01:15:41.268464 systemd-logind[1572]: Session 14 logged out. Waiting for processes to exit.
Mar 13 01:15:41.270743 systemd-logind[1572]: Removed session 14.
Mar 13 01:15:42.013305 update_engine[1574]: I20260313 01:15:42.012912 1574 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Mar 13 01:15:42.013305 update_engine[1574]: I20260313 01:15:42.013020 1574 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Mar 13 01:15:42.017112 update_engine[1574]: I20260313 01:15:42.016788 1574 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Mar 13 01:15:42.017770 update_engine[1574]: I20260313 01:15:42.017735 1574 omaha_request_params.cc:62] Current group set to stable
Mar 13 01:15:42.019291 update_engine[1574]: I20260313 01:15:42.018375 1574 update_attempter.cc:499] Already updated boot flags. Skipping.
Mar 13 01:15:42.019291 update_engine[1574]: I20260313 01:15:42.018404 1574 update_attempter.cc:643] Scheduling an action processor start.
Mar 13 01:15:42.019291 update_engine[1574]: I20260313 01:15:42.018464 1574 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 13 01:15:42.019291 update_engine[1574]: I20260313 01:15:42.018554 1574 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Mar 13 01:15:42.019291 update_engine[1574]: I20260313 01:15:42.018664 1574 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 13 01:15:42.019291 update_engine[1574]: I20260313 01:15:42.018683 1574 omaha_request_action.cc:272] Request:
Mar 13 01:15:42.019291 update_engine[1574]:
Mar 13 01:15:42.019291 update_engine[1574]:
Mar 13 01:15:42.019291 update_engine[1574]:
Mar 13 01:15:42.019291 update_engine[1574]:
Mar 13 01:15:42.019291 update_engine[1574]:
Mar 13 01:15:42.019291 update_engine[1574]:
Mar 13 01:15:42.019291 update_engine[1574]:
Mar 13 01:15:42.019291 update_engine[1574]:
Mar 13 01:15:42.019291 update_engine[1574]: I20260313 01:15:42.018694 1574 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 13 01:15:42.024905 update_engine[1574]: I20260313 01:15:42.024868 1574 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 13 01:15:42.025851 update_engine[1574]: I20260313 01:15:42.025792 1574 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 13 01:15:42.034928 locksmithd[1610]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Mar 13 01:15:42.043200 update_engine[1574]: E20260313 01:15:42.042972 1574 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 13 01:15:42.043433 update_engine[1574]: I20260313 01:15:42.043380 1574 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Mar 13 01:15:46.358653 systemd[1]: Started sshd@12-10.230.35.114:22-20.161.92.111:53768.service - OpenSSH per-connection server daemon (20.161.92.111:53768).
Mar 13 01:15:46.885824 sshd[4254]: Accepted publickey for core from 20.161.92.111 port 53768 ssh2: RSA SHA256:hm429P+TX+Ex43UNU1B3y8+MqFp75xtA6UPwehKFPZY
Mar 13 01:15:46.887641 sshd-session[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 01:15:46.900509 systemd-logind[1572]: New session 15 of user core.
Mar 13 01:15:46.909645 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 13 01:15:47.303556 sshd[4257]: Connection closed by 20.161.92.111 port 53768
Mar 13 01:15:47.303431 sshd-session[4254]: pam_unix(sshd:session): session closed for user core
Mar 13 01:15:47.311766 systemd[1]: sshd@12-10.230.35.114:22-20.161.92.111:53768.service: Deactivated successfully.
Mar 13 01:15:47.315197 systemd[1]: session-15.scope: Deactivated successfully.
Mar 13 01:15:47.318437 systemd-logind[1572]: Session 15 logged out. Waiting for processes to exit.
Mar 13 01:15:47.321967 systemd-logind[1572]: Removed session 15.
Mar 13 01:15:47.412709 systemd[1]: Started sshd@13-10.230.35.114:22-20.161.92.111:53784.service - OpenSSH per-connection server daemon (20.161.92.111:53784).
Mar 13 01:15:47.910206 sshd[4269]: Accepted publickey for core from 20.161.92.111 port 53784 ssh2: RSA SHA256:hm429P+TX+Ex43UNU1B3y8+MqFp75xtA6UPwehKFPZY
Mar 13 01:15:47.912566 sshd-session[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 01:15:47.921337 systemd-logind[1572]: New session 16 of user core.
Mar 13 01:15:47.926491 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 13 01:15:48.365658 sshd[4272]: Connection closed by 20.161.92.111 port 53784
Mar 13 01:15:48.364847 sshd-session[4269]: pam_unix(sshd:session): session closed for user core
Mar 13 01:15:48.370712 systemd-logind[1572]: Session 16 logged out. Waiting for processes to exit.
Mar 13 01:15:48.371192 systemd[1]: sshd@13-10.230.35.114:22-20.161.92.111:53784.service: Deactivated successfully.
Mar 13 01:15:48.374659 systemd[1]: session-16.scope: Deactivated successfully.
Mar 13 01:15:48.377934 systemd-logind[1572]: Removed session 16.
Mar 13 01:15:48.465912 systemd[1]: Started sshd@14-10.230.35.114:22-20.161.92.111:53792.service - OpenSSH per-connection server daemon (20.161.92.111:53792).
Mar 13 01:15:48.960452 sshd[4282]: Accepted publickey for core from 20.161.92.111 port 53792 ssh2: RSA SHA256:hm429P+TX+Ex43UNU1B3y8+MqFp75xtA6UPwehKFPZY
Mar 13 01:15:48.961252 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 01:15:48.968366 systemd-logind[1572]: New session 17 of user core.
Mar 13 01:15:48.978569 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 13 01:15:49.339602 sshd[4287]: Connection closed by 20.161.92.111 port 53792
Mar 13 01:15:49.341582 sshd-session[4282]: pam_unix(sshd:session): session closed for user core
Mar 13 01:15:49.347812 systemd[1]: sshd@14-10.230.35.114:22-20.161.92.111:53792.service: Deactivated successfully.
Mar 13 01:15:49.351172 systemd[1]: session-17.scope: Deactivated successfully.
Mar 13 01:15:49.356207 systemd-logind[1572]: Session 17 logged out. Waiting for processes to exit.
Mar 13 01:15:49.358118 systemd-logind[1572]: Removed session 17.
Mar 13 01:15:51.968439 update_engine[1574]: I20260313 01:15:51.967960 1574 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 13 01:15:51.968439 update_engine[1574]: I20260313 01:15:51.968115 1574 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 13 01:15:51.969066 update_engine[1574]: I20260313 01:15:51.968709 1574 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 13 01:15:51.969402 update_engine[1574]: E20260313 01:15:51.969357 1574 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 13 01:15:51.969560 update_engine[1574]: I20260313 01:15:51.969511 1574 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Mar 13 01:15:54.447617 systemd[1]: Started sshd@15-10.230.35.114:22-20.161.92.111:52078.service - OpenSSH per-connection server daemon (20.161.92.111:52078).
Mar 13 01:15:54.945491 sshd[4303]: Accepted publickey for core from 20.161.92.111 port 52078 ssh2: RSA SHA256:hm429P+TX+Ex43UNU1B3y8+MqFp75xtA6UPwehKFPZY
Mar 13 01:15:54.947177 sshd-session[4303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 01:15:54.954342 systemd-logind[1572]: New session 18 of user core.
Mar 13 01:15:54.959532 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 13 01:15:55.314378 sshd[4306]: Connection closed by 20.161.92.111 port 52078
Mar 13 01:15:55.315249 sshd-session[4303]: pam_unix(sshd:session): session closed for user core
Mar 13 01:15:55.321353 systemd[1]: sshd@15-10.230.35.114:22-20.161.92.111:52078.service: Deactivated successfully.
Mar 13 01:15:55.324194 systemd[1]: session-18.scope: Deactivated successfully.
Mar 13 01:15:55.326098 systemd-logind[1572]: Session 18 logged out. Waiting for processes to exit.
Mar 13 01:15:55.328105 systemd-logind[1572]: Removed session 18.
Mar 13 01:16:00.416178 systemd[1]: Started sshd@16-10.230.35.114:22-20.161.92.111:47050.service - OpenSSH per-connection server daemon (20.161.92.111:47050).
Mar 13 01:16:00.918049 sshd[4318]: Accepted publickey for core from 20.161.92.111 port 47050 ssh2: RSA SHA256:hm429P+TX+Ex43UNU1B3y8+MqFp75xtA6UPwehKFPZY
Mar 13 01:16:00.919989 sshd-session[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 01:16:00.927190 systemd-logind[1572]: New session 19 of user core.
Mar 13 01:16:00.938596 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 13 01:16:01.394341 sshd[4321]: Connection closed by 20.161.92.111 port 47050
Mar 13 01:16:01.395109 sshd-session[4318]: pam_unix(sshd:session): session closed for user core
Mar 13 01:16:01.402363 systemd[1]: sshd@16-10.230.35.114:22-20.161.92.111:47050.service: Deactivated successfully.
Mar 13 01:16:01.406138 systemd[1]: session-19.scope: Deactivated successfully.
Mar 13 01:16:01.408347 systemd-logind[1572]: Session 19 logged out. Waiting for processes to exit.
Mar 13 01:16:01.410535 systemd-logind[1572]: Removed session 19.
Mar 13 01:16:01.503977 systemd[1]: Started sshd@17-10.230.35.114:22-20.161.92.111:47062.service - OpenSSH per-connection server daemon (20.161.92.111:47062).
Mar 13 01:16:01.966353 update_engine[1574]: I20260313 01:16:01.965902 1574 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 13 01:16:01.966353 update_engine[1574]: I20260313 01:16:01.966053 1574 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 13 01:16:01.967019 update_engine[1574]: I20260313 01:16:01.966694 1574 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 13 01:16:01.967446 update_engine[1574]: E20260313 01:16:01.967397 1574 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 13 01:16:01.967529 update_engine[1574]: I20260313 01:16:01.967501 1574 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Mar 13 01:16:02.005606 sshd[4333]: Accepted publickey for core from 20.161.92.111 port 47062 ssh2: RSA SHA256:hm429P+TX+Ex43UNU1B3y8+MqFp75xtA6UPwehKFPZY
Mar 13 01:16:02.007285 sshd-session[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 01:16:02.015287 systemd-logind[1572]: New session 20 of user core.
Mar 13 01:16:02.022540 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 13 01:16:02.613480 sshd[4336]: Connection closed by 20.161.92.111 port 47062
Mar 13 01:16:02.613724 sshd-session[4333]: pam_unix(sshd:session): session closed for user core
Mar 13 01:16:02.619719 systemd[1]: sshd@17-10.230.35.114:22-20.161.92.111:47062.service: Deactivated successfully.
Mar 13 01:16:02.622940 systemd[1]: session-20.scope: Deactivated successfully.
Mar 13 01:16:02.624992 systemd-logind[1572]: Session 20 logged out. Waiting for processes to exit.
Mar 13 01:16:02.627042 systemd-logind[1572]: Removed session 20.
Mar 13 01:16:02.714226 systemd[1]: Started sshd@18-10.230.35.114:22-20.161.92.111:47070.service - OpenSSH per-connection server daemon (20.161.92.111:47070).
Mar 13 01:16:03.258208 sshd[4346]: Accepted publickey for core from 20.161.92.111 port 47070 ssh2: RSA SHA256:hm429P+TX+Ex43UNU1B3y8+MqFp75xtA6UPwehKFPZY
Mar 13 01:16:03.259970 sshd-session[4346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 01:16:03.266867 systemd-logind[1572]: New session 21 of user core.
Mar 13 01:16:03.274539 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 13 01:16:04.338843 sshd[4349]: Connection closed by 20.161.92.111 port 47070
Mar 13 01:16:04.339779 sshd-session[4346]: pam_unix(sshd:session): session closed for user core
Mar 13 01:16:04.346505 systemd-logind[1572]: Session 21 logged out. Waiting for processes to exit.
Mar 13 01:16:04.347531 systemd[1]: sshd@18-10.230.35.114:22-20.161.92.111:47070.service: Deactivated successfully.
Mar 13 01:16:04.351683 systemd[1]: session-21.scope: Deactivated successfully.
Mar 13 01:16:04.355091 systemd-logind[1572]: Removed session 21.
Mar 13 01:16:04.478344 systemd[1]: Started sshd@19-10.230.35.114:22-20.161.92.111:47072.service - OpenSSH per-connection server daemon (20.161.92.111:47072).
Mar 13 01:16:04.996888 sshd[4365]: Accepted publickey for core from 20.161.92.111 port 47072 ssh2: RSA SHA256:hm429P+TX+Ex43UNU1B3y8+MqFp75xtA6UPwehKFPZY
Mar 13 01:16:04.999040 sshd-session[4365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 01:16:05.007165 systemd-logind[1572]: New session 22 of user core.
Mar 13 01:16:05.020211 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 13 01:16:05.580975 sshd[4368]: Connection closed by 20.161.92.111 port 47072
Mar 13 01:16:05.582973 sshd-session[4365]: pam_unix(sshd:session): session closed for user core
Mar 13 01:16:05.589824 systemd-logind[1572]: Session 22 logged out. Waiting for processes to exit.
Mar 13 01:16:05.591151 systemd[1]: sshd@19-10.230.35.114:22-20.161.92.111:47072.service: Deactivated successfully.
Mar 13 01:16:05.595189 systemd[1]: session-22.scope: Deactivated successfully.
Mar 13 01:16:05.598462 systemd-logind[1572]: Removed session 22.
Mar 13 01:16:05.681332 systemd[1]: Started sshd@20-10.230.35.114:22-20.161.92.111:47082.service - OpenSSH per-connection server daemon (20.161.92.111:47082).
Mar 13 01:16:06.192126 sshd[4377]: Accepted publickey for core from 20.161.92.111 port 47082 ssh2: RSA SHA256:hm429P+TX+Ex43UNU1B3y8+MqFp75xtA6UPwehKFPZY
Mar 13 01:16:06.194458 sshd-session[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 01:16:06.202468 systemd-logind[1572]: New session 23 of user core.
Mar 13 01:16:06.210619 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 13 01:16:06.568646 sshd[4380]: Connection closed by 20.161.92.111 port 47082
Mar 13 01:16:06.569560 sshd-session[4377]: pam_unix(sshd:session): session closed for user core
Mar 13 01:16:06.574907 systemd[1]: sshd@20-10.230.35.114:22-20.161.92.111:47082.service: Deactivated successfully.
Mar 13 01:16:06.578250 systemd[1]: session-23.scope: Deactivated successfully.
Mar 13 01:16:06.580173 systemd-logind[1572]: Session 23 logged out. Waiting for processes to exit.
Mar 13 01:16:06.583003 systemd-logind[1572]: Removed session 23.
Mar 13 01:16:11.670927 systemd[1]: Started sshd@21-10.230.35.114:22-20.161.92.111:45546.service - OpenSSH per-connection server daemon (20.161.92.111:45546).
Mar 13 01:16:11.971047 update_engine[1574]: I20260313 01:16:11.970804 1574 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 13 01:16:11.971047 update_engine[1574]: I20260313 01:16:11.970920 1574 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 13 01:16:11.971636 update_engine[1574]: I20260313 01:16:11.971428 1574 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 13 01:16:11.972121 update_engine[1574]: E20260313 01:16:11.972074 1574 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 13 01:16:11.972208 update_engine[1574]: I20260313 01:16:11.972172 1574 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 13 01:16:11.972208 update_engine[1574]: I20260313 01:16:11.972190 1574 omaha_request_action.cc:617] Omaha request response:
Mar 13 01:16:11.972342 update_engine[1574]: E20260313 01:16:11.972316 1574 omaha_request_action.cc:636] Omaha request network transfer failed.
Mar 13 01:16:11.976775 update_engine[1574]: I20260313 01:16:11.975976 1574 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Mar 13 01:16:11.976775 update_engine[1574]: I20260313 01:16:11.976007 1574 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 13 01:16:11.976775 update_engine[1574]: I20260313 01:16:11.976020 1574 update_attempter.cc:306] Processing Done.
Mar 13 01:16:11.976775 update_engine[1574]: E20260313 01:16:11.976040 1574 update_attempter.cc:619] Update failed.
Mar 13 01:16:11.976775 update_engine[1574]: I20260313 01:16:11.976063 1574 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Mar 13 01:16:11.976775 update_engine[1574]: I20260313 01:16:11.976074 1574 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Mar 13 01:16:11.976775 update_engine[1574]: I20260313 01:16:11.976084 1574 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Mar 13 01:16:11.976775 update_engine[1574]: I20260313 01:16:11.976186 1574 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 13 01:16:11.976775 update_engine[1574]: I20260313 01:16:11.976223 1574 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 13 01:16:11.976775 update_engine[1574]: I20260313 01:16:11.976250 1574 omaha_request_action.cc:272] Request:
Mar 13 01:16:11.976775 update_engine[1574]:
Mar 13 01:16:11.976775 update_engine[1574]:
Mar 13 01:16:11.976775 update_engine[1574]:
Mar 13 01:16:11.976775 update_engine[1574]:
Mar 13 01:16:11.976775 update_engine[1574]:
Mar 13 01:16:11.976775 update_engine[1574]:
Mar 13 01:16:11.976775 update_engine[1574]: I20260313 01:16:11.976281 1574 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 13 01:16:11.976775 update_engine[1574]: I20260313 01:16:11.976316 1574 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 13 01:16:11.977597 update_engine[1574]: I20260313 01:16:11.976691 1574 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 13 01:16:11.978326 update_engine[1574]: E20260313 01:16:11.978055 1574 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 13 01:16:11.978326 update_engine[1574]: I20260313 01:16:11.978137 1574 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 13 01:16:11.978326 update_engine[1574]: I20260313 01:16:11.978156 1574 omaha_request_action.cc:617] Omaha request response:
Mar 13 01:16:11.978326 update_engine[1574]: I20260313 01:16:11.978168 1574 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 13 01:16:11.978326 update_engine[1574]: I20260313 01:16:11.978178 1574 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 13 01:16:11.978326 update_engine[1574]: I20260313 01:16:11.978188 1574 update_attempter.cc:306] Processing Done.
Mar 13 01:16:11.978326 update_engine[1574]: I20260313 01:16:11.978197 1574 update_attempter.cc:310] Error event sent.
Mar 13 01:16:11.978326 update_engine[1574]: I20260313 01:16:11.978210 1574 update_check_scheduler.cc:74] Next update check in 42m59s
Mar 13 01:16:11.980353 locksmithd[1610]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 13 01:16:11.980353 locksmithd[1610]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 13 01:16:12.196082 sshd[4394]: Accepted publickey for core from 20.161.92.111 port 45546 ssh2: RSA SHA256:hm429P+TX+Ex43UNU1B3y8+MqFp75xtA6UPwehKFPZY
Mar 13 01:16:12.198017 sshd-session[4394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 01:16:12.205082 systemd-logind[1572]: New session 24 of user core.
Mar 13 01:16:12.213498 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 13 01:16:12.555352 sshd[4397]: Connection closed by 20.161.92.111 port 45546
Mar 13 01:16:12.555850 sshd-session[4394]: pam_unix(sshd:session): session closed for user core
Mar 13 01:16:12.561777 systemd[1]: sshd@21-10.230.35.114:22-20.161.92.111:45546.service: Deactivated successfully.
Mar 13 01:16:12.565883 systemd[1]: session-24.scope: Deactivated successfully.
Mar 13 01:16:12.567812 systemd-logind[1572]: Session 24 logged out. Waiting for processes to exit.
Mar 13 01:16:12.570023 systemd-logind[1572]: Removed session 24.
Mar 13 01:16:17.667941 systemd[1]: Started sshd@22-10.230.35.114:22-20.161.92.111:45562.service - OpenSSH per-connection server daemon (20.161.92.111:45562).
Mar 13 01:16:18.218434 sshd[4411]: Accepted publickey for core from 20.161.92.111 port 45562 ssh2: RSA SHA256:hm429P+TX+Ex43UNU1B3y8+MqFp75xtA6UPwehKFPZY
Mar 13 01:16:18.219799 sshd-session[4411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 01:16:18.227092 systemd-logind[1572]: New session 25 of user core.
Mar 13 01:16:18.231599 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 13 01:16:18.629901 sshd[4415]: Connection closed by 20.161.92.111 port 45562
Mar 13 01:16:18.630812 sshd-session[4411]: pam_unix(sshd:session): session closed for user core
Mar 13 01:16:18.635888 systemd-logind[1572]: Session 25 logged out. Waiting for processes to exit.
Mar 13 01:16:18.636265 systemd[1]: sshd@22-10.230.35.114:22-20.161.92.111:45562.service: Deactivated successfully.
Mar 13 01:16:18.639478 systemd[1]: session-25.scope: Deactivated successfully.
Mar 13 01:16:18.643847 systemd-logind[1572]: Removed session 25.
Mar 13 01:16:23.740161 systemd[1]: Started sshd@23-10.230.35.114:22-20.161.92.111:57232.service - OpenSSH per-connection server daemon (20.161.92.111:57232).
Mar 13 01:16:24.278919 sshd[4429]: Accepted publickey for core from 20.161.92.111 port 57232 ssh2: RSA SHA256:hm429P+TX+Ex43UNU1B3y8+MqFp75xtA6UPwehKFPZY
Mar 13 01:16:24.280677 sshd-session[4429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 01:16:24.288006 systemd-logind[1572]: New session 26 of user core.
Mar 13 01:16:24.306548 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 13 01:16:24.651508 sshd[4432]: Connection closed by 20.161.92.111 port 57232
Mar 13 01:16:24.650635 sshd-session[4429]: pam_unix(sshd:session): session closed for user core
Mar 13 01:16:24.657782 systemd-logind[1572]: Session 26 logged out. Waiting for processes to exit.
Mar 13 01:16:24.659224 systemd[1]: sshd@23-10.230.35.114:22-20.161.92.111:57232.service: Deactivated successfully.
Mar 13 01:16:24.662258 systemd[1]: session-26.scope: Deactivated successfully.
Mar 13 01:16:24.664701 systemd-logind[1572]: Removed session 26.
Mar 13 01:16:24.758662 systemd[1]: Started sshd@24-10.230.35.114:22-20.161.92.111:57234.service - OpenSSH per-connection server daemon (20.161.92.111:57234).
Mar 13 01:16:25.284315 sshd[4444]: Accepted publickey for core from 20.161.92.111 port 57234 ssh2: RSA SHA256:hm429P+TX+Ex43UNU1B3y8+MqFp75xtA6UPwehKFPZY
Mar 13 01:16:25.285653 sshd-session[4444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 01:16:25.293967 systemd-logind[1572]: New session 27 of user core.
Mar 13 01:16:25.299517 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 13 01:16:28.249686 containerd[1592]: time="2026-03-13T01:16:28.249624328Z" level=info msg="StopContainer for \"a371bb7f4a8e669ecc047f7546e69d724aa668cb16010d33f8f839398dc7728e\" with timeout 30 (s)"
Mar 13 01:16:28.265518 containerd[1592]: time="2026-03-13T01:16:28.265473747Z" level=info msg="Stop container \"a371bb7f4a8e669ecc047f7546e69d724aa668cb16010d33f8f839398dc7728e\" with signal terminated"
Mar 13 01:16:28.289614 systemd[1]: cri-containerd-a371bb7f4a8e669ecc047f7546e69d724aa668cb16010d33f8f839398dc7728e.scope: Deactivated successfully.
Mar 13 01:16:28.294120 containerd[1592]: time="2026-03-13T01:16:28.293942932Z" level=info msg="received container exit event container_id:\"a371bb7f4a8e669ecc047f7546e69d724aa668cb16010d33f8f839398dc7728e\" id:\"a371bb7f4a8e669ecc047f7546e69d724aa668cb16010d33f8f839398dc7728e\" pid:3564 exited_at:{seconds:1773364588 nanos:292808748}"
Mar 13 01:16:28.312107 containerd[1592]: time="2026-03-13T01:16:28.312037297Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 13 01:16:28.316553 containerd[1592]: time="2026-03-13T01:16:28.316373419Z" level=info msg="StopContainer for \"c1afde5e76f7fd7f743b78c5186189711fc1b6774b2928a73c0299f89a8cb313\" with timeout 2 (s)"
Mar 13 01:16:28.317398 containerd[1592]: time="2026-03-13T01:16:28.317351923Z" level=info msg="Stop container \"c1afde5e76f7fd7f743b78c5186189711fc1b6774b2928a73c0299f89a8cb313\" with signal terminated"
Mar 13 01:16:28.328612 systemd-networkd[1493]: lxc_health: Link DOWN
Mar 13 01:16:28.328625 systemd-networkd[1493]: lxc_health: Lost carrier
Mar 13 01:16:28.361863 systemd[1]: cri-containerd-c1afde5e76f7fd7f743b78c5186189711fc1b6774b2928a73c0299f89a8cb313.scope: Deactivated successfully.
Mar 13 01:16:28.362316 systemd[1]: cri-containerd-c1afde5e76f7fd7f743b78c5186189711fc1b6774b2928a73c0299f89a8cb313.scope: Consumed 10.362s CPU time, 202.1M memory peak, 81.1M read from disk, 13.3M written to disk.
Mar 13 01:16:28.366762 containerd[1592]: time="2026-03-13T01:16:28.366686168Z" level=info msg="received container exit event container_id:\"c1afde5e76f7fd7f743b78c5186189711fc1b6774b2928a73c0299f89a8cb313\" id:\"c1afde5e76f7fd7f743b78c5186189711fc1b6774b2928a73c0299f89a8cb313\" pid:3485 exited_at:{seconds:1773364588 nanos:364464015}"
Mar 13 01:16:28.379052 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a371bb7f4a8e669ecc047f7546e69d724aa668cb16010d33f8f839398dc7728e-rootfs.mount: Deactivated successfully.
Mar 13 01:16:28.386620 containerd[1592]: time="2026-03-13T01:16:28.386515842Z" level=info msg="StopContainer for \"a371bb7f4a8e669ecc047f7546e69d724aa668cb16010d33f8f839398dc7728e\" returns successfully"
Mar 13 01:16:28.388291 containerd[1592]: time="2026-03-13T01:16:28.388247210Z" level=info msg="StopPodSandbox for \"cef92ebf2aacc4bde1369039a5e641ddc83c4aac8c704e957fe38cdee40ced9c\""
Mar 13 01:16:28.394505 containerd[1592]: time="2026-03-13T01:16:28.394304968Z" level=info msg="Container to stop \"a371bb7f4a8e669ecc047f7546e69d724aa668cb16010d33f8f839398dc7728e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 13 01:16:28.409496 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1afde5e76f7fd7f743b78c5186189711fc1b6774b2928a73c0299f89a8cb313-rootfs.mount: Deactivated successfully.
Mar 13 01:16:28.418338 containerd[1592]: time="2026-03-13T01:16:28.418293274Z" level=info msg="StopContainer for \"c1afde5e76f7fd7f743b78c5186189711fc1b6774b2928a73c0299f89a8cb313\" returns successfully"
Mar 13 01:16:28.419318 systemd[1]: cri-containerd-cef92ebf2aacc4bde1369039a5e641ddc83c4aac8c704e957fe38cdee40ced9c.scope: Deactivated successfully.
Mar 13 01:16:28.421755 containerd[1592]: time="2026-03-13T01:16:28.421708044Z" level=info msg="StopPodSandbox for \"4bbe5ea986a76e1f7415e391aed80f80d29e8cc41ccdb56b3e252343eb5e7e26\""
Mar 13 01:16:28.421852 containerd[1592]: time="2026-03-13T01:16:28.421813695Z" level=info msg="Container to stop \"aacd8481c886e009dbf3809901ef4cf8ce083da45975c4ec027e0ce7e6869702\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 13 01:16:28.421852 containerd[1592]: time="2026-03-13T01:16:28.421835418Z" level=info msg="Container to stop \"8cb437919cee42afd4d4d4b9832db7af541887cd93be6c01ee4e872783665e81\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 13 01:16:28.421852 containerd[1592]: time="2026-03-13T01:16:28.421849618Z" level=info msg="Container to stop \"b39d3a17b1b91700a6d9e58b3dd96241e986b35bca1773f70615db79c956e771\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 13 01:16:28.421986 containerd[1592]: time="2026-03-13T01:16:28.421862765Z" level=info msg="Container to stop \"d2bd82e3b61c712fc256a2baddab175975d1c03f55d68e3a7f4a3494e010457a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 13 01:16:28.421986 containerd[1592]: time="2026-03-13T01:16:28.421874860Z" level=info msg="Container to stop \"c1afde5e76f7fd7f743b78c5186189711fc1b6774b2928a73c0299f89a8cb313\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 13 01:16:28.425217 containerd[1592]: time="2026-03-13T01:16:28.425130401Z" level=info msg="received sandbox exit event container_id:\"cef92ebf2aacc4bde1369039a5e641ddc83c4aac8c704e957fe38cdee40ced9c\" id:\"cef92ebf2aacc4bde1369039a5e641ddc83c4aac8c704e957fe38cdee40ced9c\" exit_status:137 exited_at:{seconds:1773364588 nanos:424576641}" monitor_name=podsandbox
Mar 13 01:16:28.434526 systemd[1]: cri-containerd-4bbe5ea986a76e1f7415e391aed80f80d29e8cc41ccdb56b3e252343eb5e7e26.scope: Deactivated successfully.
Mar 13 01:16:28.441129 containerd[1592]: time="2026-03-13T01:16:28.441068669Z" level=info msg="received sandbox exit event container_id:\"4bbe5ea986a76e1f7415e391aed80f80d29e8cc41ccdb56b3e252343eb5e7e26\" id:\"4bbe5ea986a76e1f7415e391aed80f80d29e8cc41ccdb56b3e252343eb5e7e26\" exit_status:137 exited_at:{seconds:1773364588 nanos:440564244}" monitor_name=podsandbox
Mar 13 01:16:28.469938 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cef92ebf2aacc4bde1369039a5e641ddc83c4aac8c704e957fe38cdee40ced9c-rootfs.mount: Deactivated successfully.
Mar 13 01:16:28.475852 containerd[1592]: time="2026-03-13T01:16:28.475694283Z" level=info msg="shim disconnected" id=cef92ebf2aacc4bde1369039a5e641ddc83c4aac8c704e957fe38cdee40ced9c namespace=k8s.io
Mar 13 01:16:28.476152 containerd[1592]: time="2026-03-13T01:16:28.476103814Z" level=warning msg="cleaning up after shim disconnected" id=cef92ebf2aacc4bde1369039a5e641ddc83c4aac8c704e957fe38cdee40ced9c namespace=k8s.io
Mar 13 01:16:28.484955 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4bbe5ea986a76e1f7415e391aed80f80d29e8cc41ccdb56b3e252343eb5e7e26-rootfs.mount: Deactivated successfully.
Mar 13 01:16:28.488473 containerd[1592]: time="2026-03-13T01:16:28.476247546Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 13 01:16:28.489701 containerd[1592]: time="2026-03-13T01:16:28.489665026Z" level=info msg="shim disconnected" id=4bbe5ea986a76e1f7415e391aed80f80d29e8cc41ccdb56b3e252343eb5e7e26 namespace=k8s.io
Mar 13 01:16:28.489787 containerd[1592]: time="2026-03-13T01:16:28.489700737Z" level=warning msg="cleaning up after shim disconnected" id=4bbe5ea986a76e1f7415e391aed80f80d29e8cc41ccdb56b3e252343eb5e7e26 namespace=k8s.io
Mar 13 01:16:28.489787 containerd[1592]: time="2026-03-13T01:16:28.489714589Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 13 01:16:28.538749 containerd[1592]: time="2026-03-13T01:16:28.537774557Z" level=info msg="TearDown network for sandbox \"cef92ebf2aacc4bde1369039a5e641ddc83c4aac8c704e957fe38cdee40ced9c\" successfully"
Mar 13 01:16:28.538749 containerd[1592]: time="2026-03-13T01:16:28.537834620Z" level=info msg="StopPodSandbox for \"cef92ebf2aacc4bde1369039a5e641ddc83c4aac8c704e957fe38cdee40ced9c\" returns successfully"
Mar 13 01:16:28.539121 containerd[1592]: time="2026-03-13T01:16:28.539080884Z" level=info msg="received sandbox container exit event sandbox_id:\"4bbe5ea986a76e1f7415e391aed80f80d29e8cc41ccdb56b3e252343eb5e7e26\" exit_status:137 exited_at:{seconds:1773364588 nanos:440564244}" monitor_name=criService
Mar 13 01:16:28.540366 containerd[1592]: time="2026-03-13T01:16:28.540326854Z" level=info msg="received sandbox container exit event sandbox_id:\"cef92ebf2aacc4bde1369039a5e641ddc83c4aac8c704e957fe38cdee40ced9c\" exit_status:137 exited_at:{seconds:1773364588 nanos:424576641}" monitor_name=criService
Mar 13 01:16:28.540761 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4bbe5ea986a76e1f7415e391aed80f80d29e8cc41ccdb56b3e252343eb5e7e26-shm.mount: Deactivated successfully.
Mar 13 01:16:28.541450 containerd[1592]: time="2026-03-13T01:16:28.539120769Z" level=info msg="TearDown network for sandbox \"4bbe5ea986a76e1f7415e391aed80f80d29e8cc41ccdb56b3e252343eb5e7e26\" successfully"
Mar 13 01:16:28.541567 containerd[1592]: time="2026-03-13T01:16:28.541541495Z" level=info msg="StopPodSandbox for \"4bbe5ea986a76e1f7415e391aed80f80d29e8cc41ccdb56b3e252343eb5e7e26\" returns successfully"
Mar 13 01:16:28.702319 kubelet[2884]: I0313 01:16:28.702201 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-cilium-run\") pod \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\" (UID: \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\") "
Mar 13 01:16:28.703634 kubelet[2884]: I0313 01:16:28.702369 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q85rr\" (UniqueName: \"kubernetes.io/projected/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-kube-api-access-q85rr\") pod \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\" (UID: \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\") "
Mar 13 01:16:28.703634 kubelet[2884]: I0313 01:16:28.702418 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-cni-path\") pod \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\" (UID: \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\") "
Mar 13 01:16:28.703634 kubelet[2884]: I0313 01:16:28.702443 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-hostproc\") pod \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\" (UID: \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\") "
Mar 13 01:16:28.703634 kubelet[2884]: I0313 01:16:28.702611 2884 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b5c3832e-bd72-469f-b5dc-0001a17cf6b0" (UID: "b5c3832e-bd72-469f-b5dc-0001a17cf6b0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 13 01:16:28.703634 kubelet[2884]: I0313 01:16:28.703438 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-bpf-maps\") pod \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\" (UID: \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\") "
Mar 13 01:16:28.703634 kubelet[2884]: I0313 01:16:28.703484 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-cilium-config-path\") pod \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\" (UID: \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\") "
Mar 13 01:16:28.703903 kubelet[2884]: I0313 01:16:28.703510 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-host-proc-sys-kernel\") pod \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\" (UID: \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\") "
Mar 13 01:16:28.703903 kubelet[2884]: I0313 01:16:28.703537 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-lib-modules\") pod \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\" (UID: \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\") "
Mar 13 01:16:28.703903 kubelet[2884]: I0313 01:16:28.703566 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6912676-8faa-4d91-ba8a-9fd858e089d2-cilium-config-path\") pod \"f6912676-8faa-4d91-ba8a-9fd858e089d2\" (UID: \"f6912676-8faa-4d91-ba8a-9fd858e089d2\") "
Mar 13 01:16:28.703903 kubelet[2884]: I0313 01:16:28.703592 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-etc-cni-netd\") pod \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\" (UID: \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\") "
Mar 13 01:16:28.703903 kubelet[2884]: I0313 01:16:28.703619 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4l2m\" (UniqueName: \"kubernetes.io/projected/f6912676-8faa-4d91-ba8a-9fd858e089d2-kube-api-access-w4l2m\") pod \"f6912676-8faa-4d91-ba8a-9fd858e089d2\" (UID: \"f6912676-8faa-4d91-ba8a-9fd858e089d2\") "
Mar 13 01:16:28.703903 kubelet[2884]: I0313 01:16:28.703660 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-hubble-tls\") pod \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\" (UID: \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\") "
Mar 13 01:16:28.704165 kubelet[2884]: I0313 01:16:28.703684 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-cilium-cgroup\") pod \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\" (UID: \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\") "
Mar 13 01:16:28.704165 kubelet[2884]: I0313 01:16:28.703709 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-xtables-lock\") pod \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\" (UID: \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\") "
Mar 13 01:16:28.704165 kubelet[2884]: I0313 01:16:28.703737 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-clustermesh-secrets\") pod \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\" (UID: \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\") "
Mar 13 01:16:28.704165 kubelet[2884]: I0313 01:16:28.703763 2884 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-host-proc-sys-net\") pod \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\" (UID: \"b5c3832e-bd72-469f-b5dc-0001a17cf6b0\") "
Mar 13 01:16:28.704165 kubelet[2884]: I0313 01:16:28.703825 2884 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b5c3832e-bd72-469f-b5dc-0001a17cf6b0" (UID: "b5c3832e-bd72-469f-b5dc-0001a17cf6b0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 13 01:16:28.707052 kubelet[2884]: I0313 01:16:28.703861 2884 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-cni-path" (OuterVolumeSpecName: "cni-path") pod "b5c3832e-bd72-469f-b5dc-0001a17cf6b0" (UID: "b5c3832e-bd72-469f-b5dc-0001a17cf6b0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 13 01:16:28.707052 kubelet[2884]: I0313 01:16:28.703888 2884 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-hostproc" (OuterVolumeSpecName: "hostproc") pod "b5c3832e-bd72-469f-b5dc-0001a17cf6b0" (UID: "b5c3832e-bd72-469f-b5dc-0001a17cf6b0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 13 01:16:28.707052 kubelet[2884]: I0313 01:16:28.703913 2884 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b5c3832e-bd72-469f-b5dc-0001a17cf6b0" (UID: "b5c3832e-bd72-469f-b5dc-0001a17cf6b0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 13 01:16:28.709082 kubelet[2884]: I0313 01:16:28.708695 2884 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b5c3832e-bd72-469f-b5dc-0001a17cf6b0" (UID: "b5c3832e-bd72-469f-b5dc-0001a17cf6b0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 13 01:16:28.709082 kubelet[2884]: I0313 01:16:28.708761 2884 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b5c3832e-bd72-469f-b5dc-0001a17cf6b0" (UID: "b5c3832e-bd72-469f-b5dc-0001a17cf6b0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 13 01:16:28.709082 kubelet[2884]: I0313 01:16:28.708802 2884 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b5c3832e-bd72-469f-b5dc-0001a17cf6b0" (UID: "b5c3832e-bd72-469f-b5dc-0001a17cf6b0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 13 01:16:28.712909 kubelet[2884]: I0313 01:16:28.712841 2884 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6912676-8faa-4d91-ba8a-9fd858e089d2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f6912676-8faa-4d91-ba8a-9fd858e089d2" (UID: "f6912676-8faa-4d91-ba8a-9fd858e089d2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 13 01:16:28.712909 kubelet[2884]: I0313 01:16:28.712901 2884 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b5c3832e-bd72-469f-b5dc-0001a17cf6b0" (UID: "b5c3832e-bd72-469f-b5dc-0001a17cf6b0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 13 01:16:28.716840 kubelet[2884]: I0313 01:16:28.716455 2884 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-kube-api-access-q85rr" (OuterVolumeSpecName: "kube-api-access-q85rr") pod "b5c3832e-bd72-469f-b5dc-0001a17cf6b0" (UID: "b5c3832e-bd72-469f-b5dc-0001a17cf6b0"). InnerVolumeSpecName "kube-api-access-q85rr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 13 01:16:28.716840 kubelet[2884]: I0313 01:16:28.716505 2884 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6912676-8faa-4d91-ba8a-9fd858e089d2-kube-api-access-w4l2m" (OuterVolumeSpecName: "kube-api-access-w4l2m") pod "f6912676-8faa-4d91-ba8a-9fd858e089d2" (UID: "f6912676-8faa-4d91-ba8a-9fd858e089d2"). InnerVolumeSpecName "kube-api-access-w4l2m". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 13 01:16:28.716840 kubelet[2884]: I0313 01:16:28.716523 2884 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b5c3832e-bd72-469f-b5dc-0001a17cf6b0" (UID: "b5c3832e-bd72-469f-b5dc-0001a17cf6b0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 13 01:16:28.716840 kubelet[2884]: I0313 01:16:28.716551 2884 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b5c3832e-bd72-469f-b5dc-0001a17cf6b0" (UID: "b5c3832e-bd72-469f-b5dc-0001a17cf6b0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 13 01:16:28.719725 kubelet[2884]: I0313 01:16:28.719653 2884 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b5c3832e-bd72-469f-b5dc-0001a17cf6b0" (UID: "b5c3832e-bd72-469f-b5dc-0001a17cf6b0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 13 01:16:28.720351 kubelet[2884]: I0313 01:16:28.720316 2884 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b5c3832e-bd72-469f-b5dc-0001a17cf6b0" (UID: "b5c3832e-bd72-469f-b5dc-0001a17cf6b0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 13 01:16:28.805354 kubelet[2884]: I0313 01:16:28.804812 2884 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-hostproc\") on node \"srv-1hh7x.gb1.brightbox.com\" DevicePath \"\""
Mar 13 01:16:28.805354 kubelet[2884]: I0313 01:16:28.804889 2884 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-bpf-maps\") on node \"srv-1hh7x.gb1.brightbox.com\" DevicePath \"\""
Mar 13 01:16:28.805354 kubelet[2884]: I0313 01:16:28.804912 2884 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-cilium-config-path\") on node \"srv-1hh7x.gb1.brightbox.com\" DevicePath \"\""
Mar 13 01:16:28.805354 kubelet[2884]: I0313 01:16:28.804929 2884 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-host-proc-sys-kernel\") on node \"srv-1hh7x.gb1.brightbox.com\" DevicePath \"\""
Mar 13 01:16:28.805354 kubelet[2884]: I0313 01:16:28.804977 2884 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-lib-modules\") on node \"srv-1hh7x.gb1.brightbox.com\" DevicePath \"\""
Mar 13 01:16:28.805354 kubelet[2884]: I0313 01:16:28.805004 2884 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6912676-8faa-4d91-ba8a-9fd858e089d2-cilium-config-path\") on node \"srv-1hh7x.gb1.brightbox.com\" DevicePath \"\""
Mar 13 01:16:28.805354 kubelet[2884]: I0313 01:16:28.805019 2884 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-etc-cni-netd\") on node \"srv-1hh7x.gb1.brightbox.com\" DevicePath \"\""
Mar 13 01:16:28.805354 kubelet[2884]: I0313 01:16:28.805037 2884 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w4l2m\" (UniqueName: \"kubernetes.io/projected/f6912676-8faa-4d91-ba8a-9fd858e089d2-kube-api-access-w4l2m\") on node \"srv-1hh7x.gb1.brightbox.com\" DevicePath \"\""
Mar 13 01:16:28.805793 kubelet[2884]: I0313 01:16:28.805052 2884 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-hubble-tls\") on node \"srv-1hh7x.gb1.brightbox.com\" DevicePath \"\""
Mar 13 01:16:28.805793 kubelet[2884]: I0313 01:16:28.805075 2884 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-cilium-cgroup\") on node \"srv-1hh7x.gb1.brightbox.com\" DevicePath \"\""
Mar 13 01:16:28.805793 kubelet[2884]: I0313 01:16:28.805088 2884 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-xtables-lock\") on node \"srv-1hh7x.gb1.brightbox.com\" DevicePath \"\""
Mar 13 01:16:28.805793 kubelet[2884]: I0313 01:16:28.805102 2884 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-clustermesh-secrets\") on node \"srv-1hh7x.gb1.brightbox.com\" DevicePath \"\""
Mar 13 01:16:28.805793 kubelet[2884]: I0313 01:16:28.805117 2884 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-host-proc-sys-net\") on node \"srv-1hh7x.gb1.brightbox.com\" DevicePath \"\""
Mar 13 01:16:28.805793 kubelet[2884]: I0313 01:16:28.805140 2884 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-cilium-run\") on node \"srv-1hh7x.gb1.brightbox.com\" DevicePath \"\""
Mar 13 01:16:28.805793 kubelet[2884]: I0313 01:16:28.805155 2884 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q85rr\" (UniqueName: \"kubernetes.io/projected/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-kube-api-access-q85rr\") on node \"srv-1hh7x.gb1.brightbox.com\" DevicePath \"\""
Mar 13 01:16:28.805793 kubelet[2884]: I0313 01:16:28.805172 2884 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b5c3832e-bd72-469f-b5dc-0001a17cf6b0-cni-path\") on node \"srv-1hh7x.gb1.brightbox.com\" DevicePath \"\""
Mar 13 01:16:29.097861 kubelet[2884]: I0313 01:16:29.097581 2884 scope.go:117] "RemoveContainer" containerID="c1afde5e76f7fd7f743b78c5186189711fc1b6774b2928a73c0299f89a8cb313"
Mar 13 01:16:29.103307 containerd[1592]: time="2026-03-13T01:16:29.102106276Z" level=info msg="RemoveContainer for \"c1afde5e76f7fd7f743b78c5186189711fc1b6774b2928a73c0299f89a8cb313\""
Mar 13 01:16:29.109123 systemd[1]: Removed slice kubepods-burstable-podb5c3832e_bd72_469f_b5dc_0001a17cf6b0.slice - libcontainer container kubepods-burstable-podb5c3832e_bd72_469f_b5dc_0001a17cf6b0.slice.
Mar 13 01:16:29.109335 systemd[1]: kubepods-burstable-podb5c3832e_bd72_469f_b5dc_0001a17cf6b0.slice: Consumed 10.520s CPU time, 202.5M memory peak, 81.1M read from disk, 13.3M written to disk.
Mar 13 01:16:29.120641 systemd[1]: Removed slice kubepods-besteffort-podf6912676_8faa_4d91_ba8a_9fd858e089d2.slice - libcontainer container kubepods-besteffort-podf6912676_8faa_4d91_ba8a_9fd858e089d2.slice.
Mar 13 01:16:29.126015 containerd[1592]: time="2026-03-13T01:16:29.125944902Z" level=info msg="RemoveContainer for \"c1afde5e76f7fd7f743b78c5186189711fc1b6774b2928a73c0299f89a8cb313\" returns successfully"
Mar 13 01:16:29.126673 kubelet[2884]: I0313 01:16:29.126642 2884 scope.go:117] "RemoveContainer" containerID="d2bd82e3b61c712fc256a2baddab175975d1c03f55d68e3a7f4a3494e010457a"
Mar 13 01:16:29.129004 containerd[1592]: time="2026-03-13T01:16:29.128923452Z" level=info msg="RemoveContainer for \"d2bd82e3b61c712fc256a2baddab175975d1c03f55d68e3a7f4a3494e010457a\""
Mar 13 01:16:29.137223 containerd[1592]: time="2026-03-13T01:16:29.137159548Z" level=info msg="RemoveContainer for \"d2bd82e3b61c712fc256a2baddab175975d1c03f55d68e3a7f4a3494e010457a\" returns successfully"
Mar 13 01:16:29.138733 kubelet[2884]: I0313 01:16:29.138671 2884 scope.go:117] "RemoveContainer" containerID="b39d3a17b1b91700a6d9e58b3dd96241e986b35bca1773f70615db79c956e771"
Mar 13 01:16:29.145428 containerd[1592]: time="2026-03-13T01:16:29.145309116Z" level=info msg="RemoveContainer for \"b39d3a17b1b91700a6d9e58b3dd96241e986b35bca1773f70615db79c956e771\""
Mar 13 01:16:29.153365 containerd[1592]: time="2026-03-13T01:16:29.153253661Z" level=info msg="RemoveContainer for \"b39d3a17b1b91700a6d9e58b3dd96241e986b35bca1773f70615db79c956e771\" returns successfully"
Mar 13 01:16:29.154503 kubelet[2884]: I0313 01:16:29.154397 2884 scope.go:117] "RemoveContainer" containerID="8cb437919cee42afd4d4d4b9832db7af541887cd93be6c01ee4e872783665e81"
Mar 13 01:16:29.156578 containerd[1592]: time="2026-03-13T01:16:29.156437561Z" level=info msg="RemoveContainer for \"8cb437919cee42afd4d4d4b9832db7af541887cd93be6c01ee4e872783665e81\""
Mar 13 01:16:29.160781 containerd[1592]: time="2026-03-13T01:16:29.160740717Z" level=info msg="RemoveContainer for \"8cb437919cee42afd4d4d4b9832db7af541887cd93be6c01ee4e872783665e81\" returns successfully"
Mar 13 01:16:29.161108 kubelet[2884]: I0313 01:16:29.161074 2884 scope.go:117] "RemoveContainer" containerID="aacd8481c886e009dbf3809901ef4cf8ce083da45975c4ec027e0ce7e6869702"
Mar 13 01:16:29.164145 containerd[1592]: time="2026-03-13T01:16:29.164111844Z" level=info msg="RemoveContainer for \"aacd8481c886e009dbf3809901ef4cf8ce083da45975c4ec027e0ce7e6869702\""
Mar 13 01:16:29.168656 containerd[1592]: time="2026-03-13T01:16:29.168619937Z" level=info msg="RemoveContainer for \"aacd8481c886e009dbf3809901ef4cf8ce083da45975c4ec027e0ce7e6869702\" returns successfully"
Mar 13 01:16:29.168980 kubelet[2884]: I0313 01:16:29.168859 2884 scope.go:117] "RemoveContainer" containerID="c1afde5e76f7fd7f743b78c5186189711fc1b6774b2928a73c0299f89a8cb313"
Mar 13 01:16:29.169351 containerd[1592]: time="2026-03-13T01:16:29.169206793Z" level=error msg="ContainerStatus for \"c1afde5e76f7fd7f743b78c5186189711fc1b6774b2928a73c0299f89a8cb313\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c1afde5e76f7fd7f743b78c5186189711fc1b6774b2928a73c0299f89a8cb313\": not found"
Mar 13 01:16:29.169941 kubelet[2884]: E0313 01:16:29.169837 2884 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c1afde5e76f7fd7f743b78c5186189711fc1b6774b2928a73c0299f89a8cb313\": not found" containerID="c1afde5e76f7fd7f743b78c5186189711fc1b6774b2928a73c0299f89a8cb313"
Mar 13 01:16:29.170058 kubelet[2884]: I0313 01:16:29.169906 2884 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c1afde5e76f7fd7f743b78c5186189711fc1b6774b2928a73c0299f89a8cb313"} err="failed to get container status \"c1afde5e76f7fd7f743b78c5186189711fc1b6774b2928a73c0299f89a8cb313\": rpc error: code = NotFound desc = an error occurred when try to find container \"c1afde5e76f7fd7f743b78c5186189711fc1b6774b2928a73c0299f89a8cb313\": not found"
Mar 13 01:16:29.170058 kubelet[2884]: I0313 01:16:29.169990 2884 scope.go:117] "RemoveContainer" containerID="d2bd82e3b61c712fc256a2baddab175975d1c03f55d68e3a7f4a3494e010457a"
Mar 13 01:16:29.170259 containerd[1592]: time="2026-03-13T01:16:29.170229708Z" level=error msg="ContainerStatus for \"d2bd82e3b61c712fc256a2baddab175975d1c03f55d68e3a7f4a3494e010457a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d2bd82e3b61c712fc256a2baddab175975d1c03f55d68e3a7f4a3494e010457a\": not found"
Mar 13 01:16:29.179560 kubelet[2884]: E0313 01:16:29.179390 2884 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d2bd82e3b61c712fc256a2baddab175975d1c03f55d68e3a7f4a3494e010457a\": not found" containerID="d2bd82e3b61c712fc256a2baddab175975d1c03f55d68e3a7f4a3494e010457a"
Mar 13 01:16:29.179560 kubelet[2884]: I0313 01:16:29.179430 2884 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d2bd82e3b61c712fc256a2baddab175975d1c03f55d68e3a7f4a3494e010457a"} err="failed to get container status \"d2bd82e3b61c712fc256a2baddab175975d1c03f55d68e3a7f4a3494e010457a\": rpc error: code = NotFound desc = an error occurred when try to find container \"d2bd82e3b61c712fc256a2baddab175975d1c03f55d68e3a7f4a3494e010457a\": not found"
Mar 13 01:16:29.179560 kubelet[2884]: I0313 01:16:29.179455 2884 scope.go:117] "RemoveContainer" containerID="b39d3a17b1b91700a6d9e58b3dd96241e986b35bca1773f70615db79c956e771"
Mar 13 01:16:29.179764 containerd[1592]: time="2026-03-13T01:16:29.179652502Z" level=error msg="ContainerStatus for \"b39d3a17b1b91700a6d9e58b3dd96241e986b35bca1773f70615db79c956e771\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b39d3a17b1b91700a6d9e58b3dd96241e986b35bca1773f70615db79c956e771\": not found"
Mar 13 01:16:29.180101 kubelet[2884]: E0313 01:16:29.179905 2884 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b39d3a17b1b91700a6d9e58b3dd96241e986b35bca1773f70615db79c956e771\": not found" containerID="b39d3a17b1b91700a6d9e58b3dd96241e986b35bca1773f70615db79c956e771"
Mar 13 01:16:29.180101 kubelet[2884]: I0313 01:16:29.179976 2884 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b39d3a17b1b91700a6d9e58b3dd96241e986b35bca1773f70615db79c956e771"} err="failed to get container status \"b39d3a17b1b91700a6d9e58b3dd96241e986b35bca1773f70615db79c956e771\": rpc error: code = NotFound desc = an error occurred when try to find container \"b39d3a17b1b91700a6d9e58b3dd96241e986b35bca1773f70615db79c956e771\": not found"
Mar 13 01:16:29.180101 kubelet[2884]: I0313 01:16:29.180015 2884 scope.go:117] "RemoveContainer" containerID="8cb437919cee42afd4d4d4b9832db7af541887cd93be6c01ee4e872783665e81"
Mar 13 01:16:29.180621 containerd[1592]: time="2026-03-13T01:16:29.180566662Z" level=error msg="ContainerStatus for \"8cb437919cee42afd4d4d4b9832db7af541887cd93be6c01ee4e872783665e81\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8cb437919cee42afd4d4d4b9832db7af541887cd93be6c01ee4e872783665e81\": not found"
Mar 13 01:16:29.181084 kubelet[2884]: E0313 01:16:29.181040 2884 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8cb437919cee42afd4d4d4b9832db7af541887cd93be6c01ee4e872783665e81\": not found" containerID="8cb437919cee42afd4d4d4b9832db7af541887cd93be6c01ee4e872783665e81"
Mar 13 01:16:29.181226 kubelet[2884]: I0313 01:16:29.181080 2884 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8cb437919cee42afd4d4d4b9832db7af541887cd93be6c01ee4e872783665e81"} err="failed to get container status \"8cb437919cee42afd4d4d4b9832db7af541887cd93be6c01ee4e872783665e81\": rpc error: code = NotFound desc = an error occurred when try to find container \"8cb437919cee42afd4d4d4b9832db7af541887cd93be6c01ee4e872783665e81\": not found"
Mar 13 01:16:29.181226 kubelet[2884]: I0313 01:16:29.181219 2884 scope.go:117] "RemoveContainer" containerID="aacd8481c886e009dbf3809901ef4cf8ce083da45975c4ec027e0ce7e6869702"
Mar 13 01:16:29.181663 containerd[1592]: time="2026-03-13T01:16:29.181625626Z" level=error msg="ContainerStatus for \"aacd8481c886e009dbf3809901ef4cf8ce083da45975c4ec027e0ce7e6869702\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aacd8481c886e009dbf3809901ef4cf8ce083da45975c4ec027e0ce7e6869702\": not found"
Mar 13 01:16:29.182049 kubelet[2884]: E0313 01:16:29.181997 2884 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aacd8481c886e009dbf3809901ef4cf8ce083da45975c4ec027e0ce7e6869702\": not found" containerID="aacd8481c886e009dbf3809901ef4cf8ce083da45975c4ec027e0ce7e6869702"
Mar 13 01:16:29.182187 kubelet[2884]: I0313 01:16:29.182157 2884 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aacd8481c886e009dbf3809901ef4cf8ce083da45975c4ec027e0ce7e6869702"} err="failed to get container status \"aacd8481c886e009dbf3809901ef4cf8ce083da45975c4ec027e0ce7e6869702\": rpc error: code = NotFound desc = an error occurred when try to find container \"aacd8481c886e009dbf3809901ef4cf8ce083da45975c4ec027e0ce7e6869702\": not found"
Mar 13 01:16:29.182382 kubelet[2884]: I0313 01:16:29.182307 2884 scope.go:117] "RemoveContainer" containerID="a371bb7f4a8e669ecc047f7546e69d724aa668cb16010d33f8f839398dc7728e"
Mar 13 01:16:29.186558 containerd[1592]: time="2026-03-13T01:16:29.186515289Z" level=info msg="RemoveContainer for \"a371bb7f4a8e669ecc047f7546e69d724aa668cb16010d33f8f839398dc7728e\""
Mar 13 01:16:29.191027 containerd[1592]: time="2026-03-13T01:16:29.190969519Z" level=info msg="RemoveContainer for \"a371bb7f4a8e669ecc047f7546e69d724aa668cb16010d33f8f839398dc7728e\" returns successfully"
Mar 13 01:16:29.191426 kubelet[2884]: I0313 01:16:29.191340 2884 scope.go:117] "RemoveContainer" containerID="a371bb7f4a8e669ecc047f7546e69d724aa668cb16010d33f8f839398dc7728e"
Mar 13 01:16:29.191930 containerd[1592]: time="2026-03-13T01:16:29.191812472Z" level=error msg="ContainerStatus for \"a371bb7f4a8e669ecc047f7546e69d724aa668cb16010d33f8f839398dc7728e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a371bb7f4a8e669ecc047f7546e69d724aa668cb16010d33f8f839398dc7728e\": not found"
Mar 13 01:16:29.192152 kubelet[2884]: E0313 01:16:29.192095 2884 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a371bb7f4a8e669ecc047f7546e69d724aa668cb16010d33f8f839398dc7728e\": not found" containerID="a371bb7f4a8e669ecc047f7546e69d724aa668cb16010d33f8f839398dc7728e"
Mar 13 01:16:29.192152 kubelet[2884]: I0313 01:16:29.192145 2884 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a371bb7f4a8e669ecc047f7546e69d724aa668cb16010d33f8f839398dc7728e"} err="failed to get container status \"a371bb7f4a8e669ecc047f7546e69d724aa668cb16010d33f8f839398dc7728e\": rpc error: code = NotFound desc = an error occurred when try to find container \"a371bb7f4a8e669ecc047f7546e69d724aa668cb16010d33f8f839398dc7728e\": not found"
Mar 13 01:16:29.375915 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cef92ebf2aacc4bde1369039a5e641ddc83c4aac8c704e957fe38cdee40ced9c-shm.mount: Deactivated successfully.
Mar 13 01:16:29.376105 systemd[1]: var-lib-kubelet-pods-f6912676\x2d8faa\x2d4d91\x2dba8a\x2d9fd858e089d2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw4l2m.mount: Deactivated successfully.
Mar 13 01:16:29.376211 systemd[1]: var-lib-kubelet-pods-b5c3832e\x2dbd72\x2d469f\x2db5dc\x2d0001a17cf6b0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq85rr.mount: Deactivated successfully.
Mar 13 01:16:29.377921 systemd[1]: var-lib-kubelet-pods-b5c3832e\x2dbd72\x2d469f\x2db5dc\x2d0001a17cf6b0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 13 01:16:29.378041 systemd[1]: var-lib-kubelet-pods-b5c3832e\x2dbd72\x2d469f\x2db5dc\x2d0001a17cf6b0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 13 01:16:30.240099 sshd[4447]: Connection closed by 20.161.92.111 port 57234
Mar 13 01:16:30.242823 sshd-session[4444]: pam_unix(sshd:session): session closed for user core
Mar 13 01:16:30.259061 systemd[1]: sshd@24-10.230.35.114:22-20.161.92.111:57234.service: Deactivated successfully.
Mar 13 01:16:30.261424 systemd[1]: session-27.scope: Deactivated successfully.
Mar 13 01:16:30.261764 systemd[1]: session-27.scope: Consumed 2.001s CPU time, 29.4M memory peak.
Mar 13 01:16:30.262782 systemd-logind[1572]: Session 27 logged out. Waiting for processes to exit.
Mar 13 01:16:30.265471 systemd-logind[1572]: Removed session 27.
Mar 13 01:16:30.351549 systemd[1]: Started sshd@25-10.230.35.114:22-20.161.92.111:36336.service - OpenSSH per-connection server daemon (20.161.92.111:36336).
Mar 13 01:16:30.599304 kubelet[2884]: I0313 01:16:30.598808 2884 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5c3832e-bd72-469f-b5dc-0001a17cf6b0" path="/var/lib/kubelet/pods/b5c3832e-bd72-469f-b5dc-0001a17cf6b0/volumes"
Mar 13 01:16:30.601074 kubelet[2884]: I0313 01:16:30.600612 2884 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6912676-8faa-4d91-ba8a-9fd858e089d2" path="/var/lib/kubelet/pods/f6912676-8faa-4d91-ba8a-9fd858e089d2/volumes"
Mar 13 01:16:30.877049 sshd[4588]: Accepted publickey for core from 20.161.92.111 port 36336 ssh2: RSA SHA256:hm429P+TX+Ex43UNU1B3y8+MqFp75xtA6UPwehKFPZY
Mar 13 01:16:30.878740 sshd-session[4588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 01:16:30.888322 systemd-logind[1572]: New session 28 of user core.
Mar 13 01:16:30.893529 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 13 01:16:31.761499 kubelet[2884]: E0313 01:16:31.761383 2884 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 13 01:16:31.771964 systemd[1]: Created slice kubepods-burstable-pod71dcf229_bf00_45ff_93e5_4dfc89137dab.slice - libcontainer container kubepods-burstable-pod71dcf229_bf00_45ff_93e5_4dfc89137dab.slice.
Mar 13 01:16:31.810141 sshd[4591]: Connection closed by 20.161.92.111 port 36336
Mar 13 01:16:31.811020 sshd-session[4588]: pam_unix(sshd:session): session closed for user core
Mar 13 01:16:31.820577 systemd[1]: sshd@25-10.230.35.114:22-20.161.92.111:36336.service: Deactivated successfully.
Mar 13 01:16:31.826543 systemd[1]: session-28.scope: Deactivated successfully.
Mar 13 01:16:31.830188 systemd-logind[1572]: Session 28 logged out. Waiting for processes to exit.
Mar 13 01:16:31.833810 systemd-logind[1572]: Removed session 28.
Mar 13 01:16:31.911509 systemd[1]: Started sshd@26-10.230.35.114:22-20.161.92.111:36352.service - OpenSSH per-connection server daemon (20.161.92.111:36352).
Mar 13 01:16:31.933977 kubelet[2884]: I0313 01:16:31.933876 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/71dcf229-bf00-45ff-93e5-4dfc89137dab-cilium-config-path\") pod \"cilium-5fpgc\" (UID: \"71dcf229-bf00-45ff-93e5-4dfc89137dab\") " pod="kube-system/cilium-5fpgc"
Mar 13 01:16:31.933977 kubelet[2884]: I0313 01:16:31.933965 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/71dcf229-bf00-45ff-93e5-4dfc89137dab-hostproc\") pod \"cilium-5fpgc\" (UID: \"71dcf229-bf00-45ff-93e5-4dfc89137dab\") " pod="kube-system/cilium-5fpgc"
Mar 13 01:16:31.934108 kubelet[2884]: I0313 01:16:31.934016 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/71dcf229-bf00-45ff-93e5-4dfc89137dab-cilium-cgroup\") pod \"cilium-5fpgc\" (UID: \"71dcf229-bf00-45ff-93e5-4dfc89137dab\") " pod="kube-system/cilium-5fpgc"
Mar 13 01:16:31.934108 kubelet[2884]: I0313 01:16:31.934048 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/71dcf229-bf00-45ff-93e5-4dfc89137dab-clustermesh-secrets\") pod \"cilium-5fpgc\" (UID: \"71dcf229-bf00-45ff-93e5-4dfc89137dab\") " pod="kube-system/cilium-5fpgc"
Mar 13 01:16:31.934108 kubelet[2884]: I0313 01:16:31.934074 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/71dcf229-bf00-45ff-93e5-4dfc89137dab-etc-cni-netd\") pod \"cilium-5fpgc\" (UID: \"71dcf229-bf00-45ff-93e5-4dfc89137dab\") " pod="kube-system/cilium-5fpgc"
Mar 13 01:16:31.934108 kubelet[2884]: I0313 01:16:31.934100 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/71dcf229-bf00-45ff-93e5-4dfc89137dab-host-proc-sys-kernel\") pod \"cilium-5fpgc\" (UID: \"71dcf229-bf00-45ff-93e5-4dfc89137dab\") " pod="kube-system/cilium-5fpgc"
Mar 13 01:16:31.934375 kubelet[2884]: I0313 01:16:31.934137 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/71dcf229-bf00-45ff-93e5-4dfc89137dab-cni-path\") pod \"cilium-5fpgc\" (UID: \"71dcf229-bf00-45ff-93e5-4dfc89137dab\") " pod="kube-system/cilium-5fpgc"
Mar 13 01:16:31.934375 kubelet[2884]: I0313 01:16:31.934161 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/71dcf229-bf00-45ff-93e5-4dfc89137dab-cilium-ipsec-secrets\") pod \"cilium-5fpgc\" (UID: \"71dcf229-bf00-45ff-93e5-4dfc89137dab\") " pod="kube-system/cilium-5fpgc"
Mar 13 01:16:31.934375 kubelet[2884]: I0313 01:16:31.934190 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/71dcf229-bf00-45ff-93e5-4dfc89137dab-host-proc-sys-net\") pod \"cilium-5fpgc\" (UID: \"71dcf229-bf00-45ff-93e5-4dfc89137dab\") " pod="kube-system/cilium-5fpgc"
Mar 13 01:16:31.934375 kubelet[2884]: I0313 01:16:31.934217 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/71dcf229-bf00-45ff-93e5-4dfc89137dab-hubble-tls\") pod \"cilium-5fpgc\" (UID: \"71dcf229-bf00-45ff-93e5-4dfc89137dab\") " pod="kube-system/cilium-5fpgc"
Mar 13 01:16:31.934375 kubelet[2884]: I0313 01:16:31.934241 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/71dcf229-bf00-45ff-93e5-4dfc89137dab-cilium-run\") pod \"cilium-5fpgc\" (UID: \"71dcf229-bf00-45ff-93e5-4dfc89137dab\") " pod="kube-system/cilium-5fpgc"
Mar 13 01:16:31.934829 kubelet[2884]: I0313 01:16:31.934456 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/71dcf229-bf00-45ff-93e5-4dfc89137dab-bpf-maps\") pod \"cilium-5fpgc\" (UID: \"71dcf229-bf00-45ff-93e5-4dfc89137dab\") " pod="kube-system/cilium-5fpgc"
Mar 13 01:16:31.934829 kubelet[2884]: I0313 01:16:31.934511 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8smcl\" (UniqueName: \"kubernetes.io/projected/71dcf229-bf00-45ff-93e5-4dfc89137dab-kube-api-access-8smcl\") pod \"cilium-5fpgc\" (UID: \"71dcf229-bf00-45ff-93e5-4dfc89137dab\") " pod="kube-system/cilium-5fpgc"
Mar 13 01:16:31.934829 kubelet[2884]: I0313 01:16:31.934581 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71dcf229-bf00-45ff-93e5-4dfc89137dab-xtables-lock\") pod \"cilium-5fpgc\" (UID: \"71dcf229-bf00-45ff-93e5-4dfc89137dab\") " pod="kube-system/cilium-5fpgc"
Mar 13 01:16:31.934829 kubelet[2884]: I0313 01:16:31.934624 2884 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71dcf229-bf00-45ff-93e5-4dfc89137dab-lib-modules\") pod \"cilium-5fpgc\" (UID: \"71dcf229-bf00-45ff-93e5-4dfc89137dab\") " pod="kube-system/cilium-5fpgc"
Mar 13 01:16:32.079611 containerd[1592]: time="2026-03-13T01:16:32.079489657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5fpgc,Uid:71dcf229-bf00-45ff-93e5-4dfc89137dab,Namespace:kube-system,Attempt:0,}"
Mar 13 01:16:32.103593 containerd[1592]: time="2026-03-13T01:16:32.103492904Z" level=info msg="connecting to shim ae9d6fc8fdda49026ab390404e8ec47aa9bf63009e8fedef9c29b6075d99dc42" address="unix:///run/containerd/s/9cdad0c9b106a2c5defdf630cc297f5ae07d850ae10b5596aed768f10ea50a98" namespace=k8s.io protocol=ttrpc version=3
Mar 13 01:16:32.136548 systemd[1]: Started cri-containerd-ae9d6fc8fdda49026ab390404e8ec47aa9bf63009e8fedef9c29b6075d99dc42.scope - libcontainer container ae9d6fc8fdda49026ab390404e8ec47aa9bf63009e8fedef9c29b6075d99dc42.
Mar 13 01:16:32.191561 containerd[1592]: time="2026-03-13T01:16:32.191512145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5fpgc,Uid:71dcf229-bf00-45ff-93e5-4dfc89137dab,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae9d6fc8fdda49026ab390404e8ec47aa9bf63009e8fedef9c29b6075d99dc42\""
Mar 13 01:16:32.199055 containerd[1592]: time="2026-03-13T01:16:32.199018252Z" level=info msg="CreateContainer within sandbox \"ae9d6fc8fdda49026ab390404e8ec47aa9bf63009e8fedef9c29b6075d99dc42\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 13 01:16:32.206203 containerd[1592]: time="2026-03-13T01:16:32.206151489Z" level=info msg="Container d067dd1aad1939e8bb5b02394933099e9009fb1e60f354223fc57e6273948127: CDI devices from CRI Config.CDIDevices: []"
Mar 13 01:16:32.211526 containerd[1592]: time="2026-03-13T01:16:32.211480763Z" level=info msg="CreateContainer within sandbox \"ae9d6fc8fdda49026ab390404e8ec47aa9bf63009e8fedef9c29b6075d99dc42\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d067dd1aad1939e8bb5b02394933099e9009fb1e60f354223fc57e6273948127\""
Mar 13 01:16:32.212091 containerd[1592]: time="2026-03-13T01:16:32.212048270Z" level=info msg="StartContainer for \"d067dd1aad1939e8bb5b02394933099e9009fb1e60f354223fc57e6273948127\""
Mar 13 01:16:32.214459 containerd[1592]: time="2026-03-13T01:16:32.214411251Z" level=info msg="connecting to shim d067dd1aad1939e8bb5b02394933099e9009fb1e60f354223fc57e6273948127" address="unix:///run/containerd/s/9cdad0c9b106a2c5defdf630cc297f5ae07d850ae10b5596aed768f10ea50a98" protocol=ttrpc version=3
Mar 13 01:16:32.246544 systemd[1]: Started cri-containerd-d067dd1aad1939e8bb5b02394933099e9009fb1e60f354223fc57e6273948127.scope - libcontainer container d067dd1aad1939e8bb5b02394933099e9009fb1e60f354223fc57e6273948127.
Mar 13 01:16:32.299809 containerd[1592]: time="2026-03-13T01:16:32.299755336Z" level=info msg="StartContainer for \"d067dd1aad1939e8bb5b02394933099e9009fb1e60f354223fc57e6273948127\" returns successfully"
Mar 13 01:16:32.315609 systemd[1]: cri-containerd-d067dd1aad1939e8bb5b02394933099e9009fb1e60f354223fc57e6273948127.scope: Deactivated successfully.
Mar 13 01:16:32.316010 systemd[1]: cri-containerd-d067dd1aad1939e8bb5b02394933099e9009fb1e60f354223fc57e6273948127.scope: Consumed 34ms CPU time, 9M memory peak, 2.6M read from disk.
Mar 13 01:16:32.319912 containerd[1592]: time="2026-03-13T01:16:32.319777846Z" level=info msg="received container exit event container_id:\"d067dd1aad1939e8bb5b02394933099e9009fb1e60f354223fc57e6273948127\" id:\"d067dd1aad1939e8bb5b02394933099e9009fb1e60f354223fc57e6273948127\" pid:4666 exited_at:{seconds:1773364592 nanos:318614950}"
Mar 13 01:16:32.521097 sshd[4601]: Accepted publickey for core from 20.161.92.111 port 36352 ssh2: RSA SHA256:hm429P+TX+Ex43UNU1B3y8+MqFp75xtA6UPwehKFPZY
Mar 13 01:16:32.522804 sshd-session[4601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 01:16:32.530824 systemd-logind[1572]: New session 29 of user core.
Mar 13 01:16:32.539818 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 13 01:16:32.790988 sshd[4698]: Connection closed by 20.161.92.111 port 36352
Mar 13 01:16:32.792093 sshd-session[4601]: pam_unix(sshd:session): session closed for user core
Mar 13 01:16:32.799871 systemd[1]: sshd@26-10.230.35.114:22-20.161.92.111:36352.service: Deactivated successfully.
Mar 13 01:16:32.803256 systemd[1]: session-29.scope: Deactivated successfully.
Mar 13 01:16:32.805566 systemd-logind[1572]: Session 29 logged out. Waiting for processes to exit.
Mar 13 01:16:32.808740 systemd-logind[1572]: Removed session 29.
Mar 13 01:16:32.891035 systemd[1]: Started sshd@27-10.230.35.114:22-20.161.92.111:36366.service - OpenSSH per-connection server daemon (20.161.92.111:36366).
Mar 13 01:16:33.130168 containerd[1592]: time="2026-03-13T01:16:33.130030914Z" level=info msg="CreateContainer within sandbox \"ae9d6fc8fdda49026ab390404e8ec47aa9bf63009e8fedef9c29b6075d99dc42\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 13 01:16:33.143669 containerd[1592]: time="2026-03-13T01:16:33.143510248Z" level=info msg="Container 0cfb321de8290cb54825f58f3881a94885afeb40fe9dc80ee2738efcb6d999e1: CDI devices from CRI Config.CDIDevices: []"
Mar 13 01:16:33.151363 containerd[1592]: time="2026-03-13T01:16:33.151318456Z" level=info msg="CreateContainer within sandbox \"ae9d6fc8fdda49026ab390404e8ec47aa9bf63009e8fedef9c29b6075d99dc42\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0cfb321de8290cb54825f58f3881a94885afeb40fe9dc80ee2738efcb6d999e1\""
Mar 13 01:16:33.152506 containerd[1592]: time="2026-03-13T01:16:33.152472602Z" level=info msg="StartContainer for \"0cfb321de8290cb54825f58f3881a94885afeb40fe9dc80ee2738efcb6d999e1\""
Mar 13 01:16:33.153807 containerd[1592]: time="2026-03-13T01:16:33.153768745Z" level=info msg="connecting to shim 0cfb321de8290cb54825f58f3881a94885afeb40fe9dc80ee2738efcb6d999e1" address="unix:///run/containerd/s/9cdad0c9b106a2c5defdf630cc297f5ae07d850ae10b5596aed768f10ea50a98" protocol=ttrpc version=3
Mar 13 01:16:33.189549 systemd[1]: Started cri-containerd-0cfb321de8290cb54825f58f3881a94885afeb40fe9dc80ee2738efcb6d999e1.scope - libcontainer container 0cfb321de8290cb54825f58f3881a94885afeb40fe9dc80ee2738efcb6d999e1.
Mar 13 01:16:33.236935 containerd[1592]: time="2026-03-13T01:16:33.236880337Z" level=info msg="StartContainer for \"0cfb321de8290cb54825f58f3881a94885afeb40fe9dc80ee2738efcb6d999e1\" returns successfully"
Mar 13 01:16:33.254175 systemd[1]: cri-containerd-0cfb321de8290cb54825f58f3881a94885afeb40fe9dc80ee2738efcb6d999e1.scope: Deactivated successfully.
Mar 13 01:16:33.254634 systemd[1]: cri-containerd-0cfb321de8290cb54825f58f3881a94885afeb40fe9dc80ee2738efcb6d999e1.scope: Consumed 30ms CPU time, 7.5M memory peak, 2.1M read from disk.
Mar 13 01:16:33.258839 containerd[1592]: time="2026-03-13T01:16:33.258697909Z" level=info msg="received container exit event container_id:\"0cfb321de8290cb54825f58f3881a94885afeb40fe9dc80ee2738efcb6d999e1\" id:\"0cfb321de8290cb54825f58f3881a94885afeb40fe9dc80ee2738efcb6d999e1\" pid:4721 exited_at:{seconds:1773364593 nanos:257777258}"
Mar 13 01:16:33.292597 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0cfb321de8290cb54825f58f3881a94885afeb40fe9dc80ee2738efcb6d999e1-rootfs.mount: Deactivated successfully.
Mar 13 01:16:33.394508 sshd[4705]: Accepted publickey for core from 20.161.92.111 port 36366 ssh2: RSA SHA256:hm429P+TX+Ex43UNU1B3y8+MqFp75xtA6UPwehKFPZY
Mar 13 01:16:33.396264 sshd-session[4705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 01:16:33.403487 systemd-logind[1572]: New session 30 of user core.
Mar 13 01:16:33.408490 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 13 01:16:34.136991 containerd[1592]: time="2026-03-13T01:16:34.136383584Z" level=info msg="CreateContainer within sandbox \"ae9d6fc8fdda49026ab390404e8ec47aa9bf63009e8fedef9c29b6075d99dc42\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 13 01:16:34.152524 containerd[1592]: time="2026-03-13T01:16:34.149954966Z" level=info msg="Container b57c4e3e927993e71d92996e793f736cbc2133b65873fd280226bba1fe6ffb38: CDI devices from CRI Config.CDIDevices: []"
Mar 13 01:16:34.167922 containerd[1592]: time="2026-03-13T01:16:34.167858506Z" level=info msg="CreateContainer within sandbox \"ae9d6fc8fdda49026ab390404e8ec47aa9bf63009e8fedef9c29b6075d99dc42\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b57c4e3e927993e71d92996e793f736cbc2133b65873fd280226bba1fe6ffb38\""
Mar 13 01:16:34.169503 containerd[1592]: time="2026-03-13T01:16:34.169469915Z" level=info msg="StartContainer for \"b57c4e3e927993e71d92996e793f736cbc2133b65873fd280226bba1fe6ffb38\""
Mar 13 01:16:34.172389 containerd[1592]: time="2026-03-13T01:16:34.172344025Z" level=info msg="connecting to shim b57c4e3e927993e71d92996e793f736cbc2133b65873fd280226bba1fe6ffb38" address="unix:///run/containerd/s/9cdad0c9b106a2c5defdf630cc297f5ae07d850ae10b5596aed768f10ea50a98" protocol=ttrpc version=3
Mar 13 01:16:34.208591 systemd[1]: Started cri-containerd-b57c4e3e927993e71d92996e793f736cbc2133b65873fd280226bba1fe6ffb38.scope - libcontainer container b57c4e3e927993e71d92996e793f736cbc2133b65873fd280226bba1fe6ffb38.
Mar 13 01:16:34.325481 containerd[1592]: time="2026-03-13T01:16:34.325424611Z" level=info msg="StartContainer for \"b57c4e3e927993e71d92996e793f736cbc2133b65873fd280226bba1fe6ffb38\" returns successfully"
Mar 13 01:16:34.332833 systemd[1]: cri-containerd-b57c4e3e927993e71d92996e793f736cbc2133b65873fd280226bba1fe6ffb38.scope: Deactivated successfully.
Mar 13 01:16:34.336648 containerd[1592]: time="2026-03-13T01:16:34.336462622Z" level=info msg="received container exit event container_id:\"b57c4e3e927993e71d92996e793f736cbc2133b65873fd280226bba1fe6ffb38\" id:\"b57c4e3e927993e71d92996e793f736cbc2133b65873fd280226bba1fe6ffb38\" pid:4772 exited_at:{seconds:1773364594 nanos:335979066}"
Mar 13 01:16:34.366162 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b57c4e3e927993e71d92996e793f736cbc2133b65873fd280226bba1fe6ffb38-rootfs.mount: Deactivated successfully.
Mar 13 01:16:35.143121 containerd[1592]: time="2026-03-13T01:16:35.142464081Z" level=info msg="CreateContainer within sandbox \"ae9d6fc8fdda49026ab390404e8ec47aa9bf63009e8fedef9c29b6075d99dc42\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 13 01:16:35.154078 containerd[1592]: time="2026-03-13T01:16:35.153078698Z" level=info msg="Container 405d0b3eec6de12c0bc68208f2ae7122e6574c57318fd8a071715e6a6d414120: CDI devices from CRI Config.CDIDevices: []"
Mar 13 01:16:35.166529 containerd[1592]: time="2026-03-13T01:16:35.166493600Z" level=info msg="CreateContainer within sandbox \"ae9d6fc8fdda49026ab390404e8ec47aa9bf63009e8fedef9c29b6075d99dc42\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"405d0b3eec6de12c0bc68208f2ae7122e6574c57318fd8a071715e6a6d414120\""
Mar 13 01:16:35.168281 containerd[1592]: time="2026-03-13T01:16:35.168214991Z" level=info msg="StartContainer for \"405d0b3eec6de12c0bc68208f2ae7122e6574c57318fd8a071715e6a6d414120\""
Mar 13 01:16:35.169762 containerd[1592]: time="2026-03-13T01:16:35.169710137Z" level=info msg="connecting to shim 405d0b3eec6de12c0bc68208f2ae7122e6574c57318fd8a071715e6a6d414120" address="unix:///run/containerd/s/9cdad0c9b106a2c5defdf630cc297f5ae07d850ae10b5596aed768f10ea50a98" protocol=ttrpc version=3
Mar 13 01:16:35.209494 systemd[1]: Started cri-containerd-405d0b3eec6de12c0bc68208f2ae7122e6574c57318fd8a071715e6a6d414120.scope - libcontainer container 405d0b3eec6de12c0bc68208f2ae7122e6574c57318fd8a071715e6a6d414120.
Mar 13 01:16:35.255213 systemd[1]: cri-containerd-405d0b3eec6de12c0bc68208f2ae7122e6574c57318fd8a071715e6a6d414120.scope: Deactivated successfully.
Mar 13 01:16:35.258290 containerd[1592]: time="2026-03-13T01:16:35.257978446Z" level=info msg="received container exit event container_id:\"405d0b3eec6de12c0bc68208f2ae7122e6574c57318fd8a071715e6a6d414120\" id:\"405d0b3eec6de12c0bc68208f2ae7122e6574c57318fd8a071715e6a6d414120\" pid:4810 exited_at:{seconds:1773364595 nanos:257255334}"
Mar 13 01:16:35.260811 containerd[1592]: time="2026-03-13T01:16:35.260696542Z" level=info msg="StartContainer for \"405d0b3eec6de12c0bc68208f2ae7122e6574c57318fd8a071715e6a6d414120\" returns successfully"
Mar 13 01:16:35.292465 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-405d0b3eec6de12c0bc68208f2ae7122e6574c57318fd8a071715e6a6d414120-rootfs.mount: Deactivated successfully.
Mar 13 01:16:36.151473 containerd[1592]: time="2026-03-13T01:16:36.151386466Z" level=info msg="CreateContainer within sandbox \"ae9d6fc8fdda49026ab390404e8ec47aa9bf63009e8fedef9c29b6075d99dc42\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 13 01:16:36.173321 containerd[1592]: time="2026-03-13T01:16:36.173118729Z" level=info msg="Container b591b61abd24c8bdb32644c152f591caf52ba094173037f58504fe5e4aed1cb1: CDI devices from CRI Config.CDIDevices: []"
Mar 13 01:16:36.185109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2532982151.mount: Deactivated successfully.
Mar 13 01:16:36.194947 containerd[1592]: time="2026-03-13T01:16:36.194898327Z" level=info msg="CreateContainer within sandbox \"ae9d6fc8fdda49026ab390404e8ec47aa9bf63009e8fedef9c29b6075d99dc42\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b591b61abd24c8bdb32644c152f591caf52ba094173037f58504fe5e4aed1cb1\""
Mar 13 01:16:36.197073 containerd[1592]: time="2026-03-13T01:16:36.197012776Z" level=info msg="StartContainer for \"b591b61abd24c8bdb32644c152f591caf52ba094173037f58504fe5e4aed1cb1\""
Mar 13 01:16:36.199613 containerd[1592]: time="2026-03-13T01:16:36.199566494Z" level=info msg="connecting to shim b591b61abd24c8bdb32644c152f591caf52ba094173037f58504fe5e4aed1cb1" address="unix:///run/containerd/s/9cdad0c9b106a2c5defdf630cc297f5ae07d850ae10b5596aed768f10ea50a98" protocol=ttrpc version=3
Mar 13 01:16:36.236468 systemd[1]: Started cri-containerd-b591b61abd24c8bdb32644c152f591caf52ba094173037f58504fe5e4aed1cb1.scope - libcontainer container b591b61abd24c8bdb32644c152f591caf52ba094173037f58504fe5e4aed1cb1.
Mar 13 01:16:36.313481 containerd[1592]: time="2026-03-13T01:16:36.313382503Z" level=info msg="StartContainer for \"b591b61abd24c8bdb32644c152f591caf52ba094173037f58504fe5e4aed1cb1\" returns successfully"
Mar 13 01:16:37.094294 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx))
Mar 13 01:16:40.903053 systemd-networkd[1493]: lxc_health: Link UP
Mar 13 01:16:40.913370 systemd-networkd[1493]: lxc_health: Gained carrier
Mar 13 01:16:42.043383 systemd-networkd[1493]: lxc_health: Gained IPv6LL
Mar 13 01:16:42.108464 kubelet[2884]: I0313 01:16:42.108143 2884 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5fpgc" podStartSLOduration=11.108102452 podStartE2EDuration="11.108102452s" podCreationTimestamp="2026-03-13 01:16:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:16:37.182016431 +0000 UTC m=+140.872747826" watchObservedRunningTime="2026-03-13 01:16:42.108102452 +0000 UTC m=+145.798833817"
Mar 13 01:16:47.271338 sshd[4752]: Connection closed by 20.161.92.111 port 36366
Mar 13 01:16:47.270958 sshd-session[4705]: pam_unix(sshd:session): session closed for user core
Mar 13 01:16:47.278328 systemd[1]: sshd@27-10.230.35.114:22-20.161.92.111:36366.service: Deactivated successfully.
Mar 13 01:16:47.282374 systemd[1]: session-30.scope: Deactivated successfully.
Mar 13 01:16:47.284816 systemd-logind[1572]: Session 30 logged out. Waiting for processes to exit.
Mar 13 01:16:47.287885 systemd-logind[1572]: Removed session 30.